Dec 03 00:19:24 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 03 00:19:24 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 03 00:19:24 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 03 00:19:24 localhost kernel: BIOS-provided physical RAM map:
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 03 00:19:24 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 03 00:19:24 localhost kernel: NX (Execute Disable) protection: active
Dec 03 00:19:24 localhost kernel: APIC: Static calls initialized
Dec 03 00:19:24 localhost kernel: SMBIOS 2.8 present.
Dec 03 00:19:24 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 03 00:19:24 localhost kernel: Hypervisor detected: KVM
Dec 03 00:19:24 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 03 00:19:24 localhost kernel: kvm-clock: using sched offset of 4773481280 cycles
Dec 03 00:19:24 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 03 00:19:24 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 03 00:19:24 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 03 00:19:24 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 03 00:19:24 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 03 00:19:24 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 03 00:19:24 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 03 00:19:24 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 03 00:19:24 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 03 00:19:24 localhost kernel: Using GB pages for direct mapping
Dec 03 00:19:24 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 03 00:19:24 localhost kernel: ACPI: Early table checksum verification disabled
Dec 03 00:19:24 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 03 00:19:24 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 03 00:19:24 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 03 00:19:24 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 03 00:19:24 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 03 00:19:24 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 03 00:19:24 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 03 00:19:24 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 03 00:19:24 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 03 00:19:24 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 03 00:19:24 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 03 00:19:24 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 03 00:19:24 localhost kernel: No NUMA configuration found
Dec 03 00:19:24 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 03 00:19:24 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec 03 00:19:24 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
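
The 256 MB reserved here is the expected outcome of the ranged crashkernel= syntax on the command line above: with roughly 8 GiB of system RAM, the 2G-64G:256M bucket applies. A minimal sketch of that selection logic in Python (a simplified reimplementation of the documented semantics, not the kernel's actual parser; only K/M/G suffixes are handled):

    def _size(tok):
        # Parse '256M'-style sizes into bytes; bare numbers are bytes.
        units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
        return int(tok[:-1]) * units[tok[-1]] if tok[-1] in units else int(tok)

    def crashkernel(spec, system_ram):
        # 'start-end:size,...' -- reserve 'size' for the first range that
        # contains system_ram; an empty end means no upper bound.
        for entry in spec.split(","):
            rng, size = entry.split(":")
            lo, hi = rng.split("-")
            if _size(lo) <= system_ram < (_size(hi) if hi else float("inf")):
                return _size(size)
        return 0

    # ~8 GiB falls in the 2G-64G bucket -> 256 MiB, matching the log line.
    print(crashkernel("1G-2G:192M,2G-64G:256M,64G-:512M", 8 << 30) >> 20, "MiB")
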
Dec 03 00:19:24 localhost kernel: Zone ranges:
Dec 03 00:19:24 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 03 00:19:24 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 03 00:19:24 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 03 00:19:24 localhost kernel:   Device   empty
Dec 03 00:19:24 localhost kernel: Movable zone start for each node
Dec 03 00:19:24 localhost kernel: Early memory node ranges
Dec 03 00:19:24 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 03 00:19:24 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 03 00:19:24 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 03 00:19:24 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 03 00:19:24 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 03 00:19:24 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 03 00:19:24 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
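
The three node ranges above can be cross-checked against the "Memory: ... available" line printed after SMP bring-up: summing their page counts gives 2,097,017 4-KiB pages, i.e. exactly the 8388068K denominator reported there. A quick verification:

    # Early memory node ranges from the log, as inclusive (start, end) pairs.
    ranges = [
        (0x0000000000001000, 0x000000000009efff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    PAGE = 4096
    pages = sum((end + 1 - start) // PAGE for start, end in ranges)
    print(pages, "pages =", pages * PAGE // 1024, "KiB")
    # -> 2097017 pages = 8388068 KiB, matching "Memory: .../8388068K available"
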
Dec 03 00:19:24 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 03 00:19:24 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 03 00:19:24 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 03 00:19:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 03 00:19:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 03 00:19:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 03 00:19:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 03 00:19:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 03 00:19:24 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 03 00:19:24 localhost kernel: TSC deadline timer available
Dec 03 00:19:24 localhost kernel: CPU topo: Max. logical packages:   8
Dec 03 00:19:24 localhost kernel: CPU topo: Max. logical dies:       8
Dec 03 00:19:24 localhost kernel: CPU topo: Max. dies per package:   1
Dec 03 00:19:24 localhost kernel: CPU topo: Max. threads per core:   1
Dec 03 00:19:24 localhost kernel: CPU topo: Num. cores per package:     1
Dec 03 00:19:24 localhost kernel: CPU topo: Num. threads per package:   1
Dec 03 00:19:24 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 03 00:19:24 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 03 00:19:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 03 00:19:24 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 03 00:19:24 localhost kernel: Booting paravirtualized kernel on KVM
Dec 03 00:19:24 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 03 00:19:24 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 03 00:19:24 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 03 00:19:24 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 03 00:19:24 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
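
The percpu figures above are internally consistent: the static (s), reserved (r) and dynamic (d) areas sum to the per-CPU unit size (u), the "Embedded 64 pages/cpu" is the same 256 KiB expressed in 4-KiB pages, and eight such units fill the single 2 MiB chunk from alloc=1*2097152:

    s, r, d, u = 225280, 8192, 28672, 262144   # from the pcpu-alloc line
    assert s + r + d == u                      # areas fill the unit exactly
    assert 64 * 4096 == u                      # "Embedded 64 pages/cpu"
    assert 8 * u == 2097152                    # alloc=1*2097152: one 2 MiB chunk
    print(u >> 10, "KiB per CPU")              # 256 KiB
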
Dec 03 00:19:24 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 03 00:19:24 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 03 00:19:24 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
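
This warning is benign: parameters the kernel itself does not consume are handed through to user space, which is why BOOT_IMAGE=... reappears in init's environment at the end of the kernel section below. A rough sketch of splitting a command line the same way (whitespace-separated key=value tokens; a simplification, since the real parser also honors quoting):

    cmdline = ("BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 "
               "root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro "
               "console=ttyS0,115200n8 no_timer_check net.ifnames=0")
    params = dict(tok.split("=", 1) if "=" in tok else (tok, True)
                  for tok in cmdline.split())
    print(params["console"])         # ttyS0,115200n8
    print(params["no_timer_check"])  # True (a bare flag without a value)
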
Dec 03 00:19:24 localhost kernel: random: crng init done
Dec 03 00:19:24 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 03 00:19:24 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 03 00:19:24 localhost kernel: Fallback order for Node 0: 0 
Dec 03 00:19:24 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 03 00:19:24 localhost kernel: Policy zone: Normal
Dec 03 00:19:24 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 03 00:19:24 localhost kernel: software IO TLB: area num 8.
Dec 03 00:19:24 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 03 00:19:24 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 03 00:19:24 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 03 00:19:24 localhost kernel: Dynamic Preempt: voluntary
Dec 03 00:19:24 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 03 00:19:24 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 03 00:19:24 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 03 00:19:24 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 03 00:19:24 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 03 00:19:24 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 03 00:19:24 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 03 00:19:24 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 03 00:19:24 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 03 00:19:24 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 03 00:19:24 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 03 00:19:24 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 03 00:19:24 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 03 00:19:24 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
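
The KFENCE numbers line up with the upstream pool layout, where each sampled object gets a dedicated data page followed by a guard page, plus one extra guard pair for the pool boundaries: 2 * (255 + 1) pages of 4 KiB is the 2,097,152 bytes reported (layout per my reading of mm/kfence; treat as an assumption):

    objects, page = 255, 4096
    pool = 2 * (objects + 1) * page   # data+guard page per object, +1 guard pair
    assert pool == 2097152
    print(pool, "bytes for", objects, "objects")
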
Dec 03 00:19:24 localhost kernel: Console: colour VGA+ 80x25
Dec 03 00:19:24 localhost kernel: printk: console [ttyS0] enabled
Dec 03 00:19:24 localhost kernel: ACPI: Core revision 20230331
Dec 03 00:19:24 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 03 00:19:24 localhost kernel: x2apic enabled
Dec 03 00:19:24 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 03 00:19:24 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 03 00:19:24 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
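
Under KVM the delay-loop calibration is skipped and BogoMIPS comes straight from the preset loops-per-jiffy value: with lpj=2800000 and HZ=1000 (assumed here; the RHEL 9 default), the kernel's reporting formula yields the 5600.00 per-CPU figure, and times 8 CPUs the 44800.00 total printed after SMP bring-up:

    lpj, HZ = 2_800_000, 1000        # HZ=1000 is an assumption (RHEL 9 default)
    bogomips = lpj * HZ / 500_000    # lpj / (500000 / HZ), as the kernel prints it
    print(bogomips)                  # 5600.0 -- exactly 2x the 2800.000 MHz TSC
    print(8 * bogomips)              # 44800.0 across all 8 CPUs
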
Dec 03 00:19:24 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 03 00:19:24 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 03 00:19:24 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 03 00:19:24 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 03 00:19:24 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 03 00:19:24 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 03 00:19:24 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 03 00:19:24 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 03 00:19:24 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 03 00:19:24 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 03 00:19:24 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 03 00:19:24 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 03 00:19:24 localhost kernel: x86/bugs: return thunk changed
Dec 03 00:19:24 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 03 00:19:24 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 03 00:19:24 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 03 00:19:24 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 03 00:19:24 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 03 00:19:24 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 03 00:19:24 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 03 00:19:24 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 03 00:19:24 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 03 00:19:24 localhost kernel: landlock: Up and running.
Dec 03 00:19:24 localhost kernel: Yama: becoming mindful.
Dec 03 00:19:24 localhost kernel: SELinux:  Initializing.
Dec 03 00:19:24 localhost kernel: LSM support for eBPF active
Dec 03 00:19:24 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 03 00:19:24 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 03 00:19:24 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 03 00:19:24 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 03 00:19:24 localhost kernel: ... version:                0
Dec 03 00:19:24 localhost kernel: ... bit width:              48
Dec 03 00:19:24 localhost kernel: ... generic registers:      6
Dec 03 00:19:24 localhost kernel: ... value mask:             0000ffffffffffff
Dec 03 00:19:24 localhost kernel: ... max period:             00007fffffffffff
Dec 03 00:19:24 localhost kernel: ... fixed-purpose events:   0
Dec 03 00:19:24 localhost kernel: ... event mask:             000000000000003f
Dec 03 00:19:24 localhost kernel: signal: max sigframe size: 1776
Dec 03 00:19:24 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 03 00:19:24 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 03 00:19:24 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 03 00:19:24 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 03 00:19:24 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 03 00:19:24 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 03 00:19:24 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 03 00:19:24 localhost kernel: node 0 deferred pages initialised in 248ms
Dec 03 00:19:24 localhost kernel: Memory: 7763848K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618212K reserved, 0K cma-reserved)
Dec 03 00:19:24 localhost kernel: devtmpfs: initialized
Dec 03 00:19:24 localhost kernel: x86/mm: Memory block size: 128MB
Dec 03 00:19:24 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 03 00:19:24 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 03 00:19:24 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 03 00:19:24 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 03 00:19:24 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 03 00:19:24 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 03 00:19:24 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 03 00:19:24 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 03 00:19:24 localhost kernel: audit: type=2000 audit(1764721160.598:1): state=initialized audit_enabled=0 res=1
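
The value inside audit(...) is a Unix epoch timestamp with millisecond precision, followed by a record serial. 1764721160.598 decodes to 2025-12-03 00:19:20 UTC, a few seconds before the rtc_cmos line further down pairs 1764721163 with 2025-12-03T00:19:23 explicitly:

    from datetime import datetime, timezone

    stamp = "audit(1764721160.598:1)"
    epoch, serial = stamp[len("audit("):-1].split(":")
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(when.isoformat(), "serial", serial)
    # -> 2025-12-03T00:19:20.598000+00:00 serial 1
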
Dec 03 00:19:24 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 03 00:19:24 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 03 00:19:24 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 03 00:19:24 localhost kernel: cpuidle: using governor menu
Dec 03 00:19:24 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 03 00:19:24 localhost kernel: PCI: Using configuration type 1 for base access
Dec 03 00:19:24 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 03 00:19:24 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 03 00:19:24 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 03 00:19:24 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 03 00:19:24 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 03 00:19:24 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
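
The vmemmap savings above follow from 64-byte struct page entries (the usual x86-64 size; an assumption here): a 2 MiB huge page spans 512 base pages, so its vmemmap is 32 KiB, of which all but one 4-KiB page can be freed (28 KiB); a 1 GiB page spans 262,144 base pages for a 16 MiB vmemmap, freeing 16,380 KiB:

    STRUCT_PAGE, BASE = 64, 4096                 # 64-byte struct page assumed
    for huge in (2 << 20, 1 << 30):
        vmemmap = (huge // BASE) * STRUCT_PAGE   # metadata for one huge page
        freed = vmemmap - BASE                   # optimization keeps one page
        print(f"{huge >> 20} MiB page: {freed >> 10} KiB vmemmap freed")
    # -> 2 MiB page: 28 KiB ... 1024 MiB page: 16380 KiB
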
Dec 03 00:19:24 localhost kernel: Demotion targets for Node 0: null
Dec 03 00:19:24 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 03 00:19:24 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 03 00:19:24 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 03 00:19:24 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 03 00:19:24 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 03 00:19:24 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 03 00:19:24 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 03 00:19:24 localhost kernel: ACPI: Interpreter enabled
Dec 03 00:19:24 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 03 00:19:24 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 03 00:19:24 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 03 00:19:24 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 03 00:19:24 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 03 00:19:24 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 03 00:19:24 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [3] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [4] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [5] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [6] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [7] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [8] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [9] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [10] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [11] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [12] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [13] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [14] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [15] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [16] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [17] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [18] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [19] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [20] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [21] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [22] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [23] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [24] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [25] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [26] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [27] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [28] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [29] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [30] registered
Dec 03 00:19:24 localhost kernel: acpiphp: Slot [31] registered
Dec 03 00:19:24 localhost kernel: PCI host bridge to bus 0000:00
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 03 00:19:24 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 03 00:19:24 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 03 00:19:24 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 03 00:19:24 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 03 00:19:24 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 03 00:19:24 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 03 00:19:24 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 03 00:19:24 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 03 00:19:24 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 03 00:19:24 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 03 00:19:24 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 03 00:19:24 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 03 00:19:24 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
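
Vendor 0x1af4 on slots 02-06 is Red Hat's virtio ID space, so this Nova guest's paravirtual devices can be read straight off the enumeration above. A lookup table for the device IDs seen here (descriptions per the virtio PCI ID convention, to the best of my knowledge):

    VIRTIO_PCI = {                 # vendor 0x1af4 = Red Hat (virtio)
        0x1050: "virtio-gpu     (00:02.0, the boot VGA device)",
        0x1000: "virtio-net     (00:03.0)",
        0x1001: "virtio-blk     (00:04.0 -> vda later in this log)",
        0x1002: "virtio-balloon (00:05.0)",
        0x1005: "virtio-rng     (00:06.0)",
    }
    for dev_id, name in sorted(VIRTIO_PCI.items()):
        print(f"1af4:{dev_id:04x}  {name}")
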
Dec 03 00:19:24 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 03 00:19:24 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 03 00:19:24 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 03 00:19:24 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 03 00:19:24 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 03 00:19:24 localhost kernel: iommu: Default domain type: Translated
Dec 03 00:19:24 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 03 00:19:24 localhost kernel: SCSI subsystem initialized
Dec 03 00:19:24 localhost kernel: ACPI: bus type USB registered
Dec 03 00:19:24 localhost kernel: usbcore: registered new interface driver usbfs
Dec 03 00:19:24 localhost kernel: usbcore: registered new interface driver hub
Dec 03 00:19:24 localhost kernel: usbcore: registered new device driver usb
Dec 03 00:19:24 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 03 00:19:24 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 03 00:19:24 localhost kernel: PTP clock support registered
Dec 03 00:19:24 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 03 00:19:24 localhost kernel: NetLabel: Initializing
Dec 03 00:19:24 localhost kernel: NetLabel:  domain hash size = 128
Dec 03 00:19:24 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 03 00:19:24 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 03 00:19:24 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 03 00:19:24 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 03 00:19:24 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 03 00:19:24 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 03 00:19:24 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 03 00:19:24 localhost kernel: vgaarb: loaded
Dec 03 00:19:24 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 03 00:19:24 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 03 00:19:24 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 03 00:19:24 localhost kernel: pnp: PnP ACPI init
Dec 03 00:19:24 localhost kernel: pnp 00:03: [dma 2]
Dec 03 00:19:24 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 03 00:19:24 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 03 00:19:24 localhost kernel: NET: Registered PF_INET protocol family
Dec 03 00:19:24 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 03 00:19:24 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 03 00:19:24 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 03 00:19:24 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 03 00:19:24 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 03 00:19:24 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 03 00:19:24 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 03 00:19:24 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 03 00:19:24 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
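
In all of these hash-table lines, "order: n" means 2^n contiguous 4-KiB pages, which is where the byte counts come from; dividing bytes by entries then gives the bucket size. Checking a few tables from this log:

    PAGE = 4096
    tables = {                       # name: (entries, order, bytes) from the log
        "Dentry cache":    (1048576, 11, 8388608),
        "Inode-cache":     (524288,  10, 4194304),
        "TCP established": (65536,    7,  524288),
        "TCP bind":        (65536,    8, 1048576),
        "UDP":             (4096,     5,  131072),
    }
    for name, (entries, order, nbytes) in tables.items():
        assert (1 << order) * PAGE == nbytes        # order n = 2^n pages
        print(f"{name}: {nbytes // entries} bytes per bucket")
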
Dec 03 00:19:24 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 03 00:19:24 localhost kernel: NET: Registered PF_XDP protocol family
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 03 00:19:24 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 03 00:19:24 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 03 00:19:24 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 03 00:19:24 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 96761 usecs
Dec 03 00:19:24 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 03 00:19:24 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 03 00:19:24 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 03 00:19:24 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 03 00:19:24 localhost kernel: ACPI: bus type thunderbolt registered
Dec 03 00:19:24 localhost kernel: Initialise system trusted keyrings
Dec 03 00:19:24 localhost kernel: Key type blacklist registered
Dec 03 00:19:24 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 03 00:19:24 localhost kernel: zbud: loaded
Dec 03 00:19:24 localhost kernel: integrity: Platform Keyring initialized
Dec 03 00:19:24 localhost kernel: integrity: Machine keyring initialized
Dec 03 00:19:24 localhost kernel: Freeing initrd memory: 87804K
Dec 03 00:19:24 localhost kernel: NET: Registered PF_ALG protocol family
Dec 03 00:19:24 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 03 00:19:24 localhost kernel: Key type asymmetric registered
Dec 03 00:19:24 localhost kernel: Asymmetric key parser 'x509' registered
Dec 03 00:19:24 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 03 00:19:24 localhost kernel: io scheduler mq-deadline registered
Dec 03 00:19:24 localhost kernel: io scheduler kyber registered
Dec 03 00:19:24 localhost kernel: io scheduler bfq registered
Dec 03 00:19:24 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 03 00:19:24 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 03 00:19:24 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 03 00:19:24 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 03 00:19:24 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 03 00:19:24 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 03 00:19:24 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 03 00:19:24 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 03 00:19:24 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 03 00:19:24 localhost kernel: Non-volatile memory driver v1.3
Dec 03 00:19:24 localhost kernel: rdac: device handler registered
Dec 03 00:19:24 localhost kernel: hp_sw: device handler registered
Dec 03 00:19:24 localhost kernel: emc: device handler registered
Dec 03 00:19:24 localhost kernel: alua: device handler registered
Dec 03 00:19:24 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 03 00:19:24 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 03 00:19:24 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 03 00:19:24 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 03 00:19:24 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 03 00:19:24 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 03 00:19:24 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 03 00:19:24 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 03 00:19:24 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 03 00:19:24 localhost kernel: hub 1-0:1.0: USB hub found
Dec 03 00:19:24 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 03 00:19:24 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 03 00:19:24 localhost kernel: usbserial: USB Serial support registered for generic
Dec 03 00:19:24 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 03 00:19:24 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 03 00:19:24 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 03 00:19:24 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 03 00:19:24 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 03 00:19:24 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 03 00:19:24 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 03 00:19:24 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-03T00:19:23 UTC (1764721163)
Dec 03 00:19:24 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 03 00:19:24 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 03 00:19:24 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 03 00:19:24 localhost kernel: usbcore: registered new interface driver usbhid
Dec 03 00:19:24 localhost kernel: usbhid: USB HID core driver
Dec 03 00:19:24 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 03 00:19:24 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 03 00:19:24 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 03 00:19:24 localhost kernel: Initializing XFRM netlink socket
Dec 03 00:19:24 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 03 00:19:24 localhost kernel: Segment Routing with IPv6
Dec 03 00:19:24 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 03 00:19:24 localhost kernel: mpls_gso: MPLS GSO support
Dec 03 00:19:24 localhost kernel: IPI shorthand broadcast: enabled
Dec 03 00:19:24 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 03 00:19:24 localhost kernel: AES CTR mode by8 optimization enabled
Dec 03 00:19:24 localhost kernel: sched_clock: Marking stable (3248008139, 140160430)->(3532029959, -143861390)
Dec 03 00:19:24 localhost kernel: registered taskstats version 1
Dec 03 00:19:24 localhost kernel: Loading compiled-in X.509 certificates
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 03 00:19:24 localhost kernel: Demotion targets for Node 0: null
Dec 03 00:19:24 localhost kernel: page_owner is disabled
Dec 03 00:19:24 localhost kernel: Key type .fscrypt registered
Dec 03 00:19:24 localhost kernel: Key type fscrypt-provisioning registered
Dec 03 00:19:24 localhost kernel: Key type big_key registered
Dec 03 00:19:24 localhost kernel: Key type encrypted registered
Dec 03 00:19:24 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 03 00:19:24 localhost kernel: Loading compiled-in module X.509 certificates
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 03 00:19:24 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 03 00:19:24 localhost kernel: ima: No architecture policies found
Dec 03 00:19:24 localhost kernel: evm: Initialising EVM extended attributes:
Dec 03 00:19:24 localhost kernel: evm: security.selinux
Dec 03 00:19:24 localhost kernel: evm: security.SMACK64 (disabled)
Dec 03 00:19:24 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 03 00:19:24 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 03 00:19:24 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 03 00:19:24 localhost kernel: evm: security.apparmor (disabled)
Dec 03 00:19:24 localhost kernel: evm: security.ima
Dec 03 00:19:24 localhost kernel: evm: security.capability
Dec 03 00:19:24 localhost kernel: evm: HMAC attrs: 0x1
Dec 03 00:19:24 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 03 00:19:24 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 03 00:19:24 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 03 00:19:24 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 03 00:19:24 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 03 00:19:24 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 03 00:19:24 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 03 00:19:24 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 03 00:19:24 localhost kernel: Running certificate verification RSA selftest
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 03 00:19:24 localhost kernel: Running certificate verification ECDSA selftest
Dec 03 00:19:24 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 03 00:19:24 localhost kernel: clk: Disabling unused clocks
Dec 03 00:19:24 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 03 00:19:24 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 03 00:19:24 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 03 00:19:24 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 03 00:19:24 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 03 00:19:24 localhost kernel: Run /init as init process
Dec 03 00:19:24 localhost kernel:   with arguments:
Dec 03 00:19:24 localhost kernel:     /init
Dec 03 00:19:24 localhost kernel:   with environment:
Dec 03 00:19:24 localhost kernel:     HOME=/
Dec 03 00:19:24 localhost kernel:     TERM=linux
Dec 03 00:19:24 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 03 00:19:24 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
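
The long +/- string systemd prints at startup is its compile-time feature list: '+' means built in, '-' means compiled out (note -APPARMOR and -BPF_FRAMEWORK on this RHEL 9 build). It splits mechanically; an abridged sketch:

    flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
             "-PWQUALITY +P11KIT -QRENCODE +TPM2 -BPF_FRAMEWORK").split()
    enabled  = [f[1:] for f in flags if f.startswith("+")]
    disabled = [f[1:] for f in flags if f.startswith("-")]
    print("enabled: ", enabled)
    print("disabled:", disabled)   # APPARMOR, PWQUALITY, QRENCODE, BPF_FRAMEWORK
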
Dec 03 00:19:24 localhost systemd[1]: Detected virtualization kvm.
Dec 03 00:19:24 localhost systemd[1]: Detected architecture x86-64.
Dec 03 00:19:24 localhost systemd[1]: Running in initrd.
Dec 03 00:19:24 localhost systemd[1]: No hostname configured, using default hostname.
Dec 03 00:19:24 localhost systemd[1]: Hostname set to <localhost>.
Dec 03 00:19:24 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 03 00:19:24 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 03 00:19:24 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 03 00:19:24 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 03 00:19:24 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 03 00:19:24 localhost systemd[1]: Reached target Local File Systems.
Dec 03 00:19:24 localhost systemd[1]: Reached target Path Units.
Dec 03 00:19:24 localhost systemd[1]: Reached target Slice Units.
Dec 03 00:19:24 localhost systemd[1]: Reached target Swaps.
Dec 03 00:19:24 localhost systemd[1]: Reached target Timer Units.
Dec 03 00:19:24 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 03 00:19:24 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 03 00:19:24 localhost systemd[1]: Listening on Journal Socket.
Dec 03 00:19:24 localhost systemd[1]: Listening on udev Control Socket.
Dec 03 00:19:24 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 03 00:19:24 localhost systemd[1]: Reached target Socket Units.
Dec 03 00:19:24 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 03 00:19:24 localhost systemd[1]: Starting Journal Service...
Dec 03 00:19:24 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 03 00:19:24 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 03 00:19:24 localhost systemd[1]: Starting Create System Users...
Dec 03 00:19:24 localhost systemd[1]: Starting Setup Virtual Console...
Dec 03 00:19:24 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 03 00:19:24 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 03 00:19:24 localhost systemd-journald[306]: Journal started
Dec 03 00:19:24 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/bb85f21b9f67464f8fbee50d4e1e7eb4) is 8.0M, max 153.6M, 145.6M free.
Dec 03 00:19:24 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Dec 03 00:19:24 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Dec 03 00:19:24 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 03 00:19:24 localhost systemd[1]: Started Journal Service.
Dec 03 00:19:24 localhost systemd[1]: Finished Create System Users.
Dec 03 00:19:24 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 03 00:19:24 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 03 00:19:24 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 03 00:19:24 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 03 00:19:24 localhost systemd[1]: Finished Setup Virtual Console.
Dec 03 00:19:24 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 03 00:19:24 localhost systemd[1]: Starting dracut cmdline hook...
Dec 03 00:19:24 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec 03 00:19:24 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 03 00:19:24 localhost systemd[1]: Finished dracut cmdline hook.
Dec 03 00:19:24 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 03 00:19:24 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 03 00:19:24 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 03 00:19:24 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 03 00:19:25 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 03 00:19:25 localhost kernel: RPC: Registered udp transport module.
Dec 03 00:19:25 localhost kernel: RPC: Registered tcp transport module.
Dec 03 00:19:25 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 03 00:19:25 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 03 00:19:25 localhost rpc.statd[445]: Version 2.5.4 starting
Dec 03 00:19:25 localhost rpc.statd[445]: Initializing NSM state
Dec 03 00:19:25 localhost rpc.idmapd[450]: Setting log level to 0
Dec 03 00:19:25 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 03 00:19:25 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 03 00:19:25 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Dec 03 00:19:25 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 03 00:19:25 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 03 00:19:25 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 03 00:19:25 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 03 00:19:25 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 03 00:19:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 03 00:19:25 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 03 00:19:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 03 00:19:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 03 00:19:25 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 03 00:19:25 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 03 00:19:25 localhost systemd[1]: Reached target Network.
Dec 03 00:19:25 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 03 00:19:25 localhost systemd[1]: Starting dracut initqueue hook...
Dec 03 00:19:25 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 03 00:19:25 localhost systemd[1]: Reached target System Initialization.
Dec 03 00:19:25 localhost systemd[1]: Reached target Basic System.
Dec 03 00:19:25 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 03 00:19:25 localhost kernel: libata version 3.00 loaded.
Dec 03 00:19:25 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 03 00:19:25 localhost kernel:  vda: vda1
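
The virtio-blk capacity line reports the same disk in both decimal and binary units: 167,772,160 sectors of 512 bytes is 85,899,345,920 bytes, i.e. 85.9 GB (decimal) and exactly 80.0 GiB (binary). The arithmetic:

    sectors, sector_size = 167_772_160, 512
    nbytes = sectors * sector_size           # 85_899_345_920
    print(round(nbytes / 1e9, 1), "GB")      # 85.9  (decimal gigabytes)
    print(nbytes / (1 << 30), "GiB")         # 80.0  (binary gibibytes)
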
Dec 03 00:19:25 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 03 00:19:25 localhost kernel: scsi host0: ata_piix
Dec 03 00:19:25 localhost kernel: scsi host1: ata_piix
Dec 03 00:19:25 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 03 00:19:25 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 03 00:19:25 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 03 00:19:25 localhost systemd[1]: Reached target Initrd Root Device.
Dec 03 00:19:25 localhost kernel: ata1: found unknown device (class 0)
Dec 03 00:19:25 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 03 00:19:25 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 03 00:19:25 localhost systemd-udevd[496]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 00:19:25 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 03 00:19:25 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 03 00:19:25 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 03 00:19:25 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 03 00:19:25 localhost systemd[1]: Finished dracut initqueue hook.
Dec 03 00:19:25 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 03 00:19:25 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 03 00:19:25 localhost systemd[1]: Reached target Remote File Systems.
Dec 03 00:19:25 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 03 00:19:25 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 03 00:19:25 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 03 00:19:25 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Dec 03 00:19:25 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 03 00:19:25 localhost systemd[1]: Mounting /sysroot...
Dec 03 00:19:26 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 03 00:19:26 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 03 00:19:26 localhost kernel: XFS (vda1): Ending clean mount
Dec 03 00:19:26 localhost systemd[1]: Mounted /sysroot.
Dec 03 00:19:26 localhost systemd[1]: Reached target Initrd Root File System.
Dec 03 00:19:26 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 03 00:19:26 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 03 00:19:26 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 03 00:19:26 localhost systemd[1]: Reached target Initrd File Systems.
Dec 03 00:19:26 localhost systemd[1]: Reached target Initrd Default Target.
Dec 03 00:19:26 localhost systemd[1]: Starting dracut mount hook...
Dec 03 00:19:26 localhost systemd[1]: Finished dracut mount hook.
Dec 03 00:19:26 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 03 00:19:27 localhost rpc.idmapd[450]: exiting on signal 15
Dec 03 00:19:27 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 03 00:19:27 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 03 00:19:27 localhost systemd[1]: Stopped target Network.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Timer Units.
Dec 03 00:19:27 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 03 00:19:27 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Basic System.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Path Units.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Remote File Systems.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Slice Units.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Socket Units.
Dec 03 00:19:27 localhost systemd[1]: Stopped target System Initialization.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Local File Systems.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Swaps.
Dec 03 00:19:27 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped dracut mount hook.
Dec 03 00:19:27 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 03 00:19:27 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 03 00:19:27 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 03 00:19:27 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 03 00:19:27 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 03 00:19:27 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 03 00:19:27 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 03 00:19:27 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 03 00:19:27 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 03 00:19:27 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 03 00:19:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 03 00:19:27 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 03 00:19:27 localhost systemd[1]: systemd-udevd.service: Consumed 1.046s CPU time.
Dec 03 00:19:27 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Closed udev Control Socket.
Dec 03 00:19:27 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Closed udev Kernel Socket.
Dec 03 00:19:27 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 03 00:19:27 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 03 00:19:27 localhost systemd[1]: Starting Cleanup udev Database...
Dec 03 00:19:27 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 03 00:19:27 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 03 00:19:27 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Stopped Create System Users.
Dec 03 00:19:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 03 00:19:27 localhost systemd[1]: Finished Cleanup udev Database.
Dec 03 00:19:27 localhost systemd[1]: Reached target Switch Root.
Dec 03 00:19:27 localhost systemd[1]: Starting Switch Root...
Dec 03 00:19:27 localhost systemd[1]: Switching root.
Dec 03 00:19:27 localhost systemd-journald[306]: Journal stopped
Dec 03 00:19:28 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Dec 03 00:19:28 localhost kernel: audit: type=1404 audit(1764721167.254:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 03 00:19:28 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:19:28 localhost kernel: SELinux:  policy capability open_perms=1
Dec 03 00:19:28 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:19:28 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:19:28 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:19:28 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:19:28 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:19:28 localhost kernel: audit: type=1403 audit(1764721167.408:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 03 00:19:28 localhost systemd[1]: Successfully loaded SELinux policy in 160.432ms.
Dec 03 00:19:28 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.726ms.
Dec 03 00:19:28 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 03 00:19:28 localhost systemd[1]: Detected virtualization kvm.
Dec 03 00:19:28 localhost systemd[1]: Detected architecture x86-64.
Dec 03 00:19:28 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:19:28 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 03 00:19:28 localhost systemd[1]: Stopped Switch Root.
Dec 03 00:19:28 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 03 00:19:28 localhost systemd[1]: Created slice Slice /system/getty.
Dec 03 00:19:28 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 03 00:19:28 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 03 00:19:28 localhost systemd[1]: Created slice User and Session Slice.
Dec 03 00:19:28 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 03 00:19:28 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 03 00:19:28 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 03 00:19:28 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 03 00:19:28 localhost systemd[1]: Stopped target Switch Root.
Dec 03 00:19:28 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 03 00:19:28 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 03 00:19:28 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 03 00:19:28 localhost systemd[1]: Reached target Path Units.
Dec 03 00:19:28 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 03 00:19:28 localhost systemd[1]: Reached target Slice Units.
Dec 03 00:19:28 localhost systemd[1]: Reached target Swaps.
Dec 03 00:19:28 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 03 00:19:28 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 03 00:19:28 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 03 00:19:28 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 03 00:19:28 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 03 00:19:28 localhost systemd[1]: Listening on udev Control Socket.
Dec 03 00:19:28 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 03 00:19:28 localhost systemd[1]: Mounting Huge Pages File System...
Dec 03 00:19:28 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 03 00:19:28 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 03 00:19:28 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 03 00:19:28 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 03 00:19:28 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 03 00:19:28 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 03 00:19:28 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 03 00:19:28 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 03 00:19:28 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 03 00:19:28 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 03 00:19:28 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 03 00:19:28 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 03 00:19:28 localhost systemd[1]: Stopped Journal Service.
Dec 03 00:19:28 localhost systemd[1]: Starting Journal Service...
Dec 03 00:19:28 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 03 00:19:28 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 03 00:19:28 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 03 00:19:28 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 03 00:19:28 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 03 00:19:28 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 03 00:19:28 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 03 00:19:28 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 03 00:19:28 localhost systemd[1]: Mounted Huge Pages File System.
Dec 03 00:19:28 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 03 00:19:28 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 03 00:19:28 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 03 00:19:28 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 03 00:19:28 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 03 00:19:28 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 03 00:19:28 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 03 00:19:28 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 03 00:19:28 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 03 00:19:28 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 03 00:19:28 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 03 00:19:28 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 03 00:19:28 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 03 00:19:28 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 03 00:19:28 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 03 00:19:28 localhost systemd[1]: Starting Create System Users...
Dec 03 00:19:28 localhost systemd-journald[680]: Journal started
Dec 03 00:19:28 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 03 00:19:27 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 03 00:19:27 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 03 00:19:28 localhost systemd[1]: Started Journal Service.
Dec 03 00:19:28 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 03 00:19:28 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 03 00:19:28 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 03 00:19:28 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 03 00:19:28 localhost systemd-journald[680]: Received client request to flush runtime journal.
Dec 03 00:19:28 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 03 00:19:28 localhost kernel: ACPI: bus type drm_connector registered
Dec 03 00:19:28 localhost kernel: fuse: init (API version 7.37)
Dec 03 00:19:28 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 03 00:19:28 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 03 00:19:28 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 03 00:19:28 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 03 00:19:28 localhost systemd[1]: Mounting FUSE Control File System...
Dec 03 00:19:28 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 03 00:19:28 localhost systemd[1]: Finished Create System Users.
Dec 03 00:19:28 localhost systemd[1]: Mounted FUSE Control File System.
Dec 03 00:19:28 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 03 00:19:28 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 03 00:19:28 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 03 00:19:28 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 03 00:19:28 localhost systemd[1]: Reached target Local File Systems.
Dec 03 00:19:28 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 03 00:19:28 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 03 00:19:28 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 03 00:19:28 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 03 00:19:28 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 03 00:19:28 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 03 00:19:28 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 03 00:19:28 localhost bootctl[700]: Couldn't find EFI system partition, skipping.
Dec 03 00:19:28 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 03 00:19:28 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 03 00:19:28 localhost systemd[1]: Starting Security Auditing Service...
Dec 03 00:19:28 localhost systemd[1]: Starting RPC Bind...
Dec 03 00:19:28 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 03 00:19:28 localhost auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 03 00:19:28 localhost auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 03 00:19:28 localhost systemd[1]: Started RPC Bind.
Dec 03 00:19:28 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 03 00:19:28 localhost augenrules[711]: /sbin/augenrules: No change
Dec 03 00:19:28 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 03 00:19:28 localhost augenrules[726]: No rules
Dec 03 00:19:28 localhost augenrules[726]: enabled 1
Dec 03 00:19:28 localhost augenrules[726]: failure 1
Dec 03 00:19:28 localhost augenrules[726]: pid 706
Dec 03 00:19:28 localhost augenrules[726]: rate_limit 0
Dec 03 00:19:28 localhost augenrules[726]: backlog_limit 8192
Dec 03 00:19:28 localhost augenrules[726]: lost 0
Dec 03 00:19:28 localhost augenrules[726]: backlog 3
Dec 03 00:19:28 localhost augenrules[726]: backlog_wait_time 60000
Dec 03 00:19:28 localhost augenrules[726]: backlog_wait_time_actual 0
Dec 03 00:19:28 localhost augenrules[726]: enabled 1
Dec 03 00:19:28 localhost augenrules[726]: failure 1
Dec 03 00:19:28 localhost augenrules[726]: pid 706
Dec 03 00:19:28 localhost augenrules[726]: rate_limit 0
Dec 03 00:19:28 localhost augenrules[726]: backlog_limit 8192
Dec 03 00:19:28 localhost augenrules[726]: lost 0
Dec 03 00:19:28 localhost augenrules[726]: backlog 2
Dec 03 00:19:28 localhost augenrules[726]: backlog_wait_time 60000
Dec 03 00:19:28 localhost augenrules[726]: backlog_wait_time_actual 0
Dec 03 00:19:28 localhost augenrules[726]: enabled 1
Dec 03 00:19:28 localhost augenrules[726]: failure 1
Dec 03 00:19:28 localhost augenrules[726]: pid 706
Dec 03 00:19:28 localhost augenrules[726]: rate_limit 0
Dec 03 00:19:28 localhost augenrules[726]: backlog_limit 8192
Dec 03 00:19:28 localhost augenrules[726]: lost 0
Dec 03 00:19:28 localhost augenrules[726]: backlog 1
Dec 03 00:19:28 localhost augenrules[726]: backlog_wait_time 60000
Dec 03 00:19:28 localhost augenrules[726]: backlog_wait_time_actual 0
Dec 03 00:19:28 localhost systemd[1]: Started Security Auditing Service.
Dec 03 00:19:28 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 03 00:19:28 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 03 00:19:29 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 03 00:19:29 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 03 00:19:29 localhost systemd[1]: Starting Update is Completed...
Dec 03 00:19:29 localhost systemd[1]: Finished Update is Completed.
Dec 03 00:19:29 localhost systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Dec 03 00:19:29 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 03 00:19:29 localhost systemd[1]: Reached target System Initialization.
Dec 03 00:19:29 localhost systemd[1]: Started dnf makecache --timer.
Dec 03 00:19:29 localhost systemd[1]: Started Daily rotation of log files.
Dec 03 00:19:29 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 03 00:19:29 localhost systemd[1]: Reached target Timer Units.
Dec 03 00:19:29 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 03 00:19:29 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 03 00:19:29 localhost systemd[1]: Reached target Socket Units.
Dec 03 00:19:29 localhost systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 00:19:29 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 03 00:19:29 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 03 00:19:29 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 03 00:19:29 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 03 00:19:29 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 03 00:19:29 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 03 00:19:29 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 03 00:19:29 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 03 00:19:29 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 03 00:19:29 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 03 00:19:29 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 03 00:19:29 localhost systemd[1]: Reached target Basic System.
Dec 03 00:19:29 localhost dbus-broker-lau[767]: Ready
Dec 03 00:19:29 localhost systemd[1]: Starting NTP client/server...
Dec 03 00:19:29 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 03 00:19:29 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 03 00:19:29 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 03 00:19:29 localhost systemd[1]: Started irqbalance daemon.
Dec 03 00:19:29 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 03 00:19:29 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 03 00:19:29 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 03 00:19:29 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 03 00:19:29 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 03 00:19:29 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 03 00:19:29 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 03 00:19:29 localhost systemd[1]: Starting User Login Management...
Dec 03 00:19:29 localhost chronyd[799]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 03 00:19:29 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 03 00:19:29 localhost chronyd[799]: Loaded 0 symmetric keys
Dec 03 00:19:29 localhost chronyd[799]: Using right/UTC timezone to obtain leap second data
Dec 03 00:19:29 localhost chronyd[799]: Loaded seccomp filter (level 2)
Dec 03 00:19:29 localhost systemd[1]: Started NTP client/server.
Dec 03 00:19:29 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 03 00:19:29 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 03 00:19:29 localhost systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 03 00:19:29 localhost systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 03 00:19:29 localhost kernel: kvm_amd: TSC scaling supported
Dec 03 00:19:29 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 03 00:19:29 localhost kernel: kvm_amd: Nested Paging enabled
Dec 03 00:19:29 localhost kernel: kvm_amd: LBR virtualization supported
Dec 03 00:19:29 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 03 00:19:29 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 03 00:19:29 localhost kernel: Console: switching to colour dummy device 80x25
Dec 03 00:19:29 localhost systemd-logind[800]: New seat seat0.
Dec 03 00:19:29 localhost systemd[1]: Started User Login Management.
Dec 03 00:19:29 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 03 00:19:29 localhost kernel: [drm] features: -context_init
Dec 03 00:19:29 localhost kernel: [drm] number of scanouts: 1
Dec 03 00:19:29 localhost kernel: [drm] number of cap sets: 0
Dec 03 00:19:29 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 03 00:19:29 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 03 00:19:29 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 03 00:19:29 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 03 00:19:29 localhost iptables.init[791]: iptables: Applying firewall rules: [  OK  ]
Dec 03 00:19:29 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 03 00:19:29 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 03 Dec 2025 00:19:29 +0000. Up 9.63 seconds.
Dec 03 00:19:30 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 03 00:19:30 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 03 00:19:30 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp69jigwq1.mount: Deactivated successfully.
Dec 03 00:19:30 localhost systemd[1]: Starting Hostname Service...
Dec 03 00:19:30 localhost systemd[1]: Started Hostname Service.
Dec 03 00:19:30 np0005543037.novalocal systemd-hostnamed[856]: Hostname set to <np0005543037.novalocal> (static)
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Reached target Preparation for Network.
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Starting Network Manager...
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.6563] NetworkManager (version 1.54.1-1.el9) is starting... (boot:ea2ffd2b-9398-4d40-9798-3e760752a119)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.6570] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.6665] manager[0x55d390a2c080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.6727] hostname: hostname: using hostnamed
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.6730] hostname: static hostname changed from (none) to "np0005543037.novalocal"
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.6736] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7009] manager[0x55d390a2c080]: rfkill: Wi-Fi hardware radio set enabled
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7012] manager[0x55d390a2c080]: rfkill: WWAN hardware radio set enabled
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7075] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7075] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7113] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7115] manager: Networking is enabled by state file
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7120] settings: Loaded settings plugin: keyfile (internal)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7134] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7171] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7193] dhcp: init: Using DHCP client 'internal'
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7199] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7226] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7243] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7258] device (lo): Activation: starting connection 'lo' (3c357ba2-4585-405b-8323-b1feb378cf6e)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7277] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7284] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7366] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Started Network Manager.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7396] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7401] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Reached target Network.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7429] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7434] device (eth0): carrier: link connected
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7441] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7453] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7490] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7496] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7498] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7502] manager: NetworkManager state is now CONNECTING
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7504] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7537] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7584] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7599] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7614] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7622] device (lo): Activation: successful, device activated.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7650] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7658] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7682] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7725] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Reached target NFS client services.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7744] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7748] manager: NetworkManager state is now CONNECTED_SITE
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Reached target Remote File Systems.
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7790] device (eth0): Activation: successful, device activated.
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7795] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 03 00:19:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721170.7797] manager: startup complete
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 03 00:19:30 np0005543037.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 03 Dec 2025 00:19:31 +0000. Up 10.79 seconds.
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.36         | 255.255.255.0 | global | fa:16:3e:ce:79:91 |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fece:7991/64 |       .       |  link  | fa:16:3e:ce:79:91 |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 03 00:19:31 np0005543037.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 03 00:19:31 np0005543037.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Dec 03 00:19:31 np0005543037.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 03 00:19:31 np0005543037.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Dec 03 00:19:31 np0005543037.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Dec 03 00:19:31 np0005543037.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Dec 03 00:19:31 np0005543037.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Generating public/private rsa key pair.
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: The key fingerprint is:
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: SHA256:dPDuaZkY0Hen/1J6vLkN7MwQMhDRNHsDW0aT172fDlI root@np0005543037.novalocal
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: The key's randomart image is:
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: +---[RSA 3072]----+
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |        +++.=. ..|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |       . +.B... o|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |      . + * +.. .|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |       o = o E . |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |        S + +   o|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |         + B = .o|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |        . * o *+ |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |         .   =o+=|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |              +*=|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: The key fingerprint is:
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: SHA256:dWoIifJ31OIrZUzznfO9wx63pVUXcHxZWhOkEbFoxQA root@np0005543037.novalocal
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: The key's randomart image is:
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: +---[ECDSA 256]---+
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |         E..o*==*|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |     . . .  o.*=o|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |  . . o = oo.o...|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |   o   * *.+ .  .|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |    . . S + +   o|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |     . + o   o .o|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |      . .     o.=|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |       .       =*|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |              o+.|
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: The key fingerprint is:
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: SHA256:MaCSeqWk4+hqxePgtr62EbNt8KngMYsqlCAR6580N/I root@np0005543037.novalocal
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: The key's randomart image is:
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: +--[ED25519 256]--+
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |..    .          |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |.. . . .         |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |..+ o   o        |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |++ +     o       |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |*==+ o  S        |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |o*X+B .          |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |=*+B.E           |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |*=B.             |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: |%X+              |
Dec 03 00:19:32 np0005543037.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Reached target Network is Online.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Starting System Logging Service...
Dec 03 00:19:32 np0005543037.novalocal sm-notify[1003]: Version 2.5.4 starting
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Starting Permit User Sessions...
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Finished Permit User Sessions.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Started Command Scheduler.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Started Getty on tty1.
Dec 03 00:19:32 np0005543037.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 03 00:19:32 np0005543037.novalocal sshd[1005]: Server listening on :: port 22.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 03 00:19:32 np0005543037.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Dec 03 00:19:32 np0005543037.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Reached target Login Prompts.
Dec 03 00:19:32 np0005543037.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 63% if used.)
Dec 03 00:19:32 np0005543037.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 03 00:19:32 np0005543037.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec 03 00:19:32 np0005543037.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Started System Logging Service.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Reached target Multi-User System.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 03 00:19:32 np0005543037.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 03 00:19:32 np0005543037.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 00:19:32 np0005543037.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Dec 03 00:19:32 np0005543037.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec 03 00:19:32 np0005543037.novalocal sshd-session[1070]: Connection reset by 38.102.83.114 port 41882 [preauth]
Dec 03 00:19:32 np0005543037.novalocal sshd-session[1085]: Unable to negotiate with 38.102.83.114 port 41888: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 03 00:19:32 np0005543037.novalocal sshd-session[1102]: Connection reset by 38.102.83.114 port 41896 [preauth]
Dec 03 00:19:32 np0005543037.novalocal sshd-session[1114]: Unable to negotiate with 38.102.83.114 port 41912: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 03 00:19:32 np0005543037.novalocal sshd-session[1125]: Unable to negotiate with 38.102.83.114 port 41922: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 03 00:19:32 np0005543037.novalocal cloud-init[1130]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 03 Dec 2025 00:19:32 +0000. Up 12.55 seconds.
Dec 03 00:19:32 np0005543037.novalocal sshd-session[1144]: Connection reset by 38.102.83.114 port 41944 [preauth]
Dec 03 00:19:33 np0005543037.novalocal sshd-session[1152]: Unable to negotiate with 38.102.83.114 port 41948: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 03 00:19:33 np0005543037.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 03 00:19:33 np0005543037.novalocal sshd-session[1157]: Unable to negotiate with 38.102.83.114 port 41954: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 03 00:19:33 np0005543037.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 03 00:19:33 np0005543037.novalocal sshd-session[1134]: Connection closed by 38.102.83.114 port 41932 [preauth]
Dec 03 00:19:33 np0005543037.novalocal dracut[1284]: dracut-057-102.git20250818.el9
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1287]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 03 Dec 2025 00:19:33 +0000. Up 12.98 seconds.
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1302]: #############################################################
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1303]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1305]: 256 SHA256:dWoIifJ31OIrZUzznfO9wx63pVUXcHxZWhOkEbFoxQA root@np0005543037.novalocal (ECDSA)
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1307]: 256 SHA256:MaCSeqWk4+hqxePgtr62EbNt8KngMYsqlCAR6580N/I root@np0005543037.novalocal (ED25519)
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1309]: 3072 SHA256:dPDuaZkY0Hen/1J6vLkN7MwQMhDRNHsDW0aT172fDlI root@np0005543037.novalocal (RSA)
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1310]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1311]: #############################################################
Dec 03 00:19:33 np0005543037.novalocal dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 03 00:19:33 np0005543037.novalocal cloud-init[1287]: Cloud-init v. 24.4-7.el9 finished at Wed, 03 Dec 2025 00:19:33 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.17 seconds
Dec 03 00:19:33 np0005543037.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 03 00:19:33 np0005543037.novalocal systemd[1]: Reached target Cloud-init target.
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: memstrack is not available
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 03 00:19:34 np0005543037.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: memstrack is not available
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: *** Including module: systemd ***
Dec 03 00:19:35 np0005543037.novalocal dracut[1286]: *** Including module: fips ***
Dec 03 00:19:35 np0005543037.novalocal chronyd[799]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec 03 00:19:35 np0005543037.novalocal chronyd[799]: System clock TAI offset set to 37 seconds
Dec 03 00:19:36 np0005543037.novalocal dracut[1286]: *** Including module: systemd-initrd ***
Dec 03 00:19:36 np0005543037.novalocal dracut[1286]: *** Including module: i18n ***
Dec 03 00:19:36 np0005543037.novalocal dracut[1286]: *** Including module: drm ***
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]: *** Including module: prefixdevname ***
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]: *** Including module: kernel-modules ***
Dec 03 00:19:37 np0005543037.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]: *** Including module: kernel-modules-extra ***
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]: *** Including module: qemu ***
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]: *** Including module: fstab-sys ***
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]: *** Including module: rootfs-block ***
Dec 03 00:19:37 np0005543037.novalocal chronyd[799]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec 03 00:19:37 np0005543037.novalocal dracut[1286]: *** Including module: terminfo ***
Dec 03 00:19:38 np0005543037.novalocal dracut[1286]: *** Including module: udev-rules ***
Dec 03 00:19:38 np0005543037.novalocal dracut[1286]: Skipping udev rule: 91-permissions.rules
Dec 03 00:19:38 np0005543037.novalocal dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 03 00:19:38 np0005543037.novalocal dracut[1286]: *** Including module: virtiofs ***
Dec 03 00:19:38 np0005543037.novalocal dracut[1286]: *** Including module: dracut-systemd ***
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]: *** Including module: usrmount ***
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]: *** Including module: base ***
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]: *** Including module: fs-lib ***
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]: *** Including module: kdumpbase ***
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:   microcode_ctl module: mangling fw_dir
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel" is ignored
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 03 00:19:39 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
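microcode_ctl walks its ucode_with_caveats directories and applies each one only when the running CPU matches; on this KVM guest none match, so every "intel-06-*" configuration is ignored and fw_dir ends up back at the default search path. The directory names appear to encode the Intel family-model-stepping signature (e.g. intel-06-4e-03 = family 6, model 0x4e, stepping 3); a sketch, under that assumption, of deriving the same signature from /proc/cpuinfo:

    # Sketch (assumption: caveat dirs are named family-model-stepping
    # in hex). Reads the first CPU's signature from /proc/cpuinfo.
    def cpu_signature() -> str:
        info = {}
        for line in open("/proc/cpuinfo"):
            if ":" in line:
                key, val = line.split(":", 1)
                info.setdefault(key.strip(), val.strip())
        return "{:02x}-{:02x}-{:02x}".format(
            int(info["cpu family"]), int(info["model"]),
            int(info["stepping"]))

    print(cpu_signature())  # e.g. "06-4e-03" on a Skylake mobile part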
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]: *** Including module: openssl ***
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]: *** Including module: shutdown ***
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: IRQ 25 affinity is now unmanaged
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: IRQ 31 affinity is now unmanaged
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: IRQ 28 affinity is now unmanaged
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: IRQ 32 affinity is now unmanaged
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: IRQ 30 affinity is now unmanaged
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 03 00:19:40 np0005543037.novalocal irqbalance[792]: IRQ 29 affinity is now unmanaged
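The irqbalance failures are expected on a KVM guest: the kernel rejects affinity changes for some virtio interrupts, irqbalance logs the EPERM once, then marks the IRQ unmanaged so it stops retrying. A sketch of the same probe, assuming root and the standard procfs path (affinity_changeable is a hypothetical helper):

    # Sketch: probe whether an IRQ's affinity can be changed by
    # writing a CPU mask to /proc/irq/<n>/smp_affinity (root only).
    import errno

    def affinity_changeable(irq: int, mask: str = "1") -> bool:
        try:
            with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                f.write(mask)          # hex mask; "1" = CPU0
            return True
        except OSError as e:
            if e.errno in (errno.EPERM, errno.EIO):
                return False           # kernel refuses: unmanaged
            raise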
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]: *** Including module: squash ***
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]: *** Including modules done ***
Dec 03 00:19:40 np0005543037.novalocal dracut[1286]: *** Installing kernel module dependencies ***
Dec 03 00:19:40 np0005543037.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 03 00:19:41 np0005543037.novalocal dracut[1286]: *** Installing kernel module dependencies done ***
Dec 03 00:19:41 np0005543037.novalocal dracut[1286]: *** Resolving executable dependencies ***
Dec 03 00:19:43 np0005543037.novalocal dracut[1286]: *** Resolving executable dependencies done ***
Dec 03 00:19:43 np0005543037.novalocal dracut[1286]: *** Generating early-microcode cpio image ***
Dec 03 00:19:43 np0005543037.novalocal dracut[1286]: *** Store current command line parameters ***
Dec 03 00:19:43 np0005543037.novalocal dracut[1286]: Stored kernel commandline:
Dec 03 00:19:43 np0005543037.novalocal dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Dec 03 00:19:43 np0005543037.novalocal dracut[1286]: *** Install squash loader ***
Dec 03 00:19:44 np0005543037.novalocal dracut[1286]: *** Squashing the files inside the initramfs ***
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: *** Squashing the files inside the initramfs done ***
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: *** Hardlinking files ***
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: Mode:           real
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: Files:          50
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: Linked:         0 files
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: Compared:       0 xattrs
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: Compared:       0 files
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: Saved:          0 B
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: Duration:       0.001072 seconds
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: *** Hardlinking files done ***
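The hardlink pass deduplicates identical files inside the staged initramfs tree before it is packed; here all 50 files were unique, so nothing was linked and 0 B were saved. A simplified sketch of content-based hardlinking (real dracut also compares metadata and xattrs, per the "Compared: 0 xattrs" counter above):

    # Simplified sketch: replace duplicate regular files under root
    # with hard links to the first copy seen (content-hash match).
    import hashlib, os

    def hardlink_tree(root: str) -> int:
        seen, linked = {}, 0
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if os.path.islink(path) or not os.path.isfile(path):
                    continue
                digest = hashlib.sha256(
                    open(path, "rb").read()).hexdigest()
                if digest in seen:
                    os.unlink(path)
                    os.link(seen[digest], path)
                    linked += 1
                else:
                    seen[digest] = path
        return linked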
Dec 03 00:19:45 np0005543037.novalocal dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 03 00:19:46 np0005543037.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Dec 03 00:19:46 np0005543037.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Dec 03 00:19:46 np0005543037.novalocal systemd[1]: Finished Crash recovery kernel arming.
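"kexec: loaded kdump kernel" means the crash kernel and the initramfs built above were staged into the crashkernel= reservation from the boot command line. Whether a crash kernel is currently loaded is exposed in sysfs; a one-liner check (the function name is illustrative):

    # Check that a kdump/crash kernel is staged: the kernel reports
    # this in /sys/kernel/kexec_crash_loaded (1 = loaded).
    def kdump_loaded() -> bool:
        with open("/sys/kernel/kexec_crash_loaded") as f:
            return f.read().strip() == "1"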
Dec 03 00:19:46 np0005543037.novalocal systemd[1]: Startup finished in 3.745s (kernel) + 3.178s (initrd) + 19.458s (userspace) = 26.383s.
Dec 03 00:19:56 np0005543037.novalocal sshd-session[4293]: Accepted publickey for zuul from 38.102.83.114 port 60178 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 03 00:19:56 np0005543037.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 03 00:19:56 np0005543037.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 03 00:19:56 np0005543037.novalocal systemd-logind[800]: New session 1 of user zuul.
Dec 03 00:19:56 np0005543037.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 03 00:19:56 np0005543037.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Queued start job for default target Main User Target.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Created slice User Application Slice.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Started Daily Cleanup of User's Temporary Directories.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Reached target Paths.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Reached target Timers.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Starting D-Bus User Message Bus Socket...
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Starting Create User's Volatile Files and Directories...
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Listening on D-Bus User Message Bus Socket.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Reached target Sockets.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Finished Create User's Volatile Files and Directories.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Reached target Basic System.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Reached target Main User Target.
Dec 03 00:19:56 np0005543037.novalocal systemd[4297]: Startup finished in 157ms.
Dec 03 00:19:56 np0005543037.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 03 00:19:56 np0005543037.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 03 00:19:56 np0005543037.novalocal sshd-session[4293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:19:56 np0005543037.novalocal python3[4379]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:19:59 np0005543037.novalocal python3[4407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:20:00 np0005543037.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 03 00:20:05 np0005543037.novalocal python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:20:06 np0005543037.novalocal python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 03 00:20:08 np0005543037.novalocal python3[4534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG1k3FGoeVRMi/pfqrqG+qYYbC3MHtyLyiSp+H1do1NUQw/Cg+wJJM9AY6SS30BRejfYeXmkEkXZhV8wBMXnVt9ot3DIJsYyFguzPUpBwm+dalGcqkaCwbE8oDxsrdeCCsXql6RrzRVh7SgNQfv6SiXU0RiXzN+6k535cBJGIOQoZy5yrkFFSqOoYGS8YY+3lq8NaHwsOn29bQCSd9+kxEOPMuEcoDJqy1nkNQ7ZgiCFfDkBa6Q7ODBFFl+BxSnhWQ6lWCnxYeIW1Br443YlF9LZB5t0bvGwfcxVO6u0AlkKaRayBqCVaC+OI9Ctyyrxo1/qd9tPuSIqrj5/mDZs8bOpuJTs9Ns6Sj101LzBe5Nmix/JPI09Q/5jwbKNupQI20OGkcCxu/TW4GjFzKjTzgbAuxiqfYVI/CIqZLzy4CJnhRR8O2SvkIpGgQ+O38P7YTwfvxb8siGkcOiJFj5Tf5seM1Fb5b18+PRSByPJa1E2UImhmHtdVaFSXYRivijF0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:09 np0005543037.novalocal python3[4558]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:09 np0005543037.novalocal python3[4657]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:20:10 np0005543037.novalocal python3[4728]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764721209.4807749-207-279452062333833/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0b7e10745d7f46dea4defb7b84db63f6_id_rsa follow=False checksum=2cedf01d49111e70ff95fb7d4c891ab5a13b2d0e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:10 np0005543037.novalocal python3[4851]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:20:11 np0005543037.novalocal python3[4922]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764721210.4663303-240-164617209161887/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0b7e10745d7f46dea4defb7b84db63f6_id_rsa.pub follow=False checksum=d7776b431cdb32c605b679e672f334972862f140 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:12 np0005543037.novalocal python3[4970]: ansible-ping Invoked with data=pong
Dec 03 00:20:13 np0005543037.novalocal python3[4994]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:20:15 np0005543037.novalocal python3[5052]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 03 00:20:16 np0005543037.novalocal python3[5084]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:17 np0005543037.novalocal python3[5108]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:17 np0005543037.novalocal python3[5132]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:17 np0005543037.novalocal python3[5156]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:17 np0005543037.novalocal python3[5180]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:18 np0005543037.novalocal python3[5204]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
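A readability note for the ansible-file and copy lines: the modules journal file modes as decimal integers, so mode=493 is octal 0755, 448 is 0700, 384 is 0600, 420 is 0644, 511 is 0777, and 288 is 0440. Quick conversion:

    # Ansible logs octal file modes in decimal; oct() recovers them.
    for mode in (493, 448, 384, 420, 511, 288):
        print(mode, oct(mode))  # 493 0o755, 448 0o700, 384 0o600, ...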
Dec 03 00:20:19 np0005543037.novalocal sudo[5228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtoaxjqozpqrjglcqszwbvvmfkpyewfe ; /usr/bin/python3'
Dec 03 00:20:19 np0005543037.novalocal sudo[5228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:19 np0005543037.novalocal python3[5230]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:19 np0005543037.novalocal sudo[5228]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:20 np0005543037.novalocal sudo[5306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsxlcugknybsdkhnhbchoblgxbjshkoa ; /usr/bin/python3'
Dec 03 00:20:20 np0005543037.novalocal sudo[5306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:20 np0005543037.novalocal python3[5308]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:20:20 np0005543037.novalocal sudo[5306]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:20 np0005543037.novalocal sudo[5379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxntgvsqlehwkeenrtcpotefrtjisjj ; /usr/bin/python3'
Dec 03 00:20:20 np0005543037.novalocal sudo[5379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:20 np0005543037.novalocal python3[5381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721219.9602287-21-88994690975283/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:20 np0005543037.novalocal sudo[5379]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:21 np0005543037.novalocal python3[5429]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:22 np0005543037.novalocal python3[5453]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:22 np0005543037.novalocal python3[5477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:22 np0005543037.novalocal python3[5501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:22 np0005543037.novalocal python3[5525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:23 np0005543037.novalocal python3[5549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:23 np0005543037.novalocal python3[5573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:23 np0005543037.novalocal python3[5597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:24 np0005543037.novalocal python3[5621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:24 np0005543037.novalocal python3[5645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:24 np0005543037.novalocal python3[5669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:25 np0005543037.novalocal python3[5693]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:25 np0005543037.novalocal python3[5717]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:25 np0005543037.novalocal python3[5741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:25 np0005543037.novalocal python3[5765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:26 np0005543037.novalocal python3[5789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:26 np0005543037.novalocal python3[5813]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:26 np0005543037.novalocal python3[5837]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:27 np0005543037.novalocal python3[5861]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:27 np0005543037.novalocal python3[5885]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:27 np0005543037.novalocal python3[5909]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:28 np0005543037.novalocal python3[5933]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:28 np0005543037.novalocal python3[5957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:28 np0005543037.novalocal python3[5981]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:29 np0005543037.novalocal python3[6005]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:20:29 np0005543037.novalocal python3[6029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
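The block above installs each maintainer's public key with ansible-authorized_key state=present, which is idempotent: a key already present in ~/.ssh/authorized_keys is left alone, anything new is appended. A minimal sketch of that semantics (ensure_key is hypothetical; the real module also handles exclusive=, key_options=, and path=):

    # Minimal sketch of authorized_key state=present: append the key
    # only if its base64 blob is not already in authorized_keys.
    import os

    def ensure_key(home: str, key: str) -> bool:
        path = os.path.join(home, ".ssh", "authorized_keys")
        os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
        blob = key.split()[1]  # "<type> <blob> [comment]"
        lines = open(path).read() if os.path.exists(path) else ""
        if any(blob in line for line in lines.splitlines()):
            return False                 # unchanged
        with open(path, "a") as f:
            f.write(key.rstrip("\n") + "\n")
        os.chmod(path, 0o600)
        return True                      # changed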
Dec 03 00:20:30 np0005543037.novalocal sudo[6053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysplnqbeygrtuvtjnhehyqugsickjvjt ; /usr/bin/python3'
Dec 03 00:20:30 np0005543037.novalocal sudo[6053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:31 np0005543037.novalocal python3[6055]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 03 00:20:31 np0005543037.novalocal systemd[1]: Starting Time & Date Service...
Dec 03 00:20:31 np0005543037.novalocal systemd[1]: Started Time & Date Service.
Dec 03 00:20:31 np0005543037.novalocal systemd-timedated[6057]: Changed time zone to 'UTC' (UTC).
Dec 03 00:20:31 np0005543037.novalocal sudo[6053]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:31 np0005543037.novalocal sudo[6084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hphghlleezwpysgmwdyclbithipdppej ; /usr/bin/python3'
Dec 03 00:20:31 np0005543037.novalocal sudo[6084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:31 np0005543037.novalocal python3[6086]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:31 np0005543037.novalocal sudo[6084]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:32 np0005543037.novalocal python3[6162]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:20:32 np0005543037.novalocal python3[6233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764721232.08354-153-23865397437081/source _original_basename=tmpbk5lman3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:33 np0005543037.novalocal python3[6333]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:20:33 np0005543037.novalocal python3[6404]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764721233.0582342-183-17510444778160/source _original_basename=tmprgnollya follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:34 np0005543037.novalocal sudo[6504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmtnulnnybinllolrmqpppjiynydcavo ; /usr/bin/python3'
Dec 03 00:20:34 np0005543037.novalocal sudo[6504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:34 np0005543037.novalocal python3[6506]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:20:34 np0005543037.novalocal sudo[6504]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:34 np0005543037.novalocal sudo[6577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znlncyrwojeiiaqnizxxejjdbddwvrrf ; /usr/bin/python3'
Dec 03 00:20:34 np0005543037.novalocal sudo[6577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:34 np0005543037.novalocal python3[6579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764721234.1862042-231-177289158155994/source _original_basename=tmph6qnordz follow=False checksum=46b5d7337244bac6339155768dd5768694f5a0e7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:34 np0005543037.novalocal sudo[6577]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:35 np0005543037.novalocal python3[6627]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:20:35 np0005543037.novalocal python3[6653]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:20:36 np0005543037.novalocal sudo[6731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-livontlddtfukdcaqbgnnmqdwxqtkbqd ; /usr/bin/python3'
Dec 03 00:20:36 np0005543037.novalocal sudo[6731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:36 np0005543037.novalocal python3[6733]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:20:36 np0005543037.novalocal sudo[6731]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:36 np0005543037.novalocal sudo[6804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylrlunlfylgnrtvjivbmruqtpalqdlkd ; /usr/bin/python3'
Dec 03 00:20:36 np0005543037.novalocal sudo[6804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:36 np0005543037.novalocal python3[6806]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721235.9555855-273-68918345348552/source _original_basename=tmpdbgcena9 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:36 np0005543037.novalocal sudo[6804]: pam_unix(sudo:session): session closed for user root
Dec 03 00:20:37 np0005543037.novalocal sudo[6855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdxtyccfdynetfjslmzrjetkigujbusg ; /usr/bin/python3'
Dec 03 00:20:37 np0005543037.novalocal sudo[6855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:37 np0005543037.novalocal python3[6857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-fdf3-80a0-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:20:37 np0005543037.novalocal sudo[6855]: pam_unix(sudo:session): session closed for user root
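Note the guard pattern here: the sudoers drop-in is written with mode=288 (0440) and then /usr/sbin/visudo -c parses the whole sudoers chain, so a syntax error is caught before it can lock sudo out. The same check can target just the new file:

    # Validate a sudoers file with visudo's check mode; -f points the
    # parser at a specific file instead of /etc/sudoers.
    import subprocess

    def sudoers_valid(path: str) -> bool:
        res = subprocess.run(
            ["/usr/sbin/visudo", "-c", "-f", path],
            capture_output=True, text=True)
        return res.returncode == 0

    print(sudoers_valid("/etc/sudoers.d/zuul-sudo-grep"))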
Dec 03 00:20:38 np0005543037.novalocal python3[6885]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163efc-24cc-fdf3-80a0-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 03 00:20:39 np0005543037.novalocal python3[6913]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:56 np0005543037.novalocal sudo[6937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiivimkukrcohneehoupqxqferuoioxe ; /usr/bin/python3'
Dec 03 00:20:56 np0005543037.novalocal sudo[6937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:20:56 np0005543037.novalocal python3[6939]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:20:56 np0005543037.novalocal sudo[6937]: pam_unix(sudo:session): session closed for user root
Dec 03 00:21:01 np0005543037.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 03 00:21:30 np0005543037.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 03 00:21:30 np0005543037.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.6964] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 03 00:21:30 np0005543037.novalocal systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7137] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7158] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7161] device (eth1): carrier: link connected
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7163] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7167] policy: auto-activating connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab)
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7170] device (eth1): Activation: starting connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab)
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7171] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7173] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7176] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:21:30 np0005543037.novalocal NetworkManager[860]: <info>  [1764721290.7179] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:21:31 np0005543037.novalocal python3[6969]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-e99c-3723-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
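The preceding kernel and NetworkManager lines record a virtio NIC hot-plugged at PCI 0000:00:07.0: BARs are assigned, eth1 appears, a default 'Wired connection 1' profile is auto-activated, and the very next task inventories interfaces with ip -j link. That JSON is easy to consume; a sketch of extracting interface names from it:

    # Sketch: list interface names from `ip -j link`, the same JSON
    # the ansible task above gathers.
    import json, subprocess

    def ifnames() -> set[str]:
        out = subprocess.run(["ip", "-j", "link"], check=True,
                             capture_output=True, text=True)
        return {link["ifname"] for link in json.loads(out.stdout)}

    print(ifnames())  # e.g. {'lo', 'eth0', 'eth1'}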
Dec 03 00:21:41 np0005543037.novalocal sudo[7047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztaxhysmfjrottqrdcduzwdlcfhyghng ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 03 00:21:41 np0005543037.novalocal sudo[7047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:21:42 np0005543037.novalocal python3[7049]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:21:42 np0005543037.novalocal sudo[7047]: pam_unix(sudo:session): session closed for user root
Dec 03 00:21:42 np0005543037.novalocal sudo[7120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oovkfstkslmigcfsfexywkifurndvqxa ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 03 00:21:42 np0005543037.novalocal sudo[7120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:21:42 np0005543037.novalocal python3[7122]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764721301.6832855-102-198149725575844/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=38f7cf41e66e471640785a9c339c05f31751f1d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:21:42 np0005543037.novalocal sudo[7120]: pam_unix(sudo:session): session closed for user root
Dec 03 00:21:43 np0005543037.novalocal sudo[7170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qterbvalaxtqwistpjwqekfgybnsdcaq ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 03 00:21:43 np0005543037.novalocal sudo[7170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:21:43 np0005543037.novalocal python3[7172]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Stopping Network Manager...
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4740] caught SIGTERM, shutting down normally.
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4749] dhcp4 (eth0): canceled DHCP transaction
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4749] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4749] dhcp4 (eth0): state changed no lease
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4751] manager: NetworkManager state is now CONNECTING
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4806] dhcp4 (eth1): canceled DHCP transaction
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4807] dhcp4 (eth1): state changed no lease
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[860]: <info>  [1764721303.4876] exiting (success)
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Stopped Network Manager.
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: NetworkManager.service: Consumed 1.123s CPU time, 9.9M memory peak.
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Starting Network Manager...
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.5585] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:ea2ffd2b-9398-4d40-9798-3e760752a119)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.5588] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.5666] manager[0x558380dc1070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Starting Hostname Service...
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Started Hostname Service.
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6844] hostname: hostname: using hostnamed
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6850] hostname: static hostname changed from (none) to "np0005543037.novalocal"
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6860] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6872] manager[0x558380dc1070]: rfkill: Wi-Fi hardware radio set enabled
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6872] manager[0x558380dc1070]: rfkill: WWAN hardware radio set enabled
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6924] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6924] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6926] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6926] manager: Networking is enabled by state file
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6930] settings: Loaded settings plugin: keyfile (internal)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6938] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6979] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
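NetworkManager names its own remedy for the deprecated ifcfg-rh plugin in the warning above. The migration is a single privileged command, wrapped here for consistency with the other sketches; run it without arguments to convert every ifcfg profile, or pass connection names/UUIDs to limit scope:

    # Perform the migration the warning suggests: rewrite ifcfg-rh
    # profiles into keyfile format (requires root).
    import subprocess

    subprocess.run(["nmcli", "connection", "migrate"], check=True)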
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.6996] dhcp: init: Using DHCP client 'internal'
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7000] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7009] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7018] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7031] device (lo): Activation: starting connection 'lo' (3c357ba2-4585-405b-8323-b1feb378cf6e)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7044] device (eth0): carrier: link connected
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7052] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7061] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7062] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7073] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7084] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7096] device (eth1): carrier: link connected
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7103] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7112] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab) (indicated)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7112] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7123] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7133] device (eth1): Activation: starting connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7144] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Started Network Manager.
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7151] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7155] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7159] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7163] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7168] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7172] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7176] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7181] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7192] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7197] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7212] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7216] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7244] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7253] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7266] device (lo): Activation: successful, device activated.
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7281] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7295] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 03 00:21:43 np0005543037.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7408] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7440] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7443] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7450] manager: NetworkManager state is now CONNECTED_SITE
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7457] device (eth0): Activation: successful, device activated.
Dec 03 00:21:43 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721303.7467] manager: NetworkManager state is now CONNECTED_GLOBAL
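At this point eth0 has been assumed as 'System eth0', leased 38.102.83.36 over DHCP, and promoted to the IPv4 default route and DNS source. A few illustrative checks (not part of this log) that would confirm the lease and resulting routing state:

    # Illustrative only: these commands were not run in this log.
    nmcli -f DHCP4 device show eth0                    # DHCP options received on eth0
    nmcli -g IP4.ADDRESS,IP4.GATEWAY device show eth0  # resulting IPv4 state
    ip route show default                              # default route via eth0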
Dec 03 00:21:43 np0005543037.novalocal sudo[7170]: pam_unix(sudo:session): session closed for user root
Dec 03 00:21:44 np0005543037.novalocal python3[7256]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-e99c-3723-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:21:53 np0005543037.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 03 00:22:13 np0005543037.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3284] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 03 00:22:29 np0005543037.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 03 00:22:29 np0005543037.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3828] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3832] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3845] device (eth1): Activation: successful, device activated.
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3851] manager: startup complete
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3857] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <warn>  [1764721349.3866] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3883] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3976] dhcp4 (eth1): canceled DHCP transaction
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3977] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3977] dhcp4 (eth1): state changed no lease
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3993] policy: auto-activating connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3997] device (eth1): Activation: starting connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.3998] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.4001] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.4009] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.4020] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.4066] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.4068] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:22:29 np0005543037.novalocal NetworkManager[7177]: <info>  [1764721349.4077] device (eth1): Activation: successful, device activated.
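Summary of the eth1 episode above: the assumed profile 'Wired connection 1' sat in ip-config until the 45-second DHCP timeout expired (00:21:43 to 00:22:29), failed with 'ip-config-unavailable', and NetworkManager then auto-activated 'ci-private-network', which reached activated within milliseconds, consistent with static addressing. A hypothetical reconstruction of such a fallback profile; the real settings of 'ci-private-network' are not recorded here, and the address and priority below are placeholders:

    # Hypothetical: the actual 'ci-private-network' settings are not in this log.
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        autoconnect yes connection.autoconnect-priority -1 \
        ipv4.method manual ipv4.addresses 192.168.122.100/24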
Dec 03 00:22:36 np0005543037.novalocal systemd[4297]: Starting Mark boot as successful...
Dec 03 00:22:36 np0005543037.novalocal systemd[4297]: Finished Mark boot as successful.
Dec 03 00:22:39 np0005543037.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 03 00:22:43 np0005543037.novalocal sudo[7360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jglanvchcquzgmihzqrazpuhuhfjtonw ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 03 00:22:43 np0005543037.novalocal sudo[7360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:22:43 np0005543037.novalocal python3[7362]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:22:43 np0005543037.novalocal sudo[7360]: pam_unix(sudo:session): session closed for user root
Dec 03 00:22:43 np0005543037.novalocal sudo[7433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qapfmesuvwcfphihzsbcpyccfbiprwxe ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 03 00:22:43 np0005543037.novalocal sudo[7433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:22:43 np0005543037.novalocal python3[7435]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721363.111369-267-85282930200528/source _original_basename=tmp532lk67c follow=False checksum=0a83b367ded148c485f0eee55c6073e307d79589 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:22:43 np0005543037.novalocal sudo[7433]: pam_unix(sudo:session): session closed for user root
Dec 03 00:23:44 np0005543037.novalocal sshd-session[4306]: Received disconnect from 38.102.83.114 port 60178:11: disconnected by user
Dec 03 00:23:44 np0005543037.novalocal sshd-session[4306]: Disconnected from user zuul 38.102.83.114 port 60178
Dec 03 00:23:44 np0005543037.novalocal sshd-session[4293]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:23:44 np0005543037.novalocal systemd-logind[800]: Session 1 logged out. Waiting for processes to exit.
Dec 03 00:25:36 np0005543037.novalocal systemd[4297]: Created slice User Background Tasks Slice.
Dec 03 00:25:36 np0005543037.novalocal systemd[4297]: Starting Cleanup of User's Temporary Files and Directories...
Dec 03 00:25:36 np0005543037.novalocal systemd[4297]: Finished Cleanup of User's Temporary Files and Directories.
Dec 03 00:25:43 np0005543037.novalocal sshd-session[7464]: Received disconnect from 217.170.199.90 port 60210:11:  [preauth]
Dec 03 00:25:43 np0005543037.novalocal sshd-session[7464]: Disconnected from authenticating user root 217.170.199.90 port 60210 [preauth]
Dec 03 00:26:53 np0005543037.novalocal sshd-session[7467]: Invalid user admin from 80.94.95.115 port 40910
Dec 03 00:26:53 np0005543037.novalocal sshd-session[7467]: Connection closed by invalid user admin 80.94.95.115 port 40910 [preauth]
Dec 03 00:30:21 np0005543037.novalocal sshd-session[7470]: Accepted publickey for zuul from 38.102.83.114 port 37502 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 00:30:21 np0005543037.novalocal systemd-logind[800]: New session 3 of user zuul.
Dec 03 00:30:21 np0005543037.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 03 00:30:21 np0005543037.novalocal sshd-session[7470]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:30:21 np0005543037.novalocal sudo[7497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yphsxxahzuogkpokjydbrlkkrzzvgabd ; /usr/bin/python3'
Dec 03 00:30:21 np0005543037.novalocal sudo[7497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:21 np0005543037.novalocal python3[7499]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-b657-a2e0-000000001cd8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:30:21 np0005543037.novalocal sudo[7497]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:21 np0005543037.novalocal sudo[7526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzcuwtpohpvdvxamsvlricbynixjcdoo ; /usr/bin/python3'
Dec 03 00:30:21 np0005543037.novalocal sudo[7526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:21 np0005543037.novalocal python3[7528]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:30:21 np0005543037.novalocal sudo[7526]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:21 np0005543037.novalocal sudo[7552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkrouwzflrtfilsuqakvchmczrckeee ; /usr/bin/python3'
Dec 03 00:30:21 np0005543037.novalocal sudo[7552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:21 np0005543037.novalocal python3[7554]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:30:21 np0005543037.novalocal sudo[7552]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:22 np0005543037.novalocal sudo[7578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpirkkyhcskhvljjzcjujcsxxyborma ; /usr/bin/python3'
Dec 03 00:30:22 np0005543037.novalocal sudo[7578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:22 np0005543037.novalocal python3[7580]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:30:22 np0005543037.novalocal sudo[7578]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:22 np0005543037.novalocal sudo[7605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltzfqmmqxdchaerxvpgdgjegbeywfmtw ; /usr/bin/python3'
Dec 03 00:30:22 np0005543037.novalocal sudo[7605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:22 np0005543037.novalocal python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:30:22 np0005543037.novalocal sudo[7605]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:23 np0005543037.novalocal sudo[7631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjtoccspooojhixmqjjuhrotreoynbii ; /usr/bin/python3'
Dec 03 00:30:23 np0005543037.novalocal sudo[7631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:23 np0005543037.novalocal python3[7633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:30:23 np0005543037.novalocal sudo[7631]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:23 np0005543037.novalocal sudo[7709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbypfuaswxwdlspfvqvemeznoepezexr ; /usr/bin/python3'
Dec 03 00:30:23 np0005543037.novalocal sudo[7709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:23 np0005543037.novalocal python3[7711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:30:23 np0005543037.novalocal sudo[7709]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:23 np0005543037.novalocal sudo[7782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilzcqunugiwtitjqzkxhrznsnxtasppx ; /usr/bin/python3'
Dec 03 00:30:23 np0005543037.novalocal sudo[7782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:24 np0005543037.novalocal python3[7784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721823.413551-478-262932798804482/source _original_basename=tmpn6z68h0n follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:30:24 np0005543037.novalocal sudo[7782]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:24 np0005543037.novalocal sudo[7832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthqqzezeogwszrmdroplxsfnkeenfgl ; /usr/bin/python3'
Dec 03 00:30:24 np0005543037.novalocal sudo[7832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:25 np0005543037.novalocal python3[7834]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 00:30:25 np0005543037.novalocal systemd[1]: Reloading.
Dec 03 00:30:25 np0005543037.novalocal systemd-rc-local-generator[7856]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:30:25 np0005543037.novalocal sudo[7832]: pam_unix(sudo:session): session closed for user root
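The task above writes /etc/systemd/system.conf.d/override.conf (content masked as NOT_LOGGING_PARAMETER) and then reloads systemd; since the very next task waits for io.max to appear under system.slice, the override plausibly enables the cgroup v2 io controller. A minimal sketch under that assumption; the real file content is not recoverable from this log:

    # Assumption: the masked override enables IO accounting so that the
    # cgroup v2 io controller (and its io.max file) appears on each slice.
    mkdir -p /etc/systemd/system.conf.d
    cat > /etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultIOAccounting=yes
    EOF
    systemctl daemon-reload    # matches the "Reloading." entry above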
Dec 03 00:30:26 np0005543037.novalocal sudo[7887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydfsjxakthvyjpxnvednfhjtbvsvaqja ; /usr/bin/python3'
Dec 03 00:30:26 np0005543037.novalocal sudo[7887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:26 np0005543037.novalocal python3[7889]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 03 00:30:26 np0005543037.novalocal sudo[7887]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:26 np0005543037.novalocal sudo[7913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwsvyxofbjvlhsflzeaetqdlmfpougcs ; /usr/bin/python3'
Dec 03 00:30:26 np0005543037.novalocal sudo[7913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:27 np0005543037.novalocal python3[7915]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:30:27 np0005543037.novalocal sudo[7913]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:27 np0005543037.novalocal sudo[7941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqanhawdvvmwodfgjrxrkeahzapujldk ; /usr/bin/python3'
Dec 03 00:30:27 np0005543037.novalocal sudo[7941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:27 np0005543037.novalocal python3[7943]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:30:27 np0005543037.novalocal sudo[7941]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:27 np0005543037.novalocal sudo[7969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfavsqxnfgcgovljdmzzyvovlnsfkbs ; /usr/bin/python3'
Dec 03 00:30:27 np0005543037.novalocal sudo[7969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:27 np0005543037.novalocal python3[7971]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:30:27 np0005543037.novalocal sudo[7969]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:27 np0005543037.novalocal sudo[7997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vksqsvvbutnskhuxuegxzefhjmfmsgzk ; /usr/bin/python3'
Dec 03 00:30:27 np0005543037.novalocal sudo[7997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:27 np0005543037.novalocal python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:30:27 np0005543037.novalocal sudo[7997]: pam_unix(sudo:session): session closed for user root
Dec 03 00:30:28 np0005543037.novalocal python3[8026]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-b657-a2e0-000000001cdf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:30:29 np0005543037.novalocal python3[8056]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
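Taken together, the four echo tasks above throttle every top-level cgroup on /dev/vda (MAJ:MIN 252:0, obtained by the earlier lsblk) to 18000 read/write IOPS and 262144000 B/s (250 MiB/s) each way, then read the files back to verify; the final stat checks whether a kubepods.slice exists to throttle as well. A condensed shell equivalent of the logged loop:

    # Condensed form of the logged tasks: throttle each top-level slice on
    # /dev/vda to 18000 r/w IOPS and 250 MiB/s r/w bandwidth.
    devnum=$(lsblk -nd -o MAJ:MIN /dev/vda | tr -d ' ')   # e.g. 252:0
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "$devnum riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$slice/io.max"
        cat "/sys/fs/cgroup/$slice/io.max"                # verify, as the play does
    done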
Dec 03 00:30:31 np0005543037.novalocal sshd-session[7473]: Connection closed by 38.102.83.114 port 37502
Dec 03 00:30:31 np0005543037.novalocal sshd-session[7470]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:30:31 np0005543037.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 03 00:30:31 np0005543037.novalocal systemd[1]: session-3.scope: Consumed 4.497s CPU time.
Dec 03 00:30:31 np0005543037.novalocal systemd-logind[800]: Session 3 logged out. Waiting for processes to exit.
Dec 03 00:30:31 np0005543037.novalocal systemd-logind[800]: Removed session 3.
Dec 03 00:30:32 np0005543037.novalocal sshd-session[8063]: Accepted publickey for zuul from 38.102.83.114 port 59974 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 00:30:32 np0005543037.novalocal systemd-logind[800]: New session 4 of user zuul.
Dec 03 00:30:32 np0005543037.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 03 00:30:32 np0005543037.novalocal sshd-session[8063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:30:32 np0005543037.novalocal sudo[8090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqgahkbgagxuygkeqlotktnwdkptjuu ; /usr/bin/python3'
Dec 03 00:30:32 np0005543037.novalocal sudo[8090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:30:33 np0005543037.novalocal python3[8092]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
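The dnf task above is the module form of a plain package install; pulling in podman and buildah also brings container-selinux, which is the likely trigger for the repeated SELinux policy reloads ("Converting ... SID table entries") that follow:

    dnf -y install podman buildah    # CLI equivalent of the logged dnf module call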
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:30:45 np0005543037.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:30:54 np0005543037.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:31:02 np0005543037.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 03 00:31:03 np0005543037.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:31:03 np0005543037.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 03 00:31:03 np0005543037.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:31:03 np0005543037.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:31:03 np0005543037.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:31:03 np0005543037.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:31:03 np0005543037.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:31:04 np0005543037.novalocal setsebool[8154]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 03 00:31:04 np0005543037.novalocal setsebool[8154]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
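The two setsebool entries above correspond to persistent boolean changes; with -P each write rebuilds and reloads the policy, which lines up with the surrounding kernel SELinux messages:

    # Persistent form of the two logged boolean changes.
    setsebool -P virt_use_nfs=on virt_sandbox_use_all_caps=on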
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:31:15 np0005543037.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:31:33 np0005543037.novalocal dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 03 00:31:33 np0005543037.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 00:31:33 np0005543037.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 03 00:31:33 np0005543037.novalocal systemd[1]: Reloading.
Dec 03 00:31:33 np0005543037.novalocal systemd-rc-local-generator[8912]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:31:33 np0005543037.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 00:31:34 np0005543037.novalocal sudo[8090]: pam_unix(sudo:session): session closed for user root
Dec 03 00:31:51 np0005543037.novalocal python3[17533]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-4747-1035-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:31:52 np0005543037.novalocal kernel: evm: overlay not supported
Dec 03 00:31:52 np0005543037.novalocal systemd[4297]: Starting D-Bus User Message Bus...
Dec 03 00:31:52 np0005543037.novalocal dbus-broker-launch[17947]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 03 00:31:52 np0005543037.novalocal dbus-broker-launch[17947]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 03 00:31:52 np0005543037.novalocal systemd[4297]: Started D-Bus User Message Bus.
Dec 03 00:31:52 np0005543037.novalocal dbus-broker-launch[17947]: Ready
Dec 03 00:31:52 np0005543037.novalocal systemd[4297]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 03 00:31:52 np0005543037.novalocal systemd[4297]: Created slice Slice /user.
Dec 03 00:31:52 np0005543037.novalocal systemd[4297]: podman-17869.scope: unit configures an IP firewall, but not running as root.
Dec 03 00:31:52 np0005543037.novalocal systemd[4297]: (This warning is only shown for the first unit using IP firewalling.)
Dec 03 00:31:52 np0005543037.novalocal systemd[4297]: Started podman-17869.scope.
Dec 03 00:31:53 np0005543037.novalocal systemd[4297]: Started podman-pause-50584a2c.scope.
Dec 03 00:31:53 np0005543037.novalocal sudo[18276]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftpuvljxfjzrlfmbvmlynprehogwajuj ; /usr/bin/python3'
Dec 03 00:31:53 np0005543037.novalocal sudo[18276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:31:53 np0005543037.novalocal python3[18286]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.150:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.150:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:31:53 np0005543037.novalocal python3[18286]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 03 00:31:53 np0005543037.novalocal sudo[18276]: pam_unix(sudo:session): session closed for user root
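The blockinfile task above appends a managed block to /etc/containers/registries.conf marking 38.102.83.150:5001 as an insecure (plain-HTTP) registry; the block content and markers below are taken from the logged parameters. The second command addresses the remote_tmp warning exactly as the warning itself suggests:

    # Shell rendering of the logged blockinfile edit.
    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.150:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF
    # Pre-create root's remote_tmp with the right mode to avoid the warning.
    install -d -m 0700 -o root -g root /root/.ansible/tmp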
Dec 03 00:31:54 np0005543037.novalocal sshd-session[8066]: Connection closed by 38.102.83.114 port 59974
Dec 03 00:31:54 np0005543037.novalocal sshd-session[8063]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:31:54 np0005543037.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 03 00:31:54 np0005543037.novalocal systemd[1]: session-4.scope: Consumed 59.881s CPU time.
Dec 03 00:31:54 np0005543037.novalocal systemd-logind[800]: Session 4 logged out. Waiting for processes to exit.
Dec 03 00:31:54 np0005543037.novalocal systemd-logind[800]: Removed session 4.
Dec 03 00:32:00 np0005543037.novalocal irqbalance[792]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 03 00:32:00 np0005543037.novalocal irqbalance[792]: IRQ 26 affinity is now unmanaged
Dec 03 00:32:11 np0005543037.novalocal sshd-session[24422]: Unable to negotiate with 38.102.83.18 port 57300: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 03 00:32:11 np0005543037.novalocal sshd-session[24419]: Unable to negotiate with 38.102.83.18 port 57310: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 03 00:32:11 np0005543037.novalocal sshd-session[24421]: Connection closed by 38.102.83.18 port 57284 [preauth]
Dec 03 00:32:11 np0005543037.novalocal sshd-session[24424]: Unable to negotiate with 38.102.83.18 port 57306: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 03 00:32:11 np0005543037.novalocal sshd-session[24425]: Connection closed by 38.102.83.18 port 57294 [preauth]
Dec 03 00:32:16 np0005543037.novalocal sshd-session[26135]: Accepted publickey for zuul from 38.102.83.114 port 55980 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 00:32:16 np0005543037.novalocal systemd-logind[800]: New session 5 of user zuul.
Dec 03 00:32:16 np0005543037.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 03 00:32:16 np0005543037.novalocal sshd-session[26135]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:32:17 np0005543037.novalocal python3[26256]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnvzaEJLazr+2A79QTuOg+l8N6rmlNU2AOwt8CCoTWkRPJADFYMQvyUy0SivCzoispoNUuXX55+VlwUbR0W2Pk= zuul@np0005543036.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:32:17 np0005543037.novalocal sudo[26427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujhlltvrzfhpxkqukpkagwdaygoprfoz ; /usr/bin/python3'
Dec 03 00:32:17 np0005543037.novalocal sudo[26427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:32:17 np0005543037.novalocal python3[26440]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnvzaEJLazr+2A79QTuOg+l8N6rmlNU2AOwt8CCoTWkRPJADFYMQvyUy0SivCzoispoNUuXX55+VlwUbR0W2Pk= zuul@np0005543036.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:32:17 np0005543037.novalocal sudo[26427]: pam_unix(sudo:session): session closed for user root
Dec 03 00:32:18 np0005543037.novalocal sudo[26799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obpgqxvlhhixptwrtadzmxdjlwxozuyj ; /usr/bin/python3'
Dec 03 00:32:18 np0005543037.novalocal sudo[26799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:32:18 np0005543037.novalocal python3[26811]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005543037.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 03 00:32:18 np0005543037.novalocal useradd[26877]: new group: name=cloud-admin, GID=1002
Dec 03 00:32:18 np0005543037.novalocal useradd[26877]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 03 00:32:18 np0005543037.novalocal sudo[26799]: pam_unix(sudo:session): session closed for user root
Dec 03 00:32:18 np0005543037.novalocal sudo[27012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvpdkbkhgxxzrywsqjynuohuiroseylm ; /usr/bin/python3'
Dec 03 00:32:18 np0005543037.novalocal sudo[27012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:32:19 np0005543037.novalocal python3[27021]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnvzaEJLazr+2A79QTuOg+l8N6rmlNU2AOwt8CCoTWkRPJADFYMQvyUy0SivCzoispoNUuXX55+VlwUbR0W2Pk= zuul@np0005543036.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 03 00:32:19 np0005543037.novalocal sudo[27012]: pam_unix(sudo:session): session closed for user root
Dec 03 00:32:19 np0005543037.novalocal sudo[27276]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaacvstohjogrbxrnyasswpgoghgrewg ; /usr/bin/python3'
Dec 03 00:32:19 np0005543037.novalocal sudo[27276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:32:19 np0005543037.novalocal python3[27289]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:32:19 np0005543037.novalocal sudo[27276]: pam_unix(sudo:session): session closed for user root
Dec 03 00:32:19 np0005543037.novalocal sudo[27527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpibhuqtvowtmecstiszlayhakvemjoi ; /usr/bin/python3'
Dec 03 00:32:19 np0005543037.novalocal sudo[27527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:32:20 np0005543037.novalocal python3[27537]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721939.198401-135-122839661404084/source _original_basename=tmp9pgmqr2u follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:32:20 np0005543037.novalocal sudo[27527]: pam_unix(sudo:session): session closed for user root
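The block above creates the cloud-admin user (UID/GID 1002), authorizes the zuul ECDSA key for it, and installs /etc/sudoers.d/cloud-admin with mode 0640; the sudoers content itself is masked in the log. A hedged reconstruction, where the NOPASSWD rule is an assumption and the public key is abbreviated:

    # Reconstruction of the logged cloud-admin setup; the sudoers rule is an
    # assumption (content masked in the log), and the key is abbreviated.
    useradd -m -s /bin/bash cloud-admin
    install -d -m 0700 -o cloud-admin -g cloud-admin /home/cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAA... zuul@np0005543036.novalocal' \
        >> /home/cloud-admin/.ssh/authorized_keys
    chown cloud-admin:cloud-admin /home/cloud-admin/.ssh/authorized_keys
    chmod 0600 /home/cloud-admin/.ssh/authorized_keys
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin    # validate the drop-in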
Dec 03 00:32:20 np0005543037.novalocal sudo[27848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzoehxihcavwnivfmsowgqrxfawedyol ; /usr/bin/python3'
Dec 03 00:32:20 np0005543037.novalocal sudo[27848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:32:21 np0005543037.novalocal python3[27859]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 03 00:32:21 np0005543037.novalocal systemd[1]: Starting Hostname Service...
Dec 03 00:32:21 np0005543037.novalocal systemd[1]: Started Hostname Service.
Dec 03 00:32:21 np0005543037.novalocal systemd-hostnamed[27958]: Changed pretty hostname to 'compute-0'
Dec 03 00:32:21 compute-0 systemd-hostnamed[27958]: Hostname set to <compute-0> (static)
Dec 03 00:32:21 compute-0 NetworkManager[7177]: <info>  [1764721941.2338] hostname: static hostname changed from "np0005543037.novalocal" to "compute-0"
Dec 03 00:32:21 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 03 00:32:21 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 03 00:32:21 compute-0 sudo[27848]: pam_unix(sudo:session): session closed for user root
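The hostname module with use=systemd asks systemd-hostnamed over D-Bus to change the name, which is why the service starts on demand above and every subsequent line carries the new compute-0 prefix. Roughly equivalent to:

    hostnamectl set-hostname compute-0    # sets the static (and pretty) hostname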
Dec 03 00:32:21 compute-0 sshd-session[26190]: Connection closed by 38.102.83.114 port 55980
Dec 03 00:32:21 compute-0 sshd-session[26135]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:32:21 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Dec 03 00:32:21 compute-0 systemd[1]: session-5.scope: Consumed 2.716s CPU time.
Dec 03 00:32:21 compute-0 systemd-logind[800]: Session 5 logged out. Waiting for processes to exit.
Dec 03 00:32:21 compute-0 systemd-logind[800]: Removed session 5.
Dec 03 00:32:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 00:32:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 00:32:27 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 6.686s CPU time.
Dec 03 00:32:27 compute-0 systemd[1]: run-r8faee6dcbb17464395551d111f73ade5.service: Deactivated successfully.
Dec 03 00:32:31 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 03 00:32:51 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 03 00:34:26 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 03 00:34:26 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 03 00:34:26 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 03 00:34:26 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 03 00:37:48 compute-0 sshd-session[29919]: Accepted publickey for zuul from 38.102.83.18 port 45266 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 00:37:48 compute-0 systemd-logind[800]: New session 6 of user zuul.
Dec 03 00:37:48 compute-0 systemd[1]: Started Session 6 of User zuul.
Dec 03 00:37:48 compute-0 sshd-session[29919]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:37:49 compute-0 python3[29995]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:37:50 compute-0 sudo[30109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwltlixhvezhvxpqkqludnheyjyxolf ; /usr/bin/python3'
Dec 03 00:37:50 compute-0 sudo[30109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:51 compute-0 python3[30111]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:37:51 compute-0 sudo[30109]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:51 compute-0 sudo[30182]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryniyozkqmwckedhfgqwkvayvnqufcsy ; /usr/bin/python3'
Dec 03 00:37:51 compute-0 sudo[30182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:51 compute-0 python3[30184]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:37:51 compute-0 sudo[30182]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:51 compute-0 sudo[30208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duwfyrhjwdvazickfktfmxszlzmvzobl ; /usr/bin/python3'
Dec 03 00:37:51 compute-0 sudo[30208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:51 compute-0 python3[30210]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:37:51 compute-0 sudo[30208]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:52 compute-0 sudo[30281]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdohxyhegxgyirmzarhjhmckgdvfbbu ; /usr/bin/python3'
Dec 03 00:37:52 compute-0 sudo[30281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:52 compute-0 python3[30283]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:37:52 compute-0 sudo[30281]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:52 compute-0 sudo[30307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnhbiztxvijoahevdfijyoaotmdmfsa ; /usr/bin/python3'
Dec 03 00:37:52 compute-0 sudo[30307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:52 compute-0 python3[30309]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:37:52 compute-0 sudo[30307]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:52 compute-0 sudo[30380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epwzdyusxwdfytbtfpyiawuvfyhvlqxq ; /usr/bin/python3'
Dec 03 00:37:52 compute-0 sudo[30380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:52 compute-0 python3[30382]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:37:52 compute-0 sudo[30380]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:53 compute-0 sudo[30406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuleflfeltnnyvjrvfbpmmgvhgccgacb ; /usr/bin/python3'
Dec 03 00:37:53 compute-0 sudo[30406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:53 compute-0 python3[30408]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:37:53 compute-0 sudo[30406]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:53 compute-0 sudo[30479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjsydpgqowaczcgufmhwainbjlklunng ; /usr/bin/python3'
Dec 03 00:37:53 compute-0 sudo[30479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:53 compute-0 python3[30481]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:37:53 compute-0 sudo[30479]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:53 compute-0 sudo[30505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkybuauclepbyuanwlmpcrmtabldoglr ; /usr/bin/python3'
Dec 03 00:37:53 compute-0 sudo[30505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:53 compute-0 python3[30507]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:37:53 compute-0 sudo[30505]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:54 compute-0 sudo[30578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqsqjjlnzlkckixstsfiyqbbtqwspvx ; /usr/bin/python3'
Dec 03 00:37:54 compute-0 sudo[30578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:54 compute-0 python3[30580]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:37:54 compute-0 sudo[30578]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:54 compute-0 sudo[30604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iookwkidtdqbvsxjvqojlhqvliinrrzi ; /usr/bin/python3'
Dec 03 00:37:54 compute-0 sudo[30604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:54 compute-0 python3[30606]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:37:54 compute-0 sudo[30604]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:55 compute-0 sudo[30677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thlrojwtqeuwqvxcwetqpmsfzbcmgakn ; /usr/bin/python3'
Dec 03 00:37:55 compute-0 sudo[30677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:55 compute-0 python3[30679]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:37:55 compute-0 sudo[30677]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:55 compute-0 sudo[30703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkamkbdycesmbelkgbdgytxqqynzlxyv ; /usr/bin/python3'
Dec 03 00:37:55 compute-0 sudo[30703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:55 compute-0 python3[30705]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 00:37:55 compute-0 sudo[30703]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:55 compute-0 sudo[30776]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvpbwlcsuylegkiamjlecawcptmowjjy ; /usr/bin/python3'
Dec 03 00:37:55 compute-0 sudo[30776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:37:55 compute-0 python3[30778]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:37:55 compute-0 sudo[30776]: pam_unix(sudo:session): session closed for user root
Dec 03 00:37:58 compute-0 sshd-session[30803]: Connection closed by 192.168.122.11 port 40292 [preauth]
Dec 03 00:37:58 compute-0 sshd-session[30804]: Connection closed by 192.168.122.11 port 40296 [preauth]
Dec 03 00:37:58 compute-0 sshd-session[30807]: Unable to negotiate with 192.168.122.11 port 40300: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 03 00:37:58 compute-0 sshd-session[30806]: Unable to negotiate with 192.168.122.11 port 40306: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 03 00:37:58 compute-0 sshd-session[30805]: Unable to negotiate with 192.168.122.11 port 40310: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
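These preauth failures (here and at 00:32:11) come from clients offering only ed25519 and security-key host-key algorithms, for which this sshd has no matching host key, so negotiation stops before authentication. For scanner traffic no action is needed; if ed25519 host keys were actually wanted, one sketch (an assumption, not taken from this log) would be:

    ssh-keygen -A             # generate any missing host-key types under /etc/ssh
    systemctl restart sshd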
Dec 03 00:38:23 compute-0 sshd-session[30813]: Connection closed by authenticating user root 80.94.95.115 port 15214 [preauth]
Dec 03 00:40:15 compute-0 sshd-session[30816]: Connection closed by authenticating user root 143.198.96.196 port 49434 [preauth]
Dec 03 00:41:13 compute-0 python3[30841]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:46:13 compute-0 sshd-session[29922]: Received disconnect from 38.102.83.18 port 45266:11: disconnected by user
Dec 03 00:46:13 compute-0 sshd-session[29922]: Disconnected from user zuul 38.102.83.18 port 45266
Dec 03 00:46:13 compute-0 sshd-session[29919]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:46:13 compute-0 systemd-logind[800]: Session 6 logged out. Waiting for processes to exit.
Dec 03 00:46:13 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 03 00:46:13 compute-0 systemd[1]: session-6.scope: Consumed 5.778s CPU time.
Dec 03 00:46:13 compute-0 systemd-logind[800]: Removed session 6.
Dec 03 00:49:22 compute-0 sshd-session[30848]: Invalid user admin from 80.94.95.116 port 57750
Dec 03 00:49:22 compute-0 sshd-session[30848]: Connection closed by invalid user admin 80.94.95.116 port 57750 [preauth]
Dec 03 00:54:40 compute-0 sshd-session[30853]: Accepted publickey for zuul from 192.168.122.30 port 37236 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 00:54:40 compute-0 systemd-logind[800]: New session 7 of user zuul.
Dec 03 00:54:40 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 03 00:54:40 compute-0 sshd-session[30853]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:54:41 compute-0 python3.9[31006]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:54:44 compute-0 sudo[31186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqhmhtqymglxtakhkdxdmadpyxzsjivu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723283.4599535-32-260607939766051/AnsiballZ_command.py'
Dec 03 00:54:44 compute-0 sudo[31186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:54:44 compute-0 python3.9[31188]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:54:51 compute-0 sudo[31186]: pam_unix(sudo:session): session closed for user root
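The multi-line _raw_params payload above (00:54:44) is a shell task that bootstraps repo-setup from GitHub and enables the antelope podified repos. Reassembled as the YAML it was presumably written as, with the script body verbatim from the log and only the task name assumed:

    - name: Configure antelope podified repos via repo-setup   # assumed name
      become: true
      ansible.builtin.shell: |
        set -euxo pipefail
        pushd /var/tmp
        curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
        pushd repo-setup-main
        python3 -m venv ./venv
        PBR_VERSION=0.0.0 ./venv/bin/pip install ./
        ./venv/bin/repo-setup current-podified -b antelope
        popd
        rm -rf repo-setup-main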
Dec 03 00:54:51 compute-0 sshd-session[30856]: Connection closed by 192.168.122.30 port 37236
Dec 03 00:54:51 compute-0 sshd-session[30853]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:54:51 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 03 00:54:51 compute-0 systemd[1]: session-7.scope: Consumed 8.215s CPU time.
Dec 03 00:54:51 compute-0 systemd-logind[800]: Session 7 logged out. Waiting for processes to exit.
Dec 03 00:54:51 compute-0 systemd-logind[800]: Removed session 7.
Dec 03 00:55:07 compute-0 sshd-session[31245]: Accepted publickey for zuul from 192.168.122.30 port 53224 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 00:55:07 compute-0 systemd-logind[800]: New session 8 of user zuul.
Dec 03 00:55:07 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 03 00:55:07 compute-0 sshd-session[31245]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:55:08 compute-0 python3.9[31398]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 03 00:55:10 compute-0 python3.9[31572]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:55:11 compute-0 sudo[31722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lukifphkcbahnxczmgcbmxrlobxzfslh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723310.6381822-45-11505938925108/AnsiballZ_command.py'
Dec 03 00:55:11 compute-0 sudo[31722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:11 compute-0 python3.9[31724]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:55:11 compute-0 sudo[31722]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:13 compute-0 sudo[31875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsyodtrwmzohlxecwjgfllbfrstynxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723312.555318-57-168339814330966/AnsiballZ_stat.py'
Dec 03 00:55:13 compute-0 sudo[31875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:13 compute-0 python3.9[31877]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 00:55:13 compute-0 sudo[31875]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:14 compute-0 sudo[32027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpriseoiqknpsostyvaoejckeoatlcfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723313.5460768-65-126162180095485/AnsiballZ_file.py'
Dec 03 00:55:14 compute-0 sudo[32027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:14 compute-0 python3.9[32029]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:55:14 compute-0 sudo[32027]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:14 compute-0 sudo[32179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quqhsrqhswmesyudzxiqslrhmchruhdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723314.4881098-73-53510234384744/AnsiballZ_stat.py'
Dec 03 00:55:14 compute-0 sudo[32179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:15 compute-0 python3.9[32181]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 00:55:15 compute-0 sudo[32179]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:15 compute-0 sudo[32302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdlrojkzwkyibboncylixedobfnsiplz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723314.4881098-73-53510234384744/AnsiballZ_copy.py'
Dec 03 00:55:15 compute-0 sudo[32302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:15 compute-0 python3.9[32304]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723314.4881098-73-53510234384744/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:55:15 compute-0 sudo[32302]: pam_unix(sudo:session): session closed for user root
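The file/copy pair above installs an executable bootc.fact under /etc/ansible/facts.d. Ansible runs such *.fact executables during fact gathering (every setup call in this log passes fact_path=/etc/ansible/facts.d) and exposes the output as ansible_local facts; the setup re-run at 00:55:16 picks the new fact up. A sketch from the logged parameters; the fact's contents are not logged:

    - name: Ensure the local facts directory exists
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "755"
    - name: Install the bootc detection fact       # payload is NOT_LOGGING_PARAMETER in the log
      ansible.builtin.copy:
        src: bootc.fact
        dest: /etc/ansible/facts.d/bootc.fact
        mode: "755"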
Dec 03 00:55:16 compute-0 sudo[32454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zidwdiqzqwzhtztrujbxixeqjcitgyif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723316.201407-88-82419973336639/AnsiballZ_setup.py'
Dec 03 00:55:16 compute-0 sudo[32454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:16 compute-0 python3.9[32456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:55:17 compute-0 sudo[32454]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:17 compute-0 sudo[32610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avvfdjrodwhautrehvxdltuojtoiioyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723317.381204-96-176494318987088/AnsiballZ_file.py'
Dec 03 00:55:17 compute-0 sudo[32610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:17 compute-0 python3.9[32612]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 00:55:17 compute-0 sudo[32610]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:18 compute-0 sudo[32762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcakktwzrdbcikqbfunemuuvfuxthcpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723318.2485945-105-278382479531247/AnsiballZ_file.py'
Dec 03 00:55:18 compute-0 sudo[32762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:18 compute-0 python3.9[32764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 00:55:18 compute-0 sudo[32762]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:19 compute-0 python3.9[32914]: ansible-ansible.builtin.service_facts Invoked
Dec 03 00:55:25 compute-0 python3.9[33167]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
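The lineinfile call above targets /proc/cmdline, which is not writable; with create=False it can only report whether cloud-init=disabled is already on the kernel command line, so it is presumably a probe (likely run with check_mode) rather than an edit. A sketch under that assumption; the register name is hypothetical:

    - name: Probe kernel command line for cloud-init=disabled   # assumed intent
      ansible.builtin.lineinfile:
        path: /proc/cmdline
        line: cloud-init=disabled
        state: present
      check_mode: true          # assumption: /proc/cmdline is read-only
      register: cloud_init_cmdline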
Dec 03 00:55:26 compute-0 python3.9[33317]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:55:27 compute-0 python3.9[33471]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:55:28 compute-0 sudo[33627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysndvyxttemnajnpsjkeomayvqsnvuju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723328.2819445-153-113742376831144/AnsiballZ_setup.py'
Dec 03 00:55:28 compute-0 sudo[33627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:28 compute-0 python3.9[33629]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 00:55:29 compute-0 sudo[33627]: pam_unix(sudo:session): session closed for user root
Dec 03 00:55:29 compute-0 sudo[33711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ponjxuobfjlihapbubnjvqunqffvkpvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723328.2819445-153-113742376831144/AnsiballZ_dnf.py'
Dec 03 00:55:29 compute-0 sudo[33711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:55:29 compute-0 python3.9[33713]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
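The dnf invocation above pulls in the EDPM base tool set. As a task, with the package list verbatim from the log and only the name assumed:

    - name: Install EDPM base packages             # assumed name
      ansible.builtin.dnf:
        name:
          - driverctl
          - lvm2
          - crudini
          - jq
          - nftables
          - NetworkManager
          - openstack-selinux
          - python3-libselinux
          - python3-pyyaml
          - rsync
          - tmpwatch
          - sysstat
          - iproute-tc
          - ksmtuned
          - systemd-container
          - crypto-policies-scripts
          - grubby
          - sos
        state: present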
Dec 03 00:56:18 compute-0 systemd[1]: Reloading.
Dec 03 00:56:18 compute-0 systemd-rc-local-generator[33913]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:56:18 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 03 00:56:18 compute-0 systemd[1]: Reloading.
Dec 03 00:56:19 compute-0 systemd-rc-local-generator[33951]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:56:19 compute-0 systemd[1]: Starting dnf makecache...
Dec 03 00:56:19 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 03 00:56:19 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 03 00:56:19 compute-0 systemd[1]: Reloading.
Dec 03 00:56:19 compute-0 systemd-rc-local-generator[33991]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:56:19 compute-0 dnf[33961]: Failed determining last makecache time.
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-barbican-42b4c41831408a8e323 160 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 192 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-cinder-1c00d6490d88e436f26ef 195 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-python-stevedore-c4acc5639fd2329372142 141 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-python-cloudkitty-tests-tempest-2c80f8 163 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 190 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 178 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-python-designate-tests-tempest-347fdbc 189 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-glance-1fd12c29b339f30fe823e 187 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 147 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-manila-3c01b7181572c95dac462 148 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-python-whitebox-neutron-tests-tempest- 194 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-octavia-ba397f07a7331190208c 176 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-watcher-c014f81a8647287f6dcc 180 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-ansible-config_template-5ccaa22121a7ff 195 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 186 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-swift-dc98a8463506ac520c469a 190 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-python-tempestconf-8515371b7cceebd4282 179 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 03 00:56:19 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 03 00:56:19 compute-0 dnf[33961]: delorean-openstack-heat-ui-013accbfd179753bc3f0 155 kB/s | 3.0 kB     00:00
Dec 03 00:56:19 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 03 00:56:19 compute-0 dnf[33961]: CentOS Stream 9 - BaseOS                         59 kB/s | 5.9 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: CentOS Stream 9 - AppStream                      59 kB/s | 6.0 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: CentOS Stream 9 - CRB                            63 kB/s | 5.8 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: CentOS Stream 9 - Extras packages                88 kB/s | 8.3 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: dlrn-antelope-testing                           113 kB/s | 3.0 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: dlrn-antelope-build-deps                        112 kB/s | 3.0 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: centos9-rabbitmq                                 58 kB/s | 3.0 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: centos9-storage                                  89 kB/s | 3.0 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: centos9-opstools                                 91 kB/s | 3.0 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: NFV SIG OpenvSwitch                              58 kB/s | 3.0 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: repo-setup-centos-appstream                     121 kB/s | 4.4 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: repo-setup-centos-baseos                        158 kB/s | 3.9 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: repo-setup-centos-highavailability              175 kB/s | 3.9 kB     00:00
Dec 03 00:56:20 compute-0 dnf[33961]: repo-setup-centos-powertools                    185 kB/s | 4.3 kB     00:00
Dec 03 00:56:21 compute-0 dnf[33961]: Extra Packages for Enterprise Linux 9 - x86_64  107 kB/s |  33 kB     00:00
Dec 03 00:56:21 compute-0 dnf[33961]: Metadata cache created.
Dec 03 00:56:21 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 03 00:56:21 compute-0 systemd[1]: Finished dnf makecache.
Dec 03 00:56:21 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.859s CPU time.
Dec 03 00:57:24 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Dec 03 00:57:24 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:57:24 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 03 00:57:24 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:57:24 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:57:24 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:57:24 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:57:24 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:57:24 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 03 00:57:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 00:57:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 00:57:24 compute-0 systemd[1]: Reloading.
Dec 03 00:57:24 compute-0 systemd-rc-local-generator[34351]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:57:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 00:57:25 compute-0 sudo[33711]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:26 compute-0 sudo[35254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zijhnaibazncdmncfcgbxtewgfzeiliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723445.7169864-165-49467340302814/AnsiballZ_command.py'
Dec 03 00:57:26 compute-0 sudo[35254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 00:57:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 00:57:26 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.595s CPU time.
Dec 03 00:57:26 compute-0 systemd[1]: run-r08757739ca544cf7a70eef4c3fbf367d.service: Deactivated successfully.
Dec 03 00:57:26 compute-0 python3.9[35267]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:57:27 compute-0 sudo[35254]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:28 compute-0 sudo[35547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpiyluavpvprnkgxstbcrwhyfhuxvwnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723447.5112607-173-20834569528734/AnsiballZ_selinux.py'
Dec 03 00:57:28 compute-0 sudo[35547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:28 compute-0 python3.9[35549]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 03 00:57:28 compute-0 sudo[35547]: pam_unix(sudo:session): session closed for user root
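The ansible.posix.selinux call above pins the node to the targeted policy in enforcing mode; the SELinux policy-reload kernel messages at 00:57:24 come from openstack-selinux installing its policy modules just before. From the logged parameters:

    - name: Keep SELinux enforcing with the targeted policy
      ansible.posix.selinux:
        policy: targeted
        state: enforcing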
Dec 03 00:57:29 compute-0 sudo[35699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbncwucngumnzhffqyycmmqegmhaclsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723448.9408922-184-262989722515857/AnsiballZ_command.py'
Dec 03 00:57:29 compute-0 sudo[35699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:29 compute-0 python3.9[35701]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 03 00:57:30 compute-0 sudo[35699]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:31 compute-0 sudo[35852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iygggbiznbijadtncbgjpbzfjmipraac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723450.8919237-192-264228161041869/AnsiballZ_file.py'
Dec 03 00:57:31 compute-0 sudo[35852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:31 compute-0 python3.9[35854]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:57:31 compute-0 sudo[35852]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:32 compute-0 sudo[36004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quzdweqkhhuljarcjxxbnkqjxubtgssn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723451.8181875-200-180526050490370/AnsiballZ_mount.py'
Dec 03 00:57:32 compute-0 sudo[36004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:32 compute-0 python3.9[36006]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 03 00:57:32 compute-0 sudo[36004]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:33 compute-0 sudo[36156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdmtefitecfbjpxgwqtiahtppddarrap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723453.43973-228-227846877241378/AnsiballZ_file.py'
Dec 03 00:57:33 compute-0 sudo[36156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:34 compute-0 python3.9[36158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 00:57:34 compute-0 sudo[36156]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:34 compute-0 sudo[36308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwztjjrpzcefqwkhzxmeowgevfhnmoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723454.2693102-236-69798250502799/AnsiballZ_stat.py'
Dec 03 00:57:34 compute-0 sudo[36308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:34 compute-0 python3.9[36310]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 00:57:34 compute-0 sudo[36308]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:35 compute-0 sudo[36431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izchxnlrpgaupegvoaqbuqtbcbzezzpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723454.2693102-236-69798250502799/AnsiballZ_copy.py'
Dec 03 00:57:35 compute-0 sudo[36431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:35 compute-0 python3.9[36433]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723454.2693102-236-69798250502799/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:57:35 compute-0 sudo[36431]: pam_unix(sudo:session): session closed for user root
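The copy above drops a CA bundle into the trust anchors directory; the store is only rebuilt later, by the /usr/bin/update-ca-trust command task at 00:58:16. The pair, reconstructed from the logged parameters:

    - name: Install the deployment CA bundle
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: /etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem
        owner: root
        group: root
        mode: "0644"
    - name: Rebuild the system trust store
      ansible.builtin.command: /usr/bin/update-ca-trust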
Dec 03 00:57:36 compute-0 sudo[36583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmlhjynlzxujrayjtmekjfmbwglzicnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723456.4532733-260-145504205455118/AnsiballZ_stat.py'
Dec 03 00:57:36 compute-0 sudo[36583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:38 compute-0 python3.9[36585]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 00:57:38 compute-0 sudo[36583]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:39 compute-0 sudo[36735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jybewjnhieizogjoracuwyvborxvuuei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723458.8900208-268-214555483065801/AnsiballZ_command.py'
Dec 03 00:57:39 compute-0 sudo[36735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:40 compute-0 python3.9[36737]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:57:40 compute-0 sudo[36735]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:41 compute-0 sudo[36889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slqyxqjeribccfnjpadziczhabnjqfuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723460.8101332-276-159359532611598/AnsiballZ_file.py'
Dec 03 00:57:41 compute-0 sudo[36889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:41 compute-0 python3.9[36891]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:57:41 compute-0 sudo[36889]: pam_unix(sudo:session): session closed for user root
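The stat/vgimportdevices/touch sequence above establishes the LVM devices file: import whatever volume groups are visible, then touch /etc/lvm/devices/system.devices so the file exists (restricting LVM device scanning) even on hosts with no VGs. A sketch, with that rationale inferred:

    - name: Populate the LVM devices file from visible VGs
      ansible.builtin.command: /usr/sbin/vgimportdevices --all
    - name: Ensure the devices file exists even with no VGs    # inferred rationale
      ansible.builtin.file:
        path: /etc/lvm/devices/system.devices
        state: touch
        owner: root
        group: root
        mode: "0600"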
Dec 03 00:57:42 compute-0 sudo[37041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcnpfpaqtqdxqroshiubzskjcxsadyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723461.8672864-287-247219588207510/AnsiballZ_getent.py'
Dec 03 00:57:42 compute-0 sudo[37041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:42 compute-0 python3.9[37043]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 03 00:57:42 compute-0 sudo[37041]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:42 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 00:57:43 compute-0 sudo[37195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdjjluiiefewkohkxxsbpmxhggoltoba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723462.8232112-295-108603533677184/AnsiballZ_group.py'
Dec 03 00:57:43 compute-0 sudo[37195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:43 compute-0 python3.9[37197]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 03 00:57:43 compute-0 groupadd[37198]: group added to /etc/group: name=qemu, GID=107
Dec 03 00:57:43 compute-0 groupadd[37198]: group added to /etc/gshadow: name=qemu
Dec 03 00:57:43 compute-0 groupadd[37198]: new group: name=qemu, GID=107
Dec 03 00:57:43 compute-0 sudo[37195]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:44 compute-0 sudo[37353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eagmlygfqoauzgvxlfngpfyegseqttoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723463.8846297-303-258423988358045/AnsiballZ_user.py'
Dec 03 00:57:44 compute-0 sudo[37353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:44 compute-0 python3.9[37355]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 03 00:57:44 compute-0 useradd[37357]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 03 00:57:44 compute-0 sudo[37353]: pam_unix(sudo:session): session closed for user root
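getent → group → user is the usual pin-the-IDs pattern: the qemu account is created with fixed UID/GID 107, presumably so file ownership stays stable between host and containers; the same pattern repeats below for openvswitch with UID/GID 42476. From the logged parameters:

    - name: Ensure qemu group with fixed GID
      ansible.builtin.group:
        name: qemu
        gid: 107
        state: present
    - name: Ensure qemu user with fixed UID
      ansible.builtin.user:
        name: qemu
        uid: 107
        group: qemu
        comment: qemu user
        shell: /sbin/nologin
        state: present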
Dec 03 00:57:45 compute-0 sudo[37513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkkkxsxkwmiwqmjmviyihzlsyrdmywcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723465.1759858-311-254793102230162/AnsiballZ_getent.py'
Dec 03 00:57:45 compute-0 sudo[37513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:45 compute-0 python3.9[37515]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 03 00:57:45 compute-0 sudo[37513]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:46 compute-0 sudo[37666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouvordwwsphzdbsdtdzahbcdmcstybsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723466.0752928-319-196273275593997/AnsiballZ_group.py'
Dec 03 00:57:46 compute-0 sudo[37666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:46 compute-0 python3.9[37668]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 03 00:57:46 compute-0 groupadd[37669]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 03 00:57:46 compute-0 groupadd[37669]: group added to /etc/gshadow: name=hugetlbfs
Dec 03 00:57:46 compute-0 groupadd[37669]: new group: name=hugetlbfs, GID=42477
Dec 03 00:57:46 compute-0 sudo[37666]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:47 compute-0 sudo[37824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdavpzmkqhofrfocdhczuoqehsjzjdoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723467.049618-328-269936364817038/AnsiballZ_file.py'
Dec 03 00:57:47 compute-0 sudo[37824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:47 compute-0 python3.9[37826]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 03 00:57:47 compute-0 sudo[37824]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:48 compute-0 sudo[37976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcikupkrhfmxjzuqzonpqadwezkyrrzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723468.1081905-339-242133958594118/AnsiballZ_dnf.py'
Dec 03 00:57:48 compute-0 sudo[37976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:48 compute-0 python3.9[37978]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 00:57:50 compute-0 sudo[37976]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:51 compute-0 sudo[38129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqlxocotlmcxdjrkfuspzdjinnoemral ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723470.6962016-347-83988730848058/AnsiballZ_file.py'
Dec 03 00:57:51 compute-0 sudo[38129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:51 compute-0 python3.9[38131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 00:57:51 compute-0 sudo[38129]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:51 compute-0 sudo[38281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvfubowdaoifxqcpxhhnyfgynotnmboj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723471.5245261-355-177500035114152/AnsiballZ_stat.py'
Dec 03 00:57:51 compute-0 sudo[38281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:52 compute-0 python3.9[38283]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 00:57:52 compute-0 sudo[38281]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:52 compute-0 sudo[38404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spigyionblyuyksdofpjqcztdjmfmweb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723471.5245261-355-177500035114152/AnsiballZ_copy.py'
Dec 03 00:57:52 compute-0 sudo[38404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:52 compute-0 python3.9[38406]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723471.5245261-355-177500035114152/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 00:57:52 compute-0 sudo[38404]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:53 compute-0 sudo[38556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nausjuoebfsqiwgfkrzydavakmtcqmcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723473.0438452-370-90522545501354/AnsiballZ_systemd.py'
Dec 03 00:57:53 compute-0 sudo[38556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:54 compute-0 python3.9[38558]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 00:57:54 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 03 00:57:54 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 03 00:57:54 compute-0 kernel: Bridge firewalling registered
Dec 03 00:57:54 compute-0 systemd-modules-load[38562]: Inserted module 'br_netfilter'
Dec 03 00:57:54 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 03 00:57:54 compute-0 sudo[38556]: pam_unix(sudo:session): session closed for user root
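The contents of /etc/modules-load.d/99-edpm.conf are not logged (content=NOT_LOGGING_PARAMETER), but restarting systemd-modules-load.service immediately inserts br_netfilter, so the file evidently lists at least that module. An illustrative reconstruction:

    - name: Install EDPM modules-load configuration
      ansible.builtin.copy:
        dest: /etc/modules-load.d/99-edpm.conf
        content: "br_netfilter\n"        # illustrative; only br_netfilter is confirmed by the log
        owner: root
        group: root
        mode: "0644"
    - name: Load the configured modules now
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted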
Dec 03 00:57:54 compute-0 sudo[38715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enppkgnvljpbprhscfpvsbqgtvpdfcvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723474.5247805-378-245584183881638/AnsiballZ_stat.py'
Dec 03 00:57:54 compute-0 sudo[38715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:55 compute-0 python3.9[38717]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 00:57:55 compute-0 sudo[38715]: pam_unix(sudo:session): session closed for user root
Dec 03 00:57:55 compute-0 sudo[38838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfilnukqeprgrviwddvpopqwzzfytwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723474.5247805-378-245584183881638/AnsiballZ_copy.py'
Dec 03 00:57:55 compute-0 sudo[38838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:55 compute-0 python3.9[38840]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723474.5247805-378-245584183881638/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 00:57:55 compute-0 sudo[38838]: pam_unix(sudo:session): session closed for user root
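As with the modules file, only the checksum of the sysctl payload is logged; _original_basename=edpm-sysctl.conf.j2 shows it was rendered from a template. The deploy half, with the systemd-sysctl restart from 00:58:19 folded in:

    - name: Install EDPM sysctl configuration      # payload not logged
      ansible.builtin.template:
        src: edpm-sysctl.conf.j2
        dest: /etc/sysctl.d/99-edpm.conf
        owner: root
        group: root
        mode: "0644"
    - name: Apply kernel variables
      ansible.builtin.systemd:
        name: systemd-sysctl.service
        state: restarted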
Dec 03 00:57:56 compute-0 sudo[38990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cisyvdrkbqztxqgdhwgksvwmmceynskj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723476.2598114-396-245644035376948/AnsiballZ_dnf.py'
Dec 03 00:57:56 compute-0 sudo[38990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:57:56 compute-0 python3.9[38992]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 00:57:59 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 03 00:57:59 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 03 00:58:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 00:58:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 00:58:00 compute-0 systemd[1]: Reloading.
Dec 03 00:58:00 compute-0 systemd-rc-local-generator[39056]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:58:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 00:58:00 compute-0 sudo[38990]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:02 compute-0 python3.9[40334]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 00:58:03 compute-0 python3.9[41176]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 03 00:58:03 compute-0 python3.9[41859]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 00:58:04 compute-0 sudo[42738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igykjrdhsxsnilijojfxkluvcflvbehq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723484.1900594-435-204881063608036/AnsiballZ_command.py'
Dec 03 00:58:04 compute-0 sudo[42738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:04 compute-0 python3.9[42763]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:58:04 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 03 00:58:05 compute-0 systemd[1]: Starting Authorization Manager...
Dec 03 00:58:05 compute-0 polkitd[43396]: Started polkitd version 0.117
Dec 03 00:58:05 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 03 00:58:05 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 00:58:05 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 00:58:05 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.848s CPU time.
Dec 03 00:58:05 compute-0 systemd[1]: run-r362cddee764847aba208b94b00958fca.service: Deactivated successfully.
Dec 03 00:58:05 compute-0 polkitd[43396]: Loading rules from directory /etc/polkit-1/rules.d
Dec 03 00:58:05 compute-0 polkitd[43396]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 03 00:58:05 compute-0 polkitd[43396]: Finished loading, compiling and executing 2 rules
Dec 03 00:58:05 compute-0 polkitd[43396]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 03 00:58:05 compute-0 systemd[1]: Started Authorization Manager.
Dec 03 00:58:05 compute-0 sudo[42738]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:06 compute-0 sudo[43565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oztgylglkwaubqvnneljnnahongtoiki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723485.9733064-444-271287744512086/AnsiballZ_systemd.py'
Dec 03 00:58:06 compute-0 sudo[43565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:06 compute-0 python3.9[43567]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 00:58:06 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 03 00:58:06 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 03 00:58:06 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 03 00:58:06 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 03 00:58:07 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 03 00:58:07 compute-0 sudo[43565]: pam_unix(sudo:session): session closed for user root
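The tuned sequence above: install tuned and tuned-profiles-cpu-partitioning, read /etc/tuned/active_profile, switch to throughput-performance via tuned-adm, then enable and restart the daemon so the profile persists. The switch-and-enable half as tasks:

    - name: Activate the throughput-performance profile
      ansible.builtin.command: /usr/sbin/tuned-adm profile throughput-performance
    - name: Enable and restart tuned
      ansible.builtin.systemd:
        name: tuned
        enabled: true
        state: restarted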
Dec 03 00:58:08 compute-0 python3.9[43729]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 03 00:58:10 compute-0 sudo[43879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjhfbyyiiciwgnyasfckageunkekpcuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723490.142044-501-65933718457708/AnsiballZ_systemd.py'
Dec 03 00:58:10 compute-0 sudo[43879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:10 compute-0 python3.9[43881]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 00:58:10 compute-0 systemd[1]: Reloading.
Dec 03 00:58:11 compute-0 systemd-rc-local-generator[43910]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:58:11 compute-0 sudo[43879]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:11 compute-0 sudo[44069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrxljxxuksvnclwobmloebwifobrxavj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723491.4268456-501-76424284606640/AnsiballZ_systemd.py'
Dec 03 00:58:11 compute-0 sudo[44069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:12 compute-0 python3.9[44071]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 00:58:13 compute-0 systemd[1]: Reloading.
Dec 03 00:58:13 compute-0 systemd-rc-local-generator[44101]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:58:13 compute-0 sudo[44069]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:14 compute-0 sudo[44258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltgafqwzcnliswbplpjqwictxxnxhlkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723493.7523766-517-4409747980371/AnsiballZ_command.py'
Dec 03 00:58:14 compute-0 sudo[44258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:14 compute-0 python3.9[44260]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:58:14 compute-0 sudo[44258]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:14 compute-0 sudo[44411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzixhewrimxqickxugbqzmqvlvicqjzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723494.6249642-525-183818763017513/AnsiballZ_command.py'
Dec 03 00:58:14 compute-0 sudo[44411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:15 compute-0 python3.9[44413]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:58:15 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 03 00:58:15 compute-0 sudo[44411]: pam_unix(sudo:session): session closed for user root
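Pulling the swap work together: a 1 GiB file was zero-filled at 00:57:29 (idempotent via creates=/swap), locked to 0600, given a persistent fstab entry, and is formatted and enabled here. As one sequence, from the logged parameters:

    - name: Create a 1 GiB swap file
      ansible.builtin.command:
        cmd: dd if=/dev/zero of=/swap count=1024 bs=1M
        creates: /swap
    - name: Restrict swap file permissions
      ansible.builtin.file:
        path: /swap
        owner: root
        group: root
        mode: "0600"
    - name: Persist the fstab entry without mounting
      ansible.posix.mount:
        src: /swap
        path: none
        fstype: swap
        opts: sw
        state: present
    - name: Format the swap file                   # not idempotent as logged
      ansible.builtin.command: mkswap /swap
    - name: Enable swap
      ansible.builtin.command: swapon /swap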
Dec 03 00:58:15 compute-0 sudo[44564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafturnxrhxdozctmgnqdszwokhbjehk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723495.473355-533-29608639408569/AnsiballZ_command.py'
Dec 03 00:58:15 compute-0 sudo[44564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:16 compute-0 python3.9[44566]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:58:17 compute-0 sudo[44564]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:18 compute-0 sudo[44726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nehvnralleqesedwsyuksvjvvadiwzer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723497.7861428-541-251474212806263/AnsiballZ_command.py'
Dec 03 00:58:18 compute-0 sudo[44726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:18 compute-0 python3.9[44728]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:58:18 compute-0 sudo[44726]: pam_unix(sudo:session): session closed for user root
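KSM teardown: ksm.service and ksmtuned.service were stopped and disabled at 00:58:10-00:58:13, and writing 2 to /sys/kernel/mm/ksm/run additionally unmerges pages KSM had already shared (per the kernel's KSM documentation). Note the logged task uses _uses_shell=False, under which the >-redirection is passed to echo as a literal argument rather than performed; a shell task, as sketched here, is presumably what was intended:

    - name: Stop and disable KSM services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - ksm.service
        - ksmtuned.service
    - name: Unmerge any pages KSM already shared
      ansible.builtin.shell: echo 2 >/sys/kernel/mm/ksm/run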
Dec 03 00:58:19 compute-0 sudo[44879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgfljehfzhlveodamajdtxidxvrpfpat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723498.5833173-549-170319749341838/AnsiballZ_systemd.py'
Dec 03 00:58:19 compute-0 sudo[44879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:19 compute-0 python3.9[44881]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 00:58:19 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 03 00:58:19 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 03 00:58:19 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 03 00:58:19 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 03 00:58:19 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 03 00:58:19 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 03 00:58:19 compute-0 sudo[44879]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:20 compute-0 sshd-session[31248]: Connection closed by 192.168.122.30 port 53224
Dec 03 00:58:20 compute-0 sshd-session[31245]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:58:20 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 03 00:58:20 compute-0 systemd[1]: session-8.scope: Consumed 2min 23.325s CPU time.
Dec 03 00:58:20 compute-0 systemd-logind[800]: Session 8 logged out. Waiting for processes to exit.
Dec 03 00:58:20 compute-0 systemd-logind[800]: Removed session 8.
Dec 03 00:58:27 compute-0 sshd-session[44911]: Accepted publickey for zuul from 192.168.122.30 port 46526 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 00:58:27 compute-0 systemd-logind[800]: New session 9 of user zuul.
Dec 03 00:58:27 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 03 00:58:27 compute-0 sshd-session[44911]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:58:28 compute-0 python3.9[45064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:58:29 compute-0 sudo[45218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scftomgzhstcctyyqvwbalxhyqtkmntf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723508.9644995-36-5010724436824/AnsiballZ_getent.py'
Dec 03 00:58:29 compute-0 sudo[45218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:29 compute-0 python3.9[45220]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 03 00:58:29 compute-0 sudo[45218]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:31 compute-0 sudo[45371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckuolrpgtvokdzpxlhccenjwdjycwjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723510.540073-44-272557474473178/AnsiballZ_group.py'
Dec 03 00:58:31 compute-0 sudo[45371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:31 compute-0 python3.9[45373]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 03 00:58:31 compute-0 groupadd[45374]: group added to /etc/group: name=openvswitch, GID=42476
Dec 03 00:58:31 compute-0 groupadd[45374]: group added to /etc/gshadow: name=openvswitch
Dec 03 00:58:31 compute-0 groupadd[45374]: new group: name=openvswitch, GID=42476
Dec 03 00:58:31 compute-0 sudo[45371]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:32 compute-0 sudo[45529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyyzyaartaqbbfxhkbpenwhrvrgnczmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723511.6409423-52-36256708663634/AnsiballZ_user.py'
Dec 03 00:58:32 compute-0 sudo[45529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:32 compute-0 python3.9[45531]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 03 00:58:32 compute-0 useradd[45533]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 03 00:58:32 compute-0 useradd[45533]: add 'openvswitch' to group 'hugetlbfs'
Dec 03 00:58:32 compute-0 useradd[45533]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 03 00:58:32 compute-0 sudo[45529]: pam_unix(sudo:session): session closed for user root
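
The group and user invocations in this session pin the openvswitch account to UID/GID 42476 and add it to hugetlbfs, a grouping typically used to grant access to the hugepage mount for OVS-DPDK setups. A sketch of the equivalent tasks, reconstructed from the logged parameters (task names assumed):

    - name: Create the openvswitch group  # task name assumed
      become: true
      ansible.builtin.group:
        name: openvswitch
        gid: 42476
        state: present

    - name: Create the openvswitch service account  # task name assumed
      become: true
      ansible.builtin.user:
        name: openvswitch
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        comment: openvswitch user
        shell: /sbin/nologin
        state: present
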
Dec 03 00:58:33 compute-0 sudo[45689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoqvdfoyvdsebqkvfscevceuwlpzhjrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723512.9547753-62-110832610776155/AnsiballZ_setup.py'
Dec 03 00:58:33 compute-0 sudo[45689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:33 compute-0 python3.9[45691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 00:58:33 compute-0 sudo[45689]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:34 compute-0 sudo[45773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amymfanwehkdsazxmfkeeadmldrieoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723512.9547753-62-110832610776155/AnsiballZ_dnf.py'
Dec 03 00:58:34 compute-0 sudo[45773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:34 compute-0 python3.9[45775]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 03 00:58:37 compute-0 sudo[45773]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:38 compute-0 sudo[45939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knnpyudijgtcztapmaifkooqsuphaudd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723517.7833853-76-277252948856208/AnsiballZ_dnf.py'
Dec 03 00:58:38 compute-0 sudo[45939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:38 compute-0 python3.9[45941]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
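
Two dnf invocations are logged for openvswitch: a download_only pass first, then the install proper, which separates the network fetch from the RPM transaction. Reconstructed tasks (names assumed):

    - name: Pre-fetch the openvswitch package  # task name assumed
      become: true
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true

    - name: Install openvswitch  # task name assumed
      become: true
      ansible.builtin.dnf:
        name: openvswitch
        state: present
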
Dec 03 00:58:38 compute-0 sshd-session[45784]: Received disconnect from 117.5.148.56 port 60882:11:  [preauth]
Dec 03 00:58:38 compute-0 sshd-session[45784]: Disconnected from authenticating user root 117.5.148.56 port 60882 [preauth]
Dec 03 00:58:40 compute-0 irqbalance[792]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 03 00:58:40 compute-0 irqbalance[792]: IRQ 27 affinity is now unmanaged
Dec 03 00:58:49 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Dec 03 00:58:49 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:58:49 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 03 00:58:49 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:58:49 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:58:49 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:58:49 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:58:49 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:58:49 compute-0 groupadd[45964]: group added to /etc/group: name=unbound, GID=993
Dec 03 00:58:49 compute-0 groupadd[45964]: group added to /etc/gshadow: name=unbound
Dec 03 00:58:49 compute-0 groupadd[45964]: new group: name=unbound, GID=993
Dec 03 00:58:49 compute-0 useradd[45971]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 03 00:58:49 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 03 00:58:49 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 03 00:58:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 00:58:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 00:58:51 compute-0 systemd[1]: Reloading.
Dec 03 00:58:51 compute-0 systemd-rc-local-generator[46468]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:58:51 compute-0 systemd-sysv-generator[46472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 00:58:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 00:58:52 compute-0 sudo[45939]: pam_unix(sudo:session): session closed for user root
Dec 03 00:58:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 00:58:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 00:58:52 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.040s CPU time.
Dec 03 00:58:52 compute-0 systemd[1]: run-re0044d9f0c424086b680e2045ee65649.service: Deactivated successfully.
Dec 03 00:58:52 compute-0 sudo[47036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxdlqxrxgwyfyiojipzjfkovijlfsrvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723532.2758715-84-273481348412021/AnsiballZ_systemd.py'
Dec 03 00:58:52 compute-0 sudo[47036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:53 compute-0 python3.9[47038]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 00:58:53 compute-0 systemd[1]: Reloading.
Dec 03 00:58:53 compute-0 systemd-rc-local-generator[47068]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:58:53 compute-0 systemd-sysv-generator[47073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 00:58:53 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 03 00:58:53 compute-0 chown[47079]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 03 00:58:53 compute-0 ovs-ctl[47084]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 03 00:58:53 compute-0 ovs-ctl[47084]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 03 00:58:53 compute-0 ovs-ctl[47084]: Starting ovsdb-server [  OK  ]
Dec 03 00:58:53 compute-0 ovs-vsctl[47134]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 03 00:58:54 compute-0 ovs-vsctl[47154]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"eda9fd7d-f2b1-4121-b9ac-fc31f8426272\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 03 00:58:54 compute-0 ovs-ctl[47084]: Configuring Open vSwitch system IDs [  OK  ]
Dec 03 00:58:54 compute-0 ovs-ctl[47084]: Enabling remote OVSDB managers [  OK  ]
Dec 03 00:58:54 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 03 00:58:54 compute-0 ovs-vsctl[47160]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 03 00:58:54 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 03 00:58:54 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 03 00:58:54 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 03 00:58:54 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 03 00:58:54 compute-0 ovs-ctl[47204]: Inserting openvswitch module [  OK  ]
Dec 03 00:58:54 compute-0 ovs-ctl[47173]: Starting ovs-vswitchd [  OK  ]
Dec 03 00:58:54 compute-0 ovs-vsctl[47224]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 03 00:58:54 compute-0 ovs-ctl[47173]: Enabling remote OVSDB managers [  OK  ]
Dec 03 00:58:54 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 03 00:58:54 compute-0 systemd[1]: Starting Open vSwitch...
Dec 03 00:58:54 compute-0 systemd[1]: Finished Open vSwitch.
Dec 03 00:58:54 compute-0 sudo[47036]: pam_unix(sudo:session): session closed for user root
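
The first start of openvswitch.service above shows ovs-ctl bootstrapping an empty /etc/openvswitch/conf.db, loading the openvswitch kernel datapath module, and stamping the system ID and version into the database; the initial chown error on /run/openvswitch is a harmless first-boot race. On the Ansible side this is a single systemd task, reconstructed from the logged parameters (task name assumed):

    - name: Enable and start Open vSwitch  # task name assumed
      become: true
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started
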
Dec 03 00:58:55 compute-0 python3.9[47377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:58:56 compute-0 sudo[47527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapverwqpceypllogannqvlyiyvrabhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723535.8775573-102-174866635343348/AnsiballZ_sefcontext.py'
Dec 03 00:58:56 compute-0 sudo[47527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:58:56 compute-0 python3.9[47529]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 03 00:58:57 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Dec 03 00:58:57 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 00:58:57 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 03 00:58:57 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 00:58:57 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 03 00:58:57 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 00:58:57 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 00:58:57 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 00:58:58 compute-0 sudo[47527]: pam_unix(sudo:session): session closed for user root
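
The sefcontext call registers a persistent SELinux file-context rule, which is why the kernel logs another policy reload (the SID table conversion above). Reconstructed task (name assumed):

    - name: Label /var/lib/edpm-config for container access  # task name assumed
      become: true
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
        reload: true
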
Dec 03 00:58:59 compute-0 python3.9[47684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:59:00 compute-0 sudo[47840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmrbxkprmpytukjqamjdshaeclescddb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723539.8128605-120-108459834135579/AnsiballZ_dnf.py'
Dec 03 00:59:00 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 03 00:59:00 compute-0 sudo[47840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:00 compute-0 python3.9[47842]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 00:59:01 compute-0 sudo[47840]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:02 compute-0 sudo[47993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdkzvyxhdpdnhqmbrjnfkomwnshwisf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723541.8668492-128-89814366852352/AnsiballZ_command.py'
Dec 03 00:59:02 compute-0 sudo[47993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:02 compute-0 python3.9[47995]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:59:03 compute-0 sudo[47993]: pam_unix(sudo:session): session closed for user root
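
The dnf install and the rpm -V check operate on the same 18-package set. A sketch of the pair follows; task names are assumed, and the real play may register the rpm -V result rather than fail on a nonzero exit:

    - name: Install the EDPM base package set  # task name assumed
      become: true
      ansible.builtin.dnf:
        name:
          - driverctl
          - lvm2
          - crudini
          - jq
          - nftables
          - NetworkManager
          - openstack-selinux
          - python3-libselinux
          - python3-pyyaml
          - rsync
          - tmpwatch
          - sysstat
          - iproute-tc
          - ksmtuned
          - systemd-container
          - crypto-policies-scripts
          - grubby
          - sos
        state: present

    - name: Verify the same packages against the rpm database  # task name assumed
      become: true
      ansible.builtin.command: >-
        rpm -V driverctl lvm2 crudini jq nftables NetworkManager
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts
        grubby sos
      changed_when: false  # verification only; never reports a change
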
Dec 03 00:59:04 compute-0 sudo[48280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkmwlfuiokphwfftipmyjbxhgvxsxeml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723543.6133807-136-237191649399863/AnsiballZ_file.py'
Dec 03 00:59:04 compute-0 sudo[48280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:04 compute-0 python3.9[48282]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 03 00:59:04 compute-0 sudo[48280]: pam_unix(sudo:session): session closed for user root
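
With the file-context rule already registered, the directory is created afterwards so it is born with the container_file_t label rather than relabelled later. Reconstructed task (name assumed):

    - name: Create /var/lib/edpm-config with the container file context  # task name assumed
      become: true
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        mode: '0750'
        setype: container_file_t
        selevel: s0
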
Dec 03 00:59:05 compute-0 python3.9[48432]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 00:59:06 compute-0 sudo[48584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javsabjqdobxnomykewdapvhzoxwlpfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723545.6256487-152-161147056717066/AnsiballZ_dnf.py'
Dec 03 00:59:06 compute-0 sudo[48584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:06 compute-0 python3.9[48586]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 00:59:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 00:59:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 00:59:07 compute-0 systemd[1]: Reloading.
Dec 03 00:59:08 compute-0 systemd-rc-local-generator[48626]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:59:08 compute-0 systemd-sysv-generator[48629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 00:59:08 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 00:59:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 00:59:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 00:59:08 compute-0 systemd[1]: run-rfab00bddda594710a51eecc261c119d3.service: Deactivated successfully.
Dec 03 00:59:08 compute-0 sudo[48584]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:09 compute-0 sudo[48901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyhkvbqyjqkzqjrvncjusctnlifzrvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723548.7672951-160-63186929805970/AnsiballZ_systemd.py'
Dec 03 00:59:09 compute-0 sudo[48901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:09 compute-0 python3.9[48903]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 00:59:10 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 03 00:59:10 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 03 00:59:10 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 03 00:59:10 compute-0 NetworkManager[7177]: <info>  [1764723550.5832] caught SIGTERM, shutting down normally.
Dec 03 00:59:10 compute-0 systemd[1]: Stopping Network Manager...
Dec 03 00:59:10 compute-0 NetworkManager[7177]: <info>  [1764723550.5853] dhcp4 (eth0): canceled DHCP transaction
Dec 03 00:59:10 compute-0 NetworkManager[7177]: <info>  [1764723550.5854] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:59:10 compute-0 NetworkManager[7177]: <info>  [1764723550.5854] dhcp4 (eth0): state changed no lease
Dec 03 00:59:10 compute-0 NetworkManager[7177]: <info>  [1764723550.5858] manager: NetworkManager state is now CONNECTED_SITE
Dec 03 00:59:10 compute-0 NetworkManager[7177]: <info>  [1764723550.5950] exiting (success)
Dec 03 00:59:10 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 03 00:59:10 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 03 00:59:10 compute-0 systemd[1]: Stopped Network Manager.
Dec 03 00:59:10 compute-0 systemd[1]: NetworkManager.service: Consumed 16.900s CPU time, 4.1M memory peak, read 0B from disk, written 18.0K to disk.
Dec 03 00:59:10 compute-0 systemd[1]: Starting Network Manager...
Dec 03 00:59:10 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.6872] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:ea2ffd2b-9398-4d40-9798-3e760752a119)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.6876] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.6952] manager[0x557797ba6090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 03 00:59:10 compute-0 systemd[1]: Starting Hostname Service...
Dec 03 00:59:10 compute-0 systemd[1]: Started Hostname Service.
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8097] hostname: hostname: using hostnamed
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8098] hostname: static hostname changed from (none) to "compute-0"
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8107] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8116] manager[0x557797ba6090]: rfkill: Wi-Fi hardware radio set enabled
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8116] manager[0x557797ba6090]: rfkill: WWAN hardware radio set enabled
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8155] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8171] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8172] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8173] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8174] manager: Networking is enabled by state file
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8178] settings: Loaded settings plugin: keyfile (internal)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8185] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8232] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8248] dhcp: init: Using DHCP client 'internal'
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8252] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8261] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8275] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8290] device (lo): Activation: starting connection 'lo' (3c357ba2-4585-405b-8323-b1feb378cf6e)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8301] device (eth0): carrier: link connected
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8310] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8321] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8322] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8336] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8351] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8364] device (eth1): carrier: link connected
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8374] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8386] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40) (indicated)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8388] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8400] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8412] device (eth1): Activation: starting connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec 03 00:59:10 compute-0 systemd[1]: Started Network Manager.
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8424] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8441] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8454] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8458] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8461] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8467] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8471] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8477] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8483] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8494] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8499] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8516] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8540] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8563] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec 03 00:59:10 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8568] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8578] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8668] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8682] device (lo): Activation: successful, device activated.
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8693] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8704] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8709] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8716] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8722] device (eth1): Activation: successful, device activated.
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8738] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8741] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8748] manager: NetworkManager state is now CONNECTED_SITE
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8757] device (eth0): Activation: successful, device activated.
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8768] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 03 00:59:10 compute-0 NetworkManager[48912]: <info>  [1764723550.8793] manager: startup complete
Dec 03 00:59:10 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 03 00:59:10 compute-0 sudo[48901]: pam_unix(sudo:session): session closed for user root
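
NetworkManager is restarted here so that the just-installed NetworkManager-ovs plugin is loaded; the NMOvsFactory line in the restart output above confirms it, and the existing eth0/eth1 connections are assumed back without interruption. Reconstructed task (name assumed):

    - name: Restart NetworkManager to load the ovs plugin  # task name assumed
      become: true
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted
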
Dec 03 00:59:11 compute-0 sudo[49127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyyppjasslftnmgmnqznkcqthxhlkpzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723551.1547985-168-159757025550844/AnsiballZ_dnf.py'
Dec 03 00:59:11 compute-0 sudo[49127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:11 compute-0 python3.9[49129]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 00:59:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 00:59:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 00:59:16 compute-0 systemd[1]: Reloading.
Dec 03 00:59:16 compute-0 systemd-rc-local-generator[49181]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 00:59:16 compute-0 systemd-sysv-generator[49185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 00:59:16 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 00:59:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 00:59:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 00:59:17 compute-0 systemd[1]: run-reacd68e3634542beb1a0455801bdc15f.service: Deactivated successfully.
Dec 03 00:59:17 compute-0 sudo[49127]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:18 compute-0 sudo[49586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jleitmclhftbeijfjnesnzxtqsndicnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723558.0806038-180-148985954591218/AnsiballZ_stat.py'
Dec 03 00:59:18 compute-0 sudo[49586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:18 compute-0 python3.9[49588]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 00:59:18 compute-0 sudo[49586]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:19 compute-0 sudo[49738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbknckebrbvkfyhuhfznpjhonjrtsnyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723558.979955-189-91485663878605/AnsiballZ_ini_file.py'
Dec 03 00:59:19 compute-0 sudo[49738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:19 compute-0 python3.9[49740]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:19 compute-0 sudo[49738]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:20 compute-0 sudo[49892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqmysdmsehbsuzoisffdbqhhbsgfuaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723560.1482778-199-183598743371266/AnsiballZ_ini_file.py'
Dec 03 00:59:20 compute-0 sudo[49892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:20 compute-0 python3.9[49894]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:20 compute-0 sudo[49892]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:21 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 03 00:59:21 compute-0 sudo[50044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gonhnjkqvmtwwvegowbvocqlqaxicyqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723560.9772139-199-3331773728007/AnsiballZ_ini_file.py'
Dec 03 00:59:21 compute-0 sudo[50044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:21 compute-0 python3.9[50046]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:21 compute-0 sudo[50044]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:22 compute-0 sudo[50196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogossdslzrpwhsfgfjrroibrqfkiudft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723561.835896-214-181652172123900/AnsiballZ_ini_file.py'
Dec 03 00:59:22 compute-0 sudo[50196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:22 compute-0 python3.9[50198]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:22 compute-0 sudo[50196]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:23 compute-0 sudo[50348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynasbkrsgoifghxrtfvijfbmdiwdxrkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723562.6993728-214-192035845312497/AnsiballZ_ini_file.py'
Dec 03 00:59:23 compute-0 sudo[50348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:23 compute-0 python3.9[50350]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:23 compute-0 sudo[50348]: pam_unix(sudo:session): session closed for user root
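
The five ini_file invocations above make one addition (no-auto-default=* in NetworkManager.conf) and four removals (dns=none and rc-manager=unmanaged from NetworkManager.conf and the cloud-init drop-in), handing DNS and resolv.conf management back to NetworkManager. A sketch follows; the log shows four separate module calls, so the loop structure and task names are assumptions:

    - name: Keep NetworkManager from generating default profiles  # task name assumed
      become: true
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: '*'
        mode: '0644'
        no_extra_spaces: true
        backup: true

    - name: Drop the dns and rc-manager overrides  # task name assumed
      become: true
      community.general.ini_file:
        path: "{{ item.path }}"
        section: main
        option: "{{ item.option }}"
        value: "{{ item.value }}"
        state: absent
        backup: true
      loop:  # loop is an assumption; the journal records four separate invocations
        - { path: /etc/NetworkManager/NetworkManager.conf, option: dns, value: none }
        - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: dns, value: none }
        - { path: /etc/NetworkManager/NetworkManager.conf, option: rc-manager, value: unmanaged }
        - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: rc-manager, value: unmanaged }
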
Dec 03 00:59:23 compute-0 sudo[50500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrnikieprxctsbcvweuyxlebzvrvlqbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723563.5629869-229-61643240079190/AnsiballZ_stat.py'
Dec 03 00:59:23 compute-0 sudo[50500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:24 compute-0 python3.9[50502]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 00:59:24 compute-0 sudo[50500]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:24 compute-0 sudo[50623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgiasqtwguxohaekwwecxjsxfoclhmyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723563.5629869-229-61643240079190/AnsiballZ_copy.py'
Dec 03 00:59:24 compute-0 sudo[50623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:25 compute-0 python3.9[50625]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723563.5629869-229-61643240079190/.source _original_basename=.dq1soo3g follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:25 compute-0 sudo[50623]: pam_unix(sudo:session): session closed for user root
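
The copy task installs a dhclient-enter-hooks script as mode 0755; the journal only records the staging path and payload checksum, so the source file name below is an assumption:

    - name: Install the dhclient-enter-hooks script  # task name assumed
      become: true
      ansible.builtin.copy:
        src: dhclient-enter-hooks  # assumed; the log shows only the temp path and checksum
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: '0755'
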
Dec 03 00:59:25 compute-0 sudo[50775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkxalhwrsqnoogyjoscuodoqmmizaddj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723565.2805724-244-223204417221344/AnsiballZ_file.py'
Dec 03 00:59:25 compute-0 sudo[50775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:25 compute-0 python3.9[50777]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:25 compute-0 sudo[50775]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:26 compute-0 sudo[50927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwskcldibkjoaxnfvhkpchtvyvvoztno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723566.076659-252-325125604343/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 03 00:59:26 compute-0 sudo[50927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:26 compute-0 python3.9[50929]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 03 00:59:26 compute-0 sudo[50927]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:27 compute-0 sudo[51079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idqeslfmykxdlrsmobgqznwrqjhjzndq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723567.1829834-261-108089573692849/AnsiballZ_file.py'
Dec 03 00:59:27 compute-0 sudo[51079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:27 compute-0 python3.9[51081]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:27 compute-0 sudo[51079]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:28 compute-0 sudo[51231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdclhofgmowbomsxzywpupvsncuhspbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723568.176033-271-16092567067705/AnsiballZ_stat.py'
Dec 03 00:59:28 compute-0 sudo[51231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:28 compute-0 sudo[51231]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:29 compute-0 sudo[51354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewhamqumyvjvjkxapnciolrvfxsexber ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723568.176033-271-16092567067705/AnsiballZ_copy.py'
Dec 03 00:59:29 compute-0 sudo[51354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:29 compute-0 sudo[51354]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:30 compute-0 sudo[51506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juiknrmhvyqbucmdpqkkfdytzknhuomp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723569.8342664-286-80034307393712/AnsiballZ_slurp.py'
Dec 03 00:59:30 compute-0 sudo[51506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:30 compute-0 python3.9[51508]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 03 00:59:30 compute-0 sudo[51506]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:31 compute-0 sudo[51681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjcicgryaniyfyvlsvtavedteqdydlc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723570.8562083-295-225688965816612/async_wrapper.py j806931486018 300 /home/zuul/.ansible/tmp/ansible-tmp-1764723570.8562083-295-225688965816612/AnsiballZ_edpm_os_net_config.py _'
Dec 03 00:59:31 compute-0 sudo[51681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:31 compute-0 ansible-async_wrapper.py[51683]: Invoked with j806931486018 300 /home/zuul/.ansible/tmp/ansible-tmp-1764723570.8562083-295-225688965816612/AnsiballZ_edpm_os_net_config.py _
Dec 03 00:59:31 compute-0 ansible-async_wrapper.py[51686]: Starting module and watcher
Dec 03 00:59:31 compute-0 ansible-async_wrapper.py[51686]: Start watching 51687 (300)
Dec 03 00:59:31 compute-0 ansible-async_wrapper.py[51687]: Start module (51687)
Dec 03 00:59:31 compute-0 ansible-async_wrapper.py[51683]: Return async_wrapper task started.
Dec 03 00:59:32 compute-0 sudo[51681]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:32 compute-0 python3.9[51688]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
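
os-net-config is applied through Ansible's async wrapper with a 300-second timeout (the "300" in the async_wrapper lines above), so a slow network reconfiguration cannot hang the play indefinitely. A sketch of the task, with the module parameters taken from the logged invocation; the collection namespace and the poll interval are not visible in the journal and are assumptions:

    - name: Apply the network layout with os-net-config  # task name assumed
      become: true
      osp.edpm.edpm_os_net_config:  # namespace assumed; the log names only the module
        config_file: /etc/os-net-config/config.yaml
        cleanup: true
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: true
      async: 300  # matches the 300-second watcher in the async_wrapper lines
      poll: 5     # assumed; the poll interval is not recorded in the journal
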
Dec 03 00:59:32 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 03 00:59:32 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 03 00:59:32 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 03 00:59:32 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 03 00:59:32 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6026] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6051] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6819] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6821] audit: op="connection-add" uuid="e1d2481b-6164-4a08-897f-9ee2e4788170" name="br-ex-br" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6845] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6847] audit: op="connection-add" uuid="9d1b2739-0b12-4874-b8c5-49405c85884d" name="br-ex-port" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6866] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6868] audit: op="connection-add" uuid="cb86081b-6ab8-4587-87ac-9aa91b7cefbf" name="eth1-port" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6888] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6890] audit: op="connection-add" uuid="e1932086-3560-4a59-8e0a-e1714d4606c9" name="vlan20-port" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6909] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6911] audit: op="connection-add" uuid="63452749-9f5d-479b-aa72-a599c547a27c" name="vlan21-port" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6929] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6931] audit: op="connection-add" uuid="18edac4f-6825-49d7-b415-29a1c561e6a3" name="vlan22-port" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6950] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6951] audit: op="connection-add" uuid="a304a50a-aad1-454d-afc8-26431c8e94ec" name="vlan23-port" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.6988] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7016] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7018] audit: op="connection-add" uuid="02fb9cd9-2d1d-4f7b-82d3-7567314e8c5f" name="br-ex-if" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7082] audit: op="connection-update" uuid="771916e5-3ce0-5ffe-bc07-7ed0f995ac40" name="ci-private-network" args="ipv6.routes,ipv6.addresses,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.dns,ovs-interface.type,connection.master,connection.port-type,connection.timestamp,connection.slave-type,connection.controller,ipv4.routes,ipv4.never-default,ipv4.addresses,ipv4.method,ipv4.dns,ipv4.routing-rules,ovs-external-ids.data" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7111] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7113] audit: op="connection-add" uuid="70048df0-7e06-42d8-83d3-244e265607b4" name="vlan20-if" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7139] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7141] audit: op="connection-add" uuid="aa05774f-fca6-4e42-a505-e286a47b7a5d" name="vlan21-if" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7168] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7170] audit: op="connection-add" uuid="42661d6a-7c05-4c96-81e2-72659e39c865" name="vlan22-if" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7195] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7198] audit: op="connection-add" uuid="7f6d0054-9067-437e-9d5d-98deb5c19148" name="vlan23-if" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7216] audit: op="connection-delete" uuid="1b189b81-0918-3f63-b174-3141827cccab" name="Wired connection 1" pid=51689 uid=0 result="success"
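
Because the module ran with use_nmstate=True, os-net-config drives NetworkManager instead of writing ifcfg files, which is why the work appears here as a checkpoint (with rollback timeout) followed by connection-add audit entries: a br-ex bridge and port, an eth1 port, and vlan20 through vlan23 ports with matching internal interfaces. A /etc/os-net-config/config.yaml consistent with those names might look as follows; apart from the device names and VLAN IDs, every value is an assumption:

    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false        # assumed; addressing is not recoverable from the journal
        members:
          - type: interface
            name: eth1
            primary: true      # assumed
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23

The deletion of "Wired connection 1" just above and the activation state changes that follow are the expected cleanup and bring-up for this profile set.
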
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7246] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7262] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7268] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e1d2481b-6164-4a08-897f-9ee2e4788170)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7269] audit: op="connection-activate" uuid="e1d2481b-6164-4a08-897f-9ee2e4788170" name="br-ex-br" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7271] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7282] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7287] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9d1b2739-0b12-4874-b8c5-49405c85884d)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7290] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7299] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7305] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (cb86081b-6ab8-4587-87ac-9aa91b7cefbf)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7307] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7318] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7324] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e1932086-3560-4a59-8e0a-e1714d4606c9)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7327] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7336] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7342] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (63452749-9f5d-479b-aa72-a599c547a27c)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7345] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7354] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7360] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (18edac4f-6825-49d7-b415-29a1c561e6a3)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7363] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7373] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7379] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (a304a50a-aad1-454d-afc8-26431c8e94ec)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7380] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7383] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7386] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7396] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7403] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7409] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (02fb9cd9-2d1d-4f7b-82d3-7567314e8c5f)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7410] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7413] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7414] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7416] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7417] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7427] device (eth1): disconnecting for new activation request.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7428] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7431] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7432] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7434] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7437] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7443] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7449] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (70048df0-7e06-42d8-83d3-244e265607b4)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7450] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7453] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7456] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7458] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7461] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7467] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7473] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (aa05774f-fca6-4e42-a505-e286a47b7a5d)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7474] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7478] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7480] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7482] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7485] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7491] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7496] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (42661d6a-7c05-4c96-81e2-72659e39c865)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7497] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7501] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7503] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7505] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7508] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7514] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7520] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7f6d0054-9067-437e-9d5d-98deb5c19148)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7521] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7525] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7527] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7528] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7530] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7549] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7551] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7555] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7557] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7567] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7572] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7575] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7579] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7582] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7602] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7609] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7613] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7616] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7622] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 kernel: Timeout policy base is empty
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7627] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7632] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7635] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7641] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 systemd-udevd[51695]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7646] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7654] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7657] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7662] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7667] dhcp4 (eth0): canceled DHCP transaction
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7667] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7667] dhcp4 (eth0): state changed no lease
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7669] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7682] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7686] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51689 uid=0 result="fail" reason="Device is not activated"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7690] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7696] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 03 00:59:34 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7734] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7740] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7744] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7752] device (eth1): disconnecting for new activation request.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7753] audit: op="connection-activate" uuid="771916e5-3ce0-5ffe-bc07-7ed0f995ac40" name="ci-private-network" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7830] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec 03 00:59:34 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.7909] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 03 00:59:34 compute-0 kernel: br-ex: entered promiscuous mode
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8064] device (eth1): Activation: starting connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8070] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8081] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8085] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8091] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8096] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8106] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8107] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8109] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8110] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8112] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8113] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8124] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8132] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8137] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8141] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8145] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8149] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8154] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8158] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8162] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 kernel: vlan22: entered promiscuous mode
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8167] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8171] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 systemd-udevd[51694]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8175] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8181] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8187] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8193] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8212] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8224] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8264] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8269] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8271] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 kernel: vlan20: entered promiscuous mode
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8276] device (eth1): Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8281] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8288] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 kernel: vlan21: entered promiscuous mode
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8350] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8381] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 systemd-udevd[51797]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8456] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8459] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8464] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 kernel: vlan23: entered promiscuous mode
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8471] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8490] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8596] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8601] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8602] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8608] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8624] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8684] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8684] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8688] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8703] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8749] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8772] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8780] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 03 00:59:34 compute-0 NetworkManager[48912]: <info>  [1764723574.8795] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
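
The burst above is NetworkManager's three-layer Open vSwitch model at work: each bridge member is modelled as a separate ovs-bridge, ovs-port, and ovs-interface device, each activated from its own connection profile (br-ex / br-ex-port / br-ex-if, plus one port/interface pair per VLAN), and each walking the prepare -> config -> ip-config -> ip-check -> secondaries -> activated state machine. A minimal sketch of how such a hierarchy is built with the documented nmcli connection types; the profile and device names come from the log, while the IP settings and exact option set are assumptions:

    import subprocess

    def nmcli(*args):
        # thin wrapper; raises if nmcli exits non-zero
        subprocess.run(["nmcli", *args], check=True)

    # bridge / port / internal-interface trio for br-ex (names from the log)
    nmcli("conn", "add", "type", "ovs-bridge", "conn.interface", "br-ex", "con-name", "br-ex")
    nmcli("conn", "add", "type", "ovs-port", "conn.interface", "br-ex", "master", "br-ex", "con-name", "br-ex-port")
    nmcli("conn", "add", "type", "ovs-interface", "slave-type", "ovs-port",
          "conn.interface", "br-ex", "master", "br-ex-port", "con-name", "br-ex-if",
          "ipv4.method", "disabled", "ipv6.method", "disabled")  # IP method is an assumption

    # the physical uplink rides on its own ovs-port as an ethernet slave
    nmcli("conn", "add", "type", "ovs-port", "conn.interface", "eth1", "master", "br-ex", "con-name", "eth1-port")
    nmcli("conn", "add", "type", "ethernet", "conn.interface", "eth1", "master", "eth1-port", "con-name", "ci-private-network")

Attaching eth1 as a slave of its own ovs-port is why the log shows eth1 deactivating and re-activating "for new activation request" while profile 'ci-private-network' is attached as a port.
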
Dec 03 00:59:35 compute-0 sudo[52045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykyejyrhjnodkfgavvjvqmivblpgpgqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723575.1455235-295-280531261193682/AnsiballZ_async_status.py'
Dec 03 00:59:35 compute-0 sudo[52045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:35 compute-0 python3.9[52047]: ansible-ansible.legacy.async_status Invoked with jid=j806931486018.51683 mode=status _async_dir=/root/.ansible_async
Dec 03 00:59:35 compute-0 sudo[52045]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:36 compute-0 NetworkManager[48912]: <info>  [1764723576.0168] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec 03 00:59:36 compute-0 NetworkManager[48912]: <info>  [1764723576.3547] checkpoint[0x557797b7c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 03 00:59:36 compute-0 NetworkManager[48912]: <info>  [1764723576.3550] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec 03 00:59:36 compute-0 NetworkManager[48912]: <info>  [1764723576.7544] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
Dec 03 00:59:36 compute-0 NetworkManager[48912]: <info>  [1764723576.7561] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
Dec 03 00:59:36 compute-0 ansible-async_wrapper.py[51686]: 51687 still running (300)
Dec 03 00:59:37 compute-0 NetworkManager[48912]: <info>  [1764723577.0261] audit: op="networking-control" arg="global-dns-configuration" pid=51689 uid=0 result="success"
Dec 03 00:59:37 compute-0 NetworkManager[48912]: <info>  [1764723577.0297] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 03 00:59:37 compute-0 NetworkManager[48912]: <info>  [1764723577.0330] audit: op="networking-control" arg="global-dns-configuration" pid=51689 uid=0 result="success"
Dec 03 00:59:37 compute-0 NetworkManager[48912]: <info>  [1764723577.0363] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
Dec 03 00:59:37 compute-0 NetworkManager[48912]: <info>  [1764723577.2868] checkpoint[0x557797b7ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 03 00:59:37 compute-0 NetworkManager[48912]: <info>  [1764723577.2874] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
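
The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy audit entries map one-to-one onto NetworkManager's D-Bus checkpoint API: the caller (pid 51689, the network-configuration run) snapshots device state before applying changes, keeps extending the rollback deadline while it verifies connectivity, and destroys the checkpoint to commit. If it crashed or lost connectivity, NetworkManager would roll every device back automatically. A minimal sketch of that sequence, assuming dbus-python is available; the timeout values are illustrative:

    import dbus

    bus = dbus.SystemBus()
    nm = dbus.Interface(
        bus.get_object("org.freedesktop.NetworkManager", "/org/freedesktop/NetworkManager"),
        "org.freedesktop.NetworkManager",
    )

    # empty device list = checkpoint every device; auto-rollback after 60 s
    cp = nm.CheckpointCreate([], dbus.UInt32(60), dbus.UInt32(0))
    nm.CheckpointAdjustRollbackTimeout(cp, dbus.UInt32(120))  # push the deadline out while verifying
    # ... apply and verify the new network configuration here ...
    nm.CheckpointDestroy(cp)  # commit: keep the new state, drop the snapshot
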
Dec 03 00:59:37 compute-0 ansible-async_wrapper.py[51687]: Module complete (51687)
Dec 03 00:59:39 compute-0 sudo[52152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szjawkkhszpwvgxlpbaetdpqqnpihfao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723575.1455235-295-280531261193682/AnsiballZ_async_status.py'
Dec 03 00:59:39 compute-0 sudo[52152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:39 compute-0 python3.9[52154]: ansible-ansible.legacy.async_status Invoked with jid=j806931486018.51683 mode=status _async_dir=/root/.ansible_async
Dec 03 00:59:39 compute-0 sudo[52152]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:40 compute-0 sudo[52251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfyctwcinrovggrxnojeyziombdetmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723575.1455235-295-280531261193682/AnsiballZ_async_status.py'
Dec 03 00:59:40 compute-0 sudo[52251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:40 compute-0 python3.9[52253]: ansible-ansible.legacy.async_status Invoked with jid=j806931486018.51683 mode=cleanup _async_dir=/root/.ansible_async
Dec 03 00:59:40 compute-0 sudo[52251]: pam_unix(sudo:session): session closed for user root
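
The repeated AnsiballZ_async_status runs are the Ansible controller polling a long-running task (job id j806931486018.51683) launched with async: the wrapper (ansible-async_wrapper.py, logging "still running (300)" above) maintains a JSON status file under /root/.ansible_async, the controller re-checks mode=status until the job reports finished, then issues a final mode=cleanup to delete the file. A simplified sketch of that poll loop, assuming the status file is the JSON dict async_status reads:

    import json
    import os
    import time

    job_file = "/root/.ansible_async/j806931486018.51683"  # job id from the log

    while True:
        try:
            with open(job_file) as f:
                status = json.load(f)
        except (FileNotFoundError, ValueError):
            status = {}          # file missing or mid-write: treat as still running
        if status.get("finished"):
            break                # module result is now in `status`
        time.sleep(5)            # the controller polls on a similar interval

    os.remove(job_file)          # what mode=cleanup does at 00:59:40
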
Dec 03 00:59:40 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 03 00:59:40 compute-0 sudo[52406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wymuosdralmrusfsmdipaewvgdgieavw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723580.5806954-322-199017558531633/AnsiballZ_stat.py'
Dec 03 00:59:40 compute-0 sudo[52406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:41 compute-0 python3.9[52408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 00:59:41 compute-0 sudo[52406]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:41 compute-0 sudo[52529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yipnvilagppricmqczawgmnwwqhlkjlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723580.5806954-322-199017558531633/AnsiballZ_copy.py'
Dec 03 00:59:41 compute-0 sudo[52529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:41 compute-0 python3.9[52531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723580.5806954-322-199017558531633/.source.returncode _original_basename=.0fqgky8x follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:41 compute-0 sudo[52529]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:41 compute-0 ansible-async_wrapper.py[51686]: Done in kid B.
Dec 03 00:59:42 compute-0 sudo[52681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvngylvgfriemsrgczavdhikyfqfvhie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723582.1366723-338-229455501789424/AnsiballZ_stat.py'
Dec 03 00:59:42 compute-0 sudo[52681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:42 compute-0 python3.9[52684]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 00:59:42 compute-0 sudo[52681]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:43 compute-0 sudo[52805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxnomqlpamqxbyvfkwygdcfcchpbjjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723582.1366723-338-229455501789424/AnsiballZ_copy.py'
Dec 03 00:59:43 compute-0 sudo[52805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:43 compute-0 python3.9[52807]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723582.1366723-338-229455501789424/.source.cfg _original_basename=.9a0tc_0x follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 00:59:43 compute-0 sudo[52805]: pam_unix(sudo:session): session closed for user root
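
The drop-in copied above, 99-edpm-disable-network-config.cfg, is the standard cloud-init switch that stops cloud-init from rendering network configuration, leaving os-net-config authoritative on later boots. Its conventional content is a one-liner; the exact EDPM template is not shown in the log, so this is an assumption:

    from pathlib import Path

    Path("/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg").write_text(
        "network: {config: disabled}\n"  # documented cloud-init syntax; EDPM's template may differ
    )
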
Dec 03 00:59:44 compute-0 sudo[52957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qetzykgfbvpohdcqykxawgekipgyyzxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723583.7014933-353-44798161521975/AnsiballZ_systemd.py'
Dec 03 00:59:44 compute-0 sudo[52957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 00:59:44 compute-0 python3.9[52959]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 00:59:44 compute-0 systemd[1]: Reloading Network Manager...
Dec 03 00:59:44 compute-0 NetworkManager[48912]: <info>  [1764723584.5448] audit: op="reload" arg="0" pid=52963 uid=0 result="success"
Dec 03 00:59:44 compute-0 NetworkManager[48912]: <info>  [1764723584.5456] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 03 00:59:44 compute-0 systemd[1]: Reloaded Network Manager.
Dec 03 00:59:44 compute-0 sudo[52957]: pam_unix(sudo:session): session closed for user root
Dec 03 00:59:45 compute-0 sshd-session[44914]: Connection closed by 192.168.122.30 port 46526
Dec 03 00:59:45 compute-0 sshd-session[44911]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:59:45 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 03 00:59:45 compute-0 systemd[1]: session-9.scope: Consumed 56.490s CPU time.
Dec 03 00:59:45 compute-0 systemd-logind[800]: Session 9 logged out. Waiting for processes to exit.
Dec 03 00:59:45 compute-0 systemd-logind[800]: Removed session 9.
Dec 03 00:59:50 compute-0 sshd-session[52994]: Accepted publickey for zuul from 192.168.122.30 port 39286 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 00:59:50 compute-0 systemd-logind[800]: New session 10 of user zuul.
Dec 03 00:59:50 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 03 00:59:50 compute-0 sshd-session[52994]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 00:59:51 compute-0 python3.9[53147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 00:59:53 compute-0 python3.9[53302]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 00:59:54 compute-0 python3.9[53495]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 00:59:54 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 03 00:59:55 compute-0 sshd-session[52997]: Connection closed by 192.168.122.30 port 39286
Dec 03 00:59:55 compute-0 sshd-session[52994]: pam_unix(sshd:session): session closed for user zuul
Dec 03 00:59:55 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 03 00:59:55 compute-0 systemd[1]: session-10.scope: Consumed 2.860s CPU time.
Dec 03 00:59:55 compute-0 systemd-logind[800]: Session 10 logged out. Waiting for processes to exit.
Dec 03 00:59:55 compute-0 systemd-logind[800]: Removed session 10.
Dec 03 01:00:00 compute-0 sshd-session[53525]: Accepted publickey for zuul from 192.168.122.30 port 35306 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:00:00 compute-0 systemd-logind[800]: New session 11 of user zuul.
Dec 03 01:00:00 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 03 01:00:00 compute-0 sshd-session[53525]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:00:02 compute-0 python3.9[53679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:00:03 compute-0 python3.9[53834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:00:04 compute-0 sudo[53988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ingddbvmczghpwugjfmorqbwdzyurtdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723603.7353516-40-44555319217752/AnsiballZ_setup.py'
Dec 03 01:00:04 compute-0 sudo[53988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:04 compute-0 python3.9[53990]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:00:04 compute-0 sudo[53988]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:05 compute-0 sudo[54073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqgmevxxruybscoknnhygodzsuayotpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723603.7353516-40-44555319217752/AnsiballZ_dnf.py'
Dec 03 01:00:05 compute-0 sudo[54073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:05 compute-0 python3.9[54075]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:00:06 compute-0 sudo[54073]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:07 compute-0 sudo[54226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcdlzkjehdfgkooyxoqjiyxhlrnbrwiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723606.9394739-52-178792509405792/AnsiballZ_setup.py'
Dec 03 01:00:07 compute-0 sudo[54226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:07 compute-0 python3.9[54228]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:00:08 compute-0 sudo[54226]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:09 compute-0 sudo[54421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtumytevzttgoguqtvnsskupmjhgcgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723608.4899962-63-245980533030902/AnsiballZ_file.py'
Dec 03 01:00:09 compute-0 sudo[54421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:09 compute-0 python3.9[54423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:00:09 compute-0 sudo[54421]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:10 compute-0 sudo[54573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbabminplinjqasamhvlzbgyywmkkdmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723609.609933-71-189317458868861/AnsiballZ_command.py'
Dec 03 01:00:10 compute-0 sudo[54573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:10 compute-0 python3.9[54575]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3800328645-merged.mount: Deactivated successfully.
Dec 03 01:00:10 compute-0 podman[54576]: 2025-12-03 01:00:10.427959689 +0000 UTC m=+0.072113342 system refresh
Dec 03 01:00:10 compute-0 sudo[54573]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:11 compute-0 sudo[54737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awhmsgrjxbxqeklznhavrzvknjdtvzds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723610.7084196-79-218479120912643/AnsiballZ_stat.py'
Dec 03 01:00:11 compute-0 sudo[54737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 03 01:00:11 compute-0 python3.9[54739]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:00:11 compute-0 sudo[54737]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:12 compute-0 sudo[54860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvgzaaivdcwpgumdtbwaceibwujmwsnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723610.7084196-79-218479120912643/AnsiballZ_copy.py'
Dec 03 01:00:12 compute-0 sudo[54860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:12 compute-0 python3.9[54862]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723610.7084196-79-218479120912643/.source.json follow=False _original_basename=podman_network_config.j2 checksum=bebc1be99e667a6cdefc816a6f456d6e46ef811e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:00:12 compute-0 sudo[54860]: pam_unix(sudo:session): session closed for user root
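
Here the role pins the default podman network: it runs 'podman network inspect podman' (which also triggers the "system refresh" event above) and then writes a rendered definition to /etc/containers/networks/podman.json so the netavark backend keeps a stable network configuration. A sketch of the inspect-then-pin pattern; the actual template (podman_network_config.j2) is not visible in the log, so the rewrite below is illustrative only:

    import json
    import subprocess

    # `podman network inspect` prints a JSON array of network definitions
    out = subprocess.run(
        ["podman", "network", "inspect", "podman"],
        check=True, capture_output=True, text=True,
    ).stdout
    net = json.loads(out)[0]

    # pin the current definition so netavark reuses it on every run
    with open("/etc/containers/networks/podman.json", "w") as f:
        json.dump(net, f, indent=2)
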
Dec 03 01:00:12 compute-0 sudo[55012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedovvybauprrcroqljobdqswvfqftlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723612.548454-94-204791102889799/AnsiballZ_stat.py'
Dec 03 01:00:12 compute-0 sudo[55012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:13 compute-0 python3.9[55014]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:00:13 compute-0 sudo[55012]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:13 compute-0 sudo[55135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqwubdaapozrwfbhsdfljcendwepscgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723612.548454-94-204791102889799/AnsiballZ_copy.py'
Dec 03 01:00:13 compute-0 sudo[55135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:13 compute-0 python3.9[55137]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723612.548454-94-204791102889799/.source.conf follow=False _original_basename=registries.conf.j2 checksum=88b6a52c62914061ba0322e1e0763af09791b362 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:00:13 compute-0 sudo[55135]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:14 compute-0 sudo[55287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohqbcgtbodtwjheesaeviqjwyhfngxxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723614.1476746-110-181798468580537/AnsiballZ_ini_file.py'
Dec 03 01:00:14 compute-0 sudo[55287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:14 compute-0 python3.9[55289]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:00:14 compute-0 sudo[55287]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:15 compute-0 sudo[55439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrcifbkiwzfrwqacefamepqkvvxrruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723615.1511266-110-268216748123721/AnsiballZ_ini_file.py'
Dec 03 01:00:15 compute-0 sudo[55439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:15 compute-0 python3.9[55441]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:00:15 compute-0 sudo[55439]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:16 compute-0 sudo[55591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykvaoropjuezozlhvvaavcuuvdkizftz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723615.9752474-110-274926813112447/AnsiballZ_ini_file.py'
Dec 03 01:00:16 compute-0 sudo[55591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:16 compute-0 python3.9[55593]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:00:16 compute-0 sudo[55591]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:17 compute-0 sudo[55743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcnxrbezssznydrdzkrndzeetiohfhlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723616.74205-110-79987393056859/AnsiballZ_ini_file.py'
Dec 03 01:00:17 compute-0 sudo[55743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:17 compute-0 python3.9[55745]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:00:17 compute-0 sudo[55743]: pam_unix(sudo:session): session closed for user root
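
The four community.general.ini_file tasks above assemble /etc/containers/containers.conf one option at a time. Their net effect is fully recoverable from the logged module arguments; written out in one piece (a sketch that ignores any pre-existing content of the file):

    from pathlib import Path

    Path("/etc/containers/containers.conf").write_text(
        "[containers]\n"
        "pids_limit = 4096\n"
        "\n"
        "[engine]\n"
        "events_logger = \"journald\"\n"
        "runtime = \"crun\"\n"
        "\n"
        "[network]\n"
        "network_backend = \"netavark\"\n"
    )
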
Dec 03 01:00:18 compute-0 sudo[55895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdhzajmgptzgesnsisubpzjgafgesowr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723617.7039905-141-132465523043438/AnsiballZ_dnf.py'
Dec 03 01:00:18 compute-0 sudo[55895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:18 compute-0 python3.9[55897]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:00:19 compute-0 sudo[55895]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:20 compute-0 sudo[56048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbaysexzcwvhwznhbixnzyhrnbcmtlbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723619.9416149-152-274933178968190/AnsiballZ_setup.py'
Dec 03 01:00:20 compute-0 sudo[56048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:20 compute-0 python3.9[56050]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:00:20 compute-0 sudo[56048]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:21 compute-0 sudo[56204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtmfpcnhbjbpebvvdicofjlaxydrrbej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723620.915563-160-10076922025969/AnsiballZ_stat.py'
Dec 03 01:00:21 compute-0 sudo[56204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:21 compute-0 python3.9[56206]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:00:21 compute-0 sudo[56204]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:21 compute-0 sshd-session[56077]: Invalid user ubnt from 80.94.95.116 port 50078
Dec 03 01:00:21 compute-0 sshd-session[56077]: Connection closed by invalid user ubnt 80.94.95.116 port 50078 [preauth]
Dec 03 01:00:22 compute-0 sudo[56356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suouwebhwfnjuidimwtkgmtowbuyjrzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723621.7560043-169-20386563373091/AnsiballZ_stat.py'
Dec 03 01:00:22 compute-0 sudo[56356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:22 compute-0 python3.9[56358]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:00:22 compute-0 sudo[56356]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:23 compute-0 sudo[56508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgepahfckjsypilklpxxwoplgvmozzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723622.6338553-179-207138102706333/AnsiballZ_command.py'
Dec 03 01:00:23 compute-0 sudo[56508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:23 compute-0 python3.9[56510]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:00:23 compute-0 sudo[56508]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:24 compute-0 sudo[56661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrshsqqsvxggjflhzkguxmwgchtoekhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723623.7905731-189-37564015588253/AnsiballZ_service_facts.py'
Dec 03 01:00:24 compute-0 sudo[56661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:24 compute-0 python3.9[56663]: ansible-service_facts Invoked
Dec 03 01:00:24 compute-0 network[56680]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:00:24 compute-0 network[56681]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:00:24 compute-0 network[56682]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:00:28 compute-0 sudo[56661]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:30 compute-0 sudo[56965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahlvcqrkomyjnqynrdzdamzabjxgmzev ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764723629.507947-204-250330416294233/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764723629.507947-204-250330416294233/args'
Dec 03 01:00:30 compute-0 sudo[56965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:30 compute-0 sudo[56965]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:30 compute-0 sudo[57132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyffuefbtaqbqalglcijjxqalxmuhmmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723630.5354452-215-190471311793642/AnsiballZ_dnf.py'
Dec 03 01:00:30 compute-0 sudo[57132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:31 compute-0 python3.9[57134]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:00:32 compute-0 sudo[57132]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:33 compute-0 sudo[57285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrepenbqcmoobulmllbcwfxmmtzcrohj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723632.7003584-228-143262322084130/AnsiballZ_package_facts.py'
Dec 03 01:00:33 compute-0 sudo[57285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:33 compute-0 python3.9[57287]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 03 01:00:33 compute-0 sudo[57285]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:34 compute-0 sudo[57437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwjstpmmgjpnbkdrnfpawwefhsrrjdec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723634.4930625-238-187471491673923/AnsiballZ_stat.py'
Dec 03 01:00:34 compute-0 sudo[57437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:35 compute-0 python3.9[57439]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:00:35 compute-0 sudo[57437]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:35 compute-0 sudo[57562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urnonlalljkwotgwqjsjnvlmmikianzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723634.4930625-238-187471491673923/AnsiballZ_copy.py'
Dec 03 01:00:35 compute-0 sudo[57562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:35 compute-0 python3.9[57564]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723634.4930625-238-187471491673923/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:00:35 compute-0 sudo[57562]: pam_unix(sudo:session): session closed for user root
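
The stat/copy pair above is the wire-level form of a template deployment: the controller renders chrony.conf.j2 locally (per _original_basename), stats the remote file, and ships the rendered source only when the SHA-1 differs; backup=True keeps the previous file. The originating task was roughly (name assumed):

    - name: Deploy /etc/chrony.conf from template
      ansible.builtin.template:
        src: chrony.conf.j2
        dest: /etc/chrony.conf
        mode: "0644"
        backup: true
      become: true
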
Dec 03 01:00:36 compute-0 sudo[57716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplpuvafcxdskrvrpobxabntbxxbhkoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723636.1788118-253-276213849071225/AnsiballZ_stat.py'
Dec 03 01:00:36 compute-0 sudo[57716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:36 compute-0 python3.9[57718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:00:36 compute-0 sudo[57716]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:37 compute-0 sudo[57841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hezolkcikmufntnoxlpqgftnkvoprljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723636.1788118-253-276213849071225/AnsiballZ_copy.py'
Dec 03 01:00:37 compute-0 sudo[57841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:37 compute-0 python3.9[57843]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723636.1788118-253-276213849071225/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:00:37 compute-0 sudo[57841]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:38 compute-0 sudo[57995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiblcniyaceqqwxoxamejmkqjsymztcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723638.1526022-274-243117193221570/AnsiballZ_lineinfile.py'
Dec 03 01:00:38 compute-0 sudo[57995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:38 compute-0 python3.9[57997]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:00:38 compute-0 sudo[57995]: pam_unix(sudo:session): session closed for user root
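
PEERNTP=no in /etc/sysconfig/network stops the DHCP client hooks from handing DHCP-supplied NTP servers to the local time daemon, so only the servers written into chrony.conf are used. Reconstructed task (name assumed):

    - name: Keep DHCP from injecting NTP peers
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: ^PEERNTP=
        line: PEERNTP=no
        create: true
        mode: "0644"
        backup: true
      become: true
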
Dec 03 01:00:39 compute-0 sudo[58149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ernucovhyprsyhddglhgeqonlntpkles ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723639.4624455-289-56416018253685/AnsiballZ_setup.py'
Dec 03 01:00:39 compute-0 sudo[58149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:40 compute-0 python3.9[58151]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:00:40 compute-0 sudo[58149]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:41 compute-0 sudo[58233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyhhzizrfyoptecmuqorkgxzvgccboki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723639.4624455-289-56416018253685/AnsiballZ_systemd.py'
Dec 03 01:00:41 compute-0 sudo[58233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:41 compute-0 python3.9[58235]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:00:41 compute-0 sudo[58233]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:42 compute-0 sudo[58387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpolqtmmrmrpavqyqzemrjgnjnbdauzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723642.0912757-305-98218747606807/AnsiballZ_setup.py'
Dec 03 01:00:42 compute-0 sudo[58387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:42 compute-0 python3.9[58389]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:00:43 compute-0 sudo[58387]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:43 compute-0 sudo[58471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfytcdtwbxlqridnhkcxpfuzxwzzwsvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723642.0912757-305-98218747606807/AnsiballZ_systemd.py'
Dec 03 01:00:43 compute-0 sudo[58471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:43 compute-0 python3.9[58473]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:00:43 compute-0 chronyd[799]: chronyd exiting
Dec 03 01:00:43 compute-0 systemd[1]: Stopping NTP client/server...
Dec 03 01:00:43 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 03 01:00:43 compute-0 systemd[1]: Stopped NTP client/server.
Dec 03 01:00:43 compute-0 systemd[1]: Starting NTP client/server...
Dec 03 01:00:43 compute-0 chronyd[58481]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 03 01:00:43 compute-0 chronyd[58481]: Frequency -27.935 +/- 0.470 ppm read from /var/lib/chrony/drift
Dec 03 01:00:43 compute-0 chronyd[58481]: Loaded seccomp filter (level 2)
Dec 03 01:00:43 compute-0 systemd[1]: Started NTP client/server.
Dec 03 01:00:43 compute-0 sudo[58471]: pam_unix(sudo:session): session closed for user root
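
Two systemd calls bracket the chrony work: the first (01:00:41) enables and starts the unit, the second (01:00:43) restarts it, which looks like the usual notify/handler pattern firing because the config files changed; chronyd confirms the restart by re-reading its drift file and loading the seccomp filter. A sketch of the handler side (handler name assumed):

    handlers:
      - name: Restart chronyd
        ansible.builtin.systemd:
          name: chronyd
          state: restarted
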
Dec 03 01:00:44 compute-0 sshd-session[53528]: Connection closed by 192.168.122.30 port 35306
Dec 03 01:00:44 compute-0 sshd-session[53525]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:00:44 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 03 01:00:44 compute-0 systemd[1]: session-11.scope: Consumed 31.032s CPU time.
Dec 03 01:00:44 compute-0 systemd-logind[800]: Session 11 logged out. Waiting for processes to exit.
Dec 03 01:00:44 compute-0 systemd-logind[800]: Removed session 11.
Dec 03 01:00:50 compute-0 sshd-session[58507]: Accepted publickey for zuul from 192.168.122.30 port 33402 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:00:50 compute-0 systemd-logind[800]: New session 12 of user zuul.
Dec 03 01:00:50 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 03 01:00:50 compute-0 sshd-session[58507]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:00:51 compute-0 sudo[58660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbtdlttmvikjdnizxhitdldakadvzvkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723651.0430188-22-254355089710291/AnsiballZ_file.py'
Dec 03 01:00:51 compute-0 sudo[58660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:51 compute-0 python3.9[58662]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:00:51 compute-0 sudo[58660]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:52 compute-0 sudo[58812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihnkjclaelsxptpdwndxxwirzohgtnei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723652.1736214-34-273263560277370/AnsiballZ_stat.py'
Dec 03 01:00:52 compute-0 sudo[58812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:52 compute-0 python3.9[58814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:00:52 compute-0 sudo[58812]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:53 compute-0 sudo[58935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppqyajqjdvcarazzzemvixakaswtellk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723652.1736214-34-273263560277370/AnsiballZ_copy.py'
Dec 03 01:00:53 compute-0 sudo[58935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:00:53 compute-0 python3.9[58937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723652.1736214-34-273263560277370/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:00:53 compute-0 sudo[58935]: pam_unix(sudo:session): session closed for user root
Dec 03 01:00:54 compute-0 sshd-session[58510]: Connection closed by 192.168.122.30 port 33402
Dec 03 01:00:54 compute-0 sshd-session[58507]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:00:54 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 03 01:00:54 compute-0 systemd[1]: session-12.scope: Consumed 2.061s CPU time.
Dec 03 01:00:54 compute-0 systemd-logind[800]: Session 12 logged out. Waiting for processes to exit.
Dec 03 01:00:54 compute-0 systemd-logind[800]: Removed session 12.
Dec 03 01:00:59 compute-0 sshd-session[58962]: Accepted publickey for zuul from 192.168.122.30 port 38938 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:00:59 compute-0 systemd-logind[800]: New session 13 of user zuul.
Dec 03 01:00:59 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 03 01:00:59 compute-0 sshd-session[58962]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:01:01 compute-0 python3.9[59115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:01:01 compute-0 CROND[59197]: (root) CMD (run-parts /etc/cron.hourly)
Dec 03 01:01:01 compute-0 run-parts[59200]: (/etc/cron.hourly) starting 0anacron
Dec 03 01:01:01 compute-0 anacron[59208]: Anacron started on 2025-12-03
Dec 03 01:01:01 compute-0 anacron[59208]: Will run job `cron.daily' in 14 min.
Dec 03 01:01:01 compute-0 anacron[59208]: Will run job `cron.weekly' in 34 min.
Dec 03 01:01:01 compute-0 anacron[59208]: Will run job `cron.monthly' in 54 min.
Dec 03 01:01:01 compute-0 anacron[59208]: Jobs will be executed sequentially
Dec 03 01:01:01 compute-0 run-parts[59210]: (/etc/cron.hourly) finished 0anacron
Dec 03 01:01:01 compute-0 CROND[59196]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 03 01:01:02 compute-0 sudo[59284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gufupdfiwquowkytxfmqgqcqardwyeij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723661.5220437-33-173932077270989/AnsiballZ_file.py'
Dec 03 01:01:02 compute-0 sudo[59284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:02 compute-0 python3.9[59286]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:02 compute-0 sudo[59284]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:03 compute-0 sudo[59459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shkgildydlzpkvzkzqamhjpbrnwvhfkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723662.5702708-41-55121153701005/AnsiballZ_stat.py'
Dec 03 01:01:03 compute-0 sudo[59459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:03 compute-0 python3.9[59461]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:03 compute-0 sudo[59459]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:04 compute-0 sudo[59582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybjtqaxfkofbkggdxolgkomftecjovmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723662.5702708-41-55121153701005/AnsiballZ_copy.py'
Dec 03 01:01:04 compute-0 sudo[59582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:04 compute-0 python3.9[59584]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764723662.5702708-41-55121153701005/.source.json _original_basename=.hpn5ugi2 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:04 compute-0 sudo[59582]: pam_unix(sudo:session): session closed for user root
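
The checksum logged for auth.json (bf21a9e8...) is the SHA-1 of the literal string "{}", so an empty credentials file was installed in this run. A populated containers auth file uses the auths map layout; in the sketch below the registry host and the base64 user:pass token are placeholders, not values from the log:

    - name: Install container registry credentials
      ansible.builtin.copy:
        dest: /root/.config/containers/auth.json
        owner: zuul
        group: zuul
        mode: "0660"
        content: |
          {
            "auths": {
              "registry.example.com": {
                "auth": "dXNlcjpwYXNz"
              }
            }
          }
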
Dec 03 01:01:05 compute-0 sudo[59734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeffcdptxmsrjflojwbroyucwhyvincp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723665.0302866-64-170354918919978/AnsiballZ_stat.py'
Dec 03 01:01:05 compute-0 sudo[59734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:05 compute-0 python3.9[59736]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:05 compute-0 sudo[59734]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:06 compute-0 sudo[59857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntkkquacdwkketppfrryfgfaxraxsxhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723665.0302866-64-170354918919978/AnsiballZ_copy.py'
Dec 03 01:01:06 compute-0 sudo[59857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:06 compute-0 python3.9[59859]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723665.0302866-64-170354918919978/.source _original_basename=.osbygo2v follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:06 compute-0 sudo[59857]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:07 compute-0 sudo[60009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdingjhnnpdxhifaoabnxrqyxnuzzudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723666.6246622-80-182485188147145/AnsiballZ_file.py'
Dec 03 01:01:07 compute-0 sudo[60009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:07 compute-0 python3.9[60011]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:01:07 compute-0 sudo[60009]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:07 compute-0 sudo[60161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqqxjessdkvlckxwksrfskeydarwgpsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723667.4517655-88-59904245999159/AnsiballZ_stat.py'
Dec 03 01:01:07 compute-0 sudo[60161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:08 compute-0 python3.9[60163]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:08 compute-0 sudo[60161]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:08 compute-0 sudo[60284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlemoefadngjkngjeezihdpobdtpuzet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723667.4517655-88-59904245999159/AnsiballZ_copy.py'
Dec 03 01:01:08 compute-0 sudo[60284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:08 compute-0 python3.9[60286]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723667.4517655-88-59904245999159/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:01:08 compute-0 sudo[60284]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:09 compute-0 sudo[60436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmhwclozurnrqmlfcrdauxxmzyvzzbyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723669.1934643-88-38801202557653/AnsiballZ_stat.py'
Dec 03 01:01:09 compute-0 sudo[60436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:09 compute-0 python3.9[60438]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:09 compute-0 sudo[60436]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:10 compute-0 sudo[60559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthdxnobvnsptwzvdrdjfcbaasyfudnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723669.1934643-88-38801202557653/AnsiballZ_copy.py'
Dec 03 01:01:10 compute-0 sudo[60559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:10 compute-0 python3.9[60561]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723669.1934643-88-38801202557653/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:01:10 compute-0 sudo[60559]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:11 compute-0 sudo[60711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhgebdctynakrlefhrakjjyfjmnmbbmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723670.6611917-117-256098832255530/AnsiballZ_file.py'
Dec 03 01:01:11 compute-0 sudo[60711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:11 compute-0 python3.9[60713]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:11 compute-0 sudo[60711]: pam_unix(sudo:session): session closed for user root
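
One oddity worth noting: this file task logs mode=420 while every other task logs a string such as 0644. 420 is simply the decimal value of octal 0644; it is what the module receives when a playbook writes mode: 0644 without quotes and YAML parses it as an octal integer. Harmless here, but quoting is the safe form:

    - name: Ensure the systemd preset directory exists
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: "0644"   # quote it; an unquoted 0644 reaches the module as 420
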
Dec 03 01:01:12 compute-0 sudo[60863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evrihtkytuijijbiamfcssbseefjwjgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723671.487979-125-257188047968297/AnsiballZ_stat.py'
Dec 03 01:01:12 compute-0 sudo[60863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:12 compute-0 python3.9[60865]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:12 compute-0 sudo[60863]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:12 compute-0 sudo[60986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezihsnipzujtwweddkyqimllmbktqmwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723671.487979-125-257188047968297/AnsiballZ_copy.py'
Dec 03 01:01:12 compute-0 sudo[60986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:13 compute-0 python3.9[60988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723671.487979-125-257188047968297/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:13 compute-0 sudo[60986]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:13 compute-0 sudo[61138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riqqbgyshtmuemmumfhbfzfybfyzogsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723673.2694209-140-1634923374622/AnsiballZ_stat.py'
Dec 03 01:01:13 compute-0 sudo[61138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:13 compute-0 python3.9[61140]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:13 compute-0 sudo[61138]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:14 compute-0 sudo[61261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuyheqhxvcrtkfyxofdbczdgieatbqpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723673.2694209-140-1634923374622/AnsiballZ_copy.py'
Dec 03 01:01:14 compute-0 sudo[61261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:14 compute-0 python3.9[61263]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723673.2694209-140-1634923374622/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:14 compute-0 sudo[61261]: pam_unix(sudo:session): session closed for user root
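
The preset body itself is not captured in the log; a 91-*.preset that makes systemctl preset enable the unit would hold a single enable line. The content below is a reconstruction under that assumption:

    - name: Install the edpm-container-shutdown preset
      ansible.builtin.copy:
        dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset
        mode: "0644"
        content: |
          enable edpm-container-shutdown.service
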
Dec 03 01:01:15 compute-0 sudo[61413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcbhjzzqvlovxxhgrsqkfoctbefovzdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723674.7643874-155-269275797843918/AnsiballZ_systemd.py'
Dec 03 01:01:15 compute-0 sudo[61413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:15 compute-0 python3.9[61415]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:01:15 compute-0 systemd[1]: Reloading.
Dec 03 01:01:16 compute-0 systemd-rc-local-generator[61438]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:01:16 compute-0 systemd-sysv-generator[61444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:01:16 compute-0 systemd[1]: Reloading.
Dec 03 01:01:16 compute-0 systemd-rc-local-generator[61482]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:01:16 compute-0 systemd-sysv-generator[61486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:01:16 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 03 01:01:16 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 03 01:01:16 compute-0 sudo[61413]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:17 compute-0 sudo[61642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqllbwpxxnzbuoffdtzvzfwrmdjwifik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723676.7405562-163-202003708782574/AnsiballZ_stat.py'
Dec 03 01:01:17 compute-0 sudo[61642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:17 compute-0 python3.9[61644]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:17 compute-0 sudo[61642]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:17 compute-0 sudo[61765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuadhwrohhcnxyyvbmpdtzzodthydsyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723676.7405562-163-202003708782574/AnsiballZ_copy.py'
Dec 03 01:01:17 compute-0 sudo[61765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:17 compute-0 python3.9[61767]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723676.7405562-163-202003708782574/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:18 compute-0 sudo[61765]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:18 compute-0 sudo[61917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsrbhqmvpstisjotthwwlllgzfcjuovv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723678.2125723-178-20500503554652/AnsiballZ_stat.py'
Dec 03 01:01:18 compute-0 sudo[61917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:18 compute-0 python3.9[61919]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:18 compute-0 sudo[61917]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:19 compute-0 sudo[62040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behqsvfrlprwwczpuovjibjifpquhofr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723678.2125723-178-20500503554652/AnsiballZ_copy.py'
Dec 03 01:01:19 compute-0 sudo[62040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:19 compute-0 python3.9[62042]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723678.2125723-178-20500503554652/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:19 compute-0 sudo[62040]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:20 compute-0 sudo[62192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbmtkmyeqcbknrohfdfbtwihmonbecv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723679.9780226-193-190794345599024/AnsiballZ_systemd.py'
Dec 03 01:01:20 compute-0 sudo[62192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:20 compute-0 python3.9[62194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:01:20 compute-0 systemd[1]: Reloading.
Dec 03 01:01:20 compute-0 systemd-sysv-generator[62228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:01:20 compute-0 systemd-rc-local-generator[62225]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:01:21 compute-0 systemd[1]: Reloading.
Dec 03 01:01:21 compute-0 systemd-rc-local-generator[62260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:01:21 compute-0 systemd-sysv-generator[62264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:01:21 compute-0 systemd[1]: Starting Create netns directory...
Dec 03 01:01:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 03 01:01:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 03 01:01:21 compute-0 systemd[1]: Finished Create netns directory.
Dec 03 01:01:21 compute-0 sudo[62192]: pam_unix(sudo:session): session closed for user root
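
The mount unit that flickers here (run-netns-placeholder.mount) is the footprint of ip netns add, which bind-mounts a namespace under /run/netns; creating and deleting a throwaway namespace forces the /run/netns directory into existence before any container needs it. A plausible unit body consistent with these messages (a reconstruction; the real ExecStart lines are not in the log):

    - name: Install netns-placeholder.service
      ansible.builtin.copy:
        dest: /etc/systemd/system/netns-placeholder.service
        mode: "0644"
        content: |
          [Unit]
          Description=Create netns directory

          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/ip netns add placeholder
          ExecStart=/usr/sbin/ip netns delete placeholder

          [Install]
          WantedBy=multi-user.target
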
Dec 03 01:01:22 compute-0 python3.9[62422]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:01:22 compute-0 network[62439]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:01:22 compute-0 network[62440]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:01:22 compute-0 network[62441]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:01:27 compute-0 sudo[62701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utkhglhhkmbvoemyfcdpikyttbvjxqys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723687.2586098-209-261240512140131/AnsiballZ_systemd.py'
Dec 03 01:01:27 compute-0 sudo[62701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:28 compute-0 python3.9[62703]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:01:28 compute-0 systemd[1]: Reloading.
Dec 03 01:01:28 compute-0 systemd-rc-local-generator[62733]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:01:28 compute-0 systemd-sysv-generator[62736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:01:28 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 03 01:01:28 compute-0 iptables.init[62743]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 03 01:01:28 compute-0 iptables.init[62743]: iptables: Flushing firewall rules: [  OK  ]
Dec 03 01:01:28 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 03 01:01:28 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 03 01:01:28 compute-0 sudo[62701]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:29 compute-0 sudo[62937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqexldzulfthvbeomzqoelfworvyyhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723688.95924-209-151709494101228/AnsiballZ_systemd.py'
Dec 03 01:01:29 compute-0 sudo[62937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:29 compute-0 python3.9[62939]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:01:30 compute-0 sudo[62937]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:31 compute-0 sudo[63091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jasomsmtzajntxjpsmurhggoyhgnbbvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723691.0628045-225-200062186149650/AnsiballZ_systemd.py'
Dec 03 01:01:31 compute-0 sudo[63091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:31 compute-0 python3.9[63093]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:01:31 compute-0 systemd[1]: Reloading.
Dec 03 01:01:31 compute-0 systemd-rc-local-generator[63119]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:01:31 compute-0 systemd-sysv-generator[63125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:01:32 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 03 01:01:32 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 03 01:01:32 compute-0 sudo[63091]: pam_unix(sudo:session): session closed for user root
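
Taken together, 01:01:27 through 01:01:32 is the firewall backend swap: the legacy iptables and ip6tables services are stopped and disabled, then nftables is enabled and started. As tasks (names assumed; the log runs these as three separate module calls rather than a loop):

    - name: Disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable and start nftables
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true
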
Dec 03 01:01:33 compute-0 sudo[63283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhuexjakqmnxrntlrpiurembigngyqrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723692.3901963-233-104382401639658/AnsiballZ_command.py'
Dec 03 01:01:33 compute-0 sudo[63283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:33 compute-0 python3.9[63285]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:01:33 compute-0 sudo[63283]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:34 compute-0 sudo[63436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vybgphtslyadqdmnpkdpbqfxmrxfpjkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723693.9896095-247-196790145501321/AnsiballZ_stat.py'
Dec 03 01:01:34 compute-0 sudo[63436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:34 compute-0 python3.9[63438]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:34 compute-0 sudo[63436]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:35 compute-0 sudo[63561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbqxoctmugwgxoaedjxidvnzyuahmyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723693.9896095-247-196790145501321/AnsiballZ_copy.py'
Dec 03 01:01:35 compute-0 sudo[63561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:35 compute-0 python3.9[63563]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723693.9896095-247-196790145501321/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:35 compute-0 sudo[63561]: pam_unix(sudo:session): session closed for user root
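
The sshd_config copy is the one task in this run that uses validate: the rendered file is checked with sshd -T -f against the temporary path (%s) before it replaces /etc/ssh/sshd_config, so a bad directive aborts the copy instead of locking out SSH. Sketch (name assumed; per _original_basename the source is a rendered sshd_config_block.j2):

    - name: Deploy sshd_config with pre-flight validation
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s
      become: true

The reload that follows at 01:01:36 delivers SIGHUP, so sshd re-reads the file without dropping the established Ansible session.
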
Dec 03 01:01:36 compute-0 sudo[63714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvycaesdhawcthshvyfuhlprntteeuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723695.65974-262-44960469976812/AnsiballZ_systemd.py'
Dec 03 01:01:36 compute-0 sudo[63714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:36 compute-0 python3.9[63716]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:01:36 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 03 01:01:36 compute-0 sshd[1005]: Received SIGHUP; restarting.
Dec 03 01:01:36 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 03 01:01:36 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 03 01:01:36 compute-0 sshd[1005]: Server listening on :: port 22.
Dec 03 01:01:36 compute-0 sudo[63714]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:37 compute-0 sudo[63870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhvgrcsaeqbwwdsizpvfcxoudulynvqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723696.6630023-270-204974223437249/AnsiballZ_file.py'
Dec 03 01:01:37 compute-0 sudo[63870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:37 compute-0 python3.9[63872]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:37 compute-0 sudo[63870]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:38 compute-0 sudo[64022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csjxvkxldmuesxlpnsakbfokckxbmumb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723697.6273012-278-104958785570483/AnsiballZ_stat.py'
Dec 03 01:01:38 compute-0 sudo[64022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:38 compute-0 python3.9[64024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:38 compute-0 sudo[64022]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:38 compute-0 sudo[64145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpuvwncvulgyockhiwyykaixdknefulm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723697.6273012-278-104958785570483/AnsiballZ_copy.py'
Dec 03 01:01:38 compute-0 sudo[64145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:38 compute-0 python3.9[64147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723697.6273012-278-104958785570483/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:38 compute-0 sudo[64145]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:39 compute-0 sudo[64297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlnlocuitdobgfxhxtewxatsucjzistw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723699.2849238-296-196472446984028/AnsiballZ_timezone.py'
Dec 03 01:01:39 compute-0 sudo[64297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:40 compute-0 python3.9[64299]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 03 01:01:40 compute-0 systemd[1]: Starting Time & Date Service...
Dec 03 01:01:40 compute-0 systemd[1]: Started Time & Date Service.
Dec 03 01:01:40 compute-0 sudo[64297]: pam_unix(sudo:session): session closed for user root
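
community.general.timezone talks to timedated over D-Bus, which is why systemd starts "Time & Date Service" on demand right here. The task itself is a one-liner (name assumed):

    - name: Set the system timezone to UTC
      community.general.timezone:
        name: UTC
      become: true
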
Dec 03 01:01:40 compute-0 sudo[64453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmbblkmfwrfkuhpoguqybwokklsemuds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723700.5152705-305-205701072816311/AnsiballZ_file.py'
Dec 03 01:01:40 compute-0 sudo[64453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:41 compute-0 python3.9[64455]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:41 compute-0 sudo[64453]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:41 compute-0 sudo[64605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbnvgeinbqsrmwaxevlzipnnfxelyfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723701.5408926-313-207055980954492/AnsiballZ_stat.py'
Dec 03 01:01:41 compute-0 sudo[64605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:42 compute-0 python3.9[64607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:42 compute-0 sudo[64605]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:42 compute-0 sudo[64728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utfpfswylaxgbdyxehstoxjjjinwkeyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723701.5408926-313-207055980954492/AnsiballZ_copy.py'
Dec 03 01:01:42 compute-0 sudo[64728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:42 compute-0 python3.9[64730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723701.5408926-313-207055980954492/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:42 compute-0 sudo[64728]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:43 compute-0 sudo[64880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eteeghievgmaafypcwwpamqvtoxpnied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723703.0046923-328-197381561334933/AnsiballZ_stat.py'
Dec 03 01:01:43 compute-0 sudo[64880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:43 compute-0 python3.9[64882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:43 compute-0 sudo[64880]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:44 compute-0 sudo[65003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rposfasqdcbbhnxtygfjaxrlrhqdasja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723703.0046923-328-197381561334933/AnsiballZ_copy.py'
Dec 03 01:01:44 compute-0 sudo[65003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:44 compute-0 python3.9[65005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723703.0046923-328-197381561334933/.source.yaml _original_basename=.mycgpd4_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:44 compute-0 sudo[65003]: pam_unix(sudo:session): session closed for user root
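
The checksum for edpm-nftables-user-rules.yaml (97d170e1...) is the SHA-1 of the literal string "[]": the operator rule file deployed in this run is an empty YAML list. Reconstructed under that assumption (name assumed):

    - name: Install operator-defined nftables rules
      ansible.builtin.copy:
        dest: /var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml
        mode: "0644"
        content: |
          []
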
Dec 03 01:01:45 compute-0 sudo[65155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsiruyzjzfqbzybxpkrnbmvvizeuiaix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723704.5087607-343-136635693966243/AnsiballZ_stat.py'
Dec 03 01:01:45 compute-0 sudo[65155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:45 compute-0 python3.9[65157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:45 compute-0 sudo[65155]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:45 compute-0 sudo[65278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikgljdpekmonqqpudgsetvzzenemxnsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723704.5087607-343-136635693966243/AnsiballZ_copy.py'
Dec 03 01:01:45 compute-0 sudo[65278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:46 compute-0 python3.9[65280]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723704.5087607-343-136635693966243/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:46 compute-0 sudo[65278]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:46 compute-0 sudo[65430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqrawohioapcqzetxacfisveudwhggqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723706.2544918-358-179345615106606/AnsiballZ_command.py'
Dec 03 01:01:46 compute-0 sudo[65430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:46 compute-0 python3.9[65432]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:01:46 compute-0 sudo[65430]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:47 compute-0 sudo[65583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmlycwniaahifblcogvwexgcfywbtgjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723707.129115-366-263362045366112/AnsiballZ_command.py'
Dec 03 01:01:47 compute-0 sudo[65583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:47 compute-0 python3.9[65585]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:01:47 compute-0 sudo[65583]: pam_unix(sudo:session): session closed for user root
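
These two commands pair up: nft -f atomically loads the converted iptables ruleset from /etc/nftables/iptables.nft, and nft -j list ruleset dumps the now-active ruleset as JSON for the custom edpm_nftables_from_files step that follows, which reads the YAML files under /var/lib/edpm-config/firewall. Sketched as tasks (names and the register variable are assumptions):

    - name: Load the converted iptables ruleset
      ansible.builtin.command: nft -f /etc/nftables/iptables.nft
      become: true

    - name: Snapshot the active ruleset as JSON
      ansible.builtin.command: nft -j list ruleset
      register: ruleset_snapshot
      changed_when: false
      become: true
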
Dec 03 01:01:48 compute-0 sudo[65736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyjoofudfjhadxaovfmsrpinpuxuvtbu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764723708.0163047-374-40185539802111/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:01:48 compute-0 sudo[65736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:48 compute-0 python3[65738]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:01:48 compute-0 sudo[65736]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:49 compute-0 sudo[65888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orzhxfjqmxlogfdvyljxtyqligsxfiof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723709.168731-382-196107179830500/AnsiballZ_stat.py'
Dec 03 01:01:49 compute-0 sudo[65888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:49 compute-0 python3.9[65890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:49 compute-0 sudo[65888]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:50 compute-0 sudo[66011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wleexkowutupeavqyovdddeajnxwjoit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723709.168731-382-196107179830500/AnsiballZ_copy.py'
Dec 03 01:01:50 compute-0 sudo[66011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:50 compute-0 python3.9[66013]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723709.168731-382-196107179830500/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:50 compute-0 sudo[66011]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:51 compute-0 sudo[66163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkkwteujhuogtlpizzqpktiziekgkaxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723710.73767-397-263405754327380/AnsiballZ_stat.py'
Dec 03 01:01:51 compute-0 sudo[66163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:51 compute-0 python3.9[66165]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:51 compute-0 sudo[66163]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:51 compute-0 sudo[66286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ianenkciiomiyunzbnaagthufusgkmwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723710.73767-397-263405754327380/AnsiballZ_copy.py'
Dec 03 01:01:51 compute-0 sudo[66286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:52 compute-0 python3.9[66288]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723710.73767-397-263405754327380/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:52 compute-0 sudo[66286]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:52 compute-0 sudo[66438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frfyhzvzbsjnnvoogqktrrfbyenhvdmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723712.5029597-412-92431733530844/AnsiballZ_stat.py'
Dec 03 01:01:52 compute-0 sudo[66438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:53 compute-0 python3.9[66440]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:53 compute-0 sudo[66438]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:53 compute-0 sudo[66561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aygqzdybkmbvlfchnjbcxfrsxtbzhork ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723712.5029597-412-92431733530844/AnsiballZ_copy.py'
Dec 03 01:01:53 compute-0 sudo[66561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:53 compute-0 python3.9[66563]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723712.5029597-412-92431733530844/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:53 compute-0 sudo[66561]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:54 compute-0 sudo[66713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzmsopwtjywghgdvsqtowbmknnqxgoqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723714.0065947-427-170385453476110/AnsiballZ_stat.py'
Dec 03 01:01:54 compute-0 sudo[66713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:54 compute-0 python3.9[66715]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:54 compute-0 sudo[66713]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:55 compute-0 sudo[66836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkrvnuqrdurxhabsutqhpetpxglzrdmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723714.0065947-427-170385453476110/AnsiballZ_copy.py'
Dec 03 01:01:55 compute-0 sudo[66836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:55 compute-0 python3.9[66838]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723714.0065947-427-170385453476110/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:55 compute-0 sudo[66836]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:56 compute-0 sudo[66988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjsuiaqlrmeppsqyvzjcrttsubylpwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723715.5116231-442-230110429081055/AnsiballZ_stat.py'
Dec 03 01:01:56 compute-0 sudo[66988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:56 compute-0 python3.9[66990]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:01:56 compute-0 sudo[66988]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:56 compute-0 sudo[67111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pofgyvtwqqnxjotwfpxlvgidvlovgyig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723715.5116231-442-230110429081055/AnsiballZ_copy.py'
Dec 03 01:01:56 compute-0 sudo[67111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:57 compute-0 python3.9[67113]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723715.5116231-442-230110429081055/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:57 compute-0 sudo[67111]: pam_unix(sudo:session): session closed for user root
Dec 03 01:01:57 compute-0 sudo[67263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovsfosawumeufxhojmhrbaknajimudzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723717.3456283-457-42579534002821/AnsiballZ_file.py'
Dec 03 01:01:57 compute-0 sudo[67263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:57 compute-0 python3.9[67265]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:01:58 compute-0 sudo[67263]: pam_unix(sudo:session): session closed for user root
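[annotation] Touching /etc/nftables/edpm-rules.nft.changed plants a sentinel: a later session (at 01:02:28 below) stats this path, reloads the rule set only if it exists, then deletes it, so rule reloads happen once per content change. In sketch form:

    import pathlib

    MARKER = pathlib.Path("/etc/nftables/edpm-rules.nft.changed")

    def mark_rules_changed():
        MARKER.touch(mode=0o600)        # the touch task above

    def rules_reload_pending():
        return MARKER.exists()          # the stat task at 01:02:28

    def clear_marker():
        MARKER.unlink(missing_ok=True)  # the file state=absent task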
Dec 03 01:01:58 compute-0 sudo[67415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueeppeicphzanulgicpibknpqsyuttxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723718.2593539-465-60688378617899/AnsiballZ_command.py'
Dec 03 01:01:58 compute-0 sudo[67415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:01:58 compute-0 python3.9[67417]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:01:58 compute-0 sudo[67415]: pam_unix(sudo:session): session closed for user root
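[annotation] Before anything is committed, the five fragments are concatenated in dependency order (chains first, jumps last) and fed to nft in check-only mode: -c parses and validates the assembled ruleset without touching the kernel, and -f - reads it from stdin. The same pipeline, expressed directly:

    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    # Same order as the cat in the task above; chains must be defined
    # before the rules and jumps that reference them.
    blob = "".join(open(f).read() for f in FRAGMENTS)
    subprocess.run(["nft", "-c", "-f", "-"], input=blob, text=True, check=True)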
Dec 03 01:01:59 compute-0 sudo[67574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibizpuovbmdnrdhnreyshkbbdgcyafsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723719.198023-473-266446070110444/AnsiballZ_blockinfile.py'
Dec 03 01:01:59 compute-0 sudo[67574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:00 compute-0 python3.9[67576]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:02:00 compute-0 sudo[67574]: pam_unix(sudo:session): session closed for user root
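[annotation] With the parameters logged above (marker=# {mark} ANSIBLE MANAGED BLOCK, marker_begin=BEGIN, marker_end=END), the block blockinfile maintains in /etc/sysconfig/nftables.conf renders as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Because validate=nft -c -f %s is set, Ansible writes the proposed file to a temporary path, substitutes it for %s, and only installs the edit if that check passes. Include order matters for boot-time loading: chains are declared before the rules that populate them, and the jump rules that wire everything into the base chains come last.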
Dec 03 01:02:00 compute-0 sudo[67727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jziikqgwilhfwrfpfjbxevcaijaudxfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723720.420119-482-23969030080109/AnsiballZ_file.py'
Dec 03 01:02:00 compute-0 sudo[67727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:01 compute-0 python3.9[67729]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:02:01 compute-0 sudo[67727]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:01 compute-0 sudo[67879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wodysvjbgmgcxhotukbryjqkvjdpuvyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723721.2346406-482-96835298094540/AnsiballZ_file.py'
Dec 03 01:02:01 compute-0 sudo[67879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:01 compute-0 python3.9[67881]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:02:01 compute-0 sudo[67879]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:02 compute-0 sudo[68031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiupafxvarlhpezlycpozteptjrhnuqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723722.1088502-497-256159750315220/AnsiballZ_mount.py'
Dec 03 01:02:02 compute-0 sudo[68031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:02 compute-0 python3.9[68033]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 03 01:02:02 compute-0 sudo[68031]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:03 compute-0 sudo[68184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tepimbalofkgibdvtwlclqjotiqzxnph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723723.1437495-497-279878675180663/AnsiballZ_mount.py'
Dec 03 01:02:03 compute-0 sudo[68184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:03 compute-0 python3.9[68186]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 03 01:02:03 compute-0 sudo[68184]: pam_unix(sudo:session): session closed for user root
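[annotation] ansible.posix.mount with state=mounted and boot=True both mounts the filesystem immediately and persists it in /etc/fstab. Given the logged parameters (src=none, fstype=hugetlbfs, opts=pagesize=..., dump=0, passno=0), the resulting entries are equivalent to:

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0

The mount points were created just above as owner=zuul, group=hugetlbfs, mode=0775, so non-root members of the hugetlbfs group can create files (and therefore map huge pages) under them.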
Dec 03 01:02:04 compute-0 sshd-session[58965]: Connection closed by 192.168.122.30 port 38938
Dec 03 01:02:04 compute-0 sshd-session[58962]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:02:04 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 03 01:02:04 compute-0 systemd[1]: session-13.scope: Consumed 45.088s CPU time.
Dec 03 01:02:04 compute-0 systemd-logind[800]: Session 13 logged out. Waiting for processes to exit.
Dec 03 01:02:04 compute-0 systemd-logind[800]: Removed session 13.
Dec 03 01:02:09 compute-0 sshd-session[68212]: Accepted publickey for zuul from 192.168.122.30 port 54644 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:02:09 compute-0 systemd-logind[800]: New session 14 of user zuul.
Dec 03 01:02:09 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 03 01:02:09 compute-0 sshd-session[68212]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:02:10 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 03 01:02:10 compute-0 sudo[68367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxlixigktbgwgvuzazvlxorkswxaucqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723729.8785157-16-197746522189223/AnsiballZ_tempfile.py'
Dec 03 01:02:10 compute-0 sudo[68367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:10 compute-0 python3.9[68369]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 03 01:02:10 compute-0 sudo[68367]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:11 compute-0 sudo[68519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrmgouulykteldtxyhbntliaxqlplgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723730.964166-28-201703992435679/AnsiballZ_stat.py'
Dec 03 01:02:11 compute-0 sudo[68519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:11 compute-0 python3.9[68521]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:02:11 compute-0 sudo[68519]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:12 compute-0 sudo[68671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfmortivgkbzzalyufrlwztgrrpdufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723731.9667733-38-222315226872665/AnsiballZ_setup.py'
Dec 03 01:02:12 compute-0 sudo[68671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:12 compute-0 python3.9[68673]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:02:12 compute-0 sudo[68671]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:13 compute-0 sudo[68823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esbpghttlcqcuilfxfcjfhmulctgydiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723733.2522972-47-239158781983372/AnsiballZ_blockinfile.py'
Dec 03 01:02:13 compute-0 sudo[68823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:14 compute-0 python3.9[68825]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUXzfc0dZJxCJJ4PEHADvL0LyRTIDw765KVVRPjKe66bZHCDrMnH3lZh13FtxojtEeAMtDjWC+H3ZGbvKAjyg6wN6ZmxRsL7o57jFWbBEQCHr3VQojAmFhu1UrX7NiAqOVCHai4lYrpddO28T1lK3oP3KKbw3gMA9o0GCA5TlMf5uAu10Zmp6u/NuST5GBQqc8D2ID2cZ5OL+IJ5OedhsuV0SutU2S7A/ua95d57ddgc8ltJh/JzrnYCjHsD4NNKpp1HDuLXzKlMVFpbxi5ihzlepdP4BMWtBqKzvoCCD+KxwXBNVjKLo57B/h+kfTNX/PI8IkDAGLOxYZyPozHtsLiKtTLao7Q1nU67ZcSZbDPBluTaBcUuiS12fEsU2SjMVNRPDFBKj8pn5cXmIZJaLccIvvWYr4u9xIEA1aX0IjZS9FEHD+eVLVe3HkQ+rFJ2WgMARupAMDmyso43Cje+xIL0vZYayq3PyCWhVln1wW80k/cY/5JCqhzF2lelqLBlU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICuIgcpw897dA3mGBxBK8DwsvfOOhRnRBasT73h7OlLn
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITA4C6TXl/AXsVGH1teKmoFi3piNxhosC0B5paSBiifwK5pyHq3w8pYOtVe+KhAjGKZJREVbl0k3rnMeNo31ps=
                                             create=True mode=0644 path=/tmp/ansible.tbw3opq2 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:02:14 compute-0 sudo[68823]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:14 compute-0 sudo[68975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbnuweaoaykpgkpxznrmdhwmloccyzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723734.2730575-55-128117727270388/AnsiballZ_command.py'
Dec 03 01:02:14 compute-0 sudo[68975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:15 compute-0 python3.9[68977]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tbw3opq2' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:02:15 compute-0 sudo[68975]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:15 compute-0 sudo[69129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mclixivgrakqyloxygaspiczzmlusmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723735.3380196-63-14162611644677/AnsiballZ_file.py'
Dec 03 01:02:15 compute-0 sudo[69129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:16 compute-0 python3.9[69131]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tbw3opq2 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:02:16 compute-0 sudo[69129]: pam_unix(sudo:session): session closed for user root
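[annotation] Taken together, session 14 rebuilds the system-wide known_hosts file: a root-owned temp file is created, the host keys gathered via the ssh_host_key_*_public fact subsets are written into a managed block in it, the shell task copies its contents over /etc/ssh/ssh_known_hosts, and the temp file is removed. Roughly, in compressed form:

    import pathlib
    import tempfile

    KNOWN_HOSTS = pathlib.Path("/etc/ssh/ssh_known_hosts")

    def publish_host_keys(entries):
        # entries: "names,addr key-type base64" lines, one per gathered
        # ssh_host_key_* fact, as in the blockinfile block above.
        tmp = tempfile.NamedTemporaryFile("w", prefix="ansible.", delete=False)
        with tmp:
            tmp.write("# BEGIN ANSIBLE MANAGED BLOCK\n")
            for line in entries:
                tmp.write(line + "\n")
            tmp.write("# END ANSIBLE MANAGED BLOCK\n")
        # Equivalent of the shell task: cat $tmp > /etc/ssh/ssh_known_hosts
        KNOWN_HOSTS.write_text(pathlib.Path(tmp.name).read_text())
        pathlib.Path(tmp.name).unlink()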
Dec 03 01:02:16 compute-0 sshd-session[68215]: Connection closed by 192.168.122.30 port 54644
Dec 03 01:02:16 compute-0 sshd-session[68212]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:02:16 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 03 01:02:16 compute-0 systemd[1]: session-14.scope: Consumed 4.384s CPU time.
Dec 03 01:02:16 compute-0 systemd-logind[800]: Session 14 logged out. Waiting for processes to exit.
Dec 03 01:02:16 compute-0 systemd-logind[800]: Removed session 14.
Dec 03 01:02:22 compute-0 sshd-session[69156]: Accepted publickey for zuul from 192.168.122.30 port 42492 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:02:22 compute-0 systemd-logind[800]: New session 15 of user zuul.
Dec 03 01:02:22 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 03 01:02:22 compute-0 sshd-session[69156]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:02:24 compute-0 python3.9[69309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:02:25 compute-0 sudo[69463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umwlhtpgrxpaswslaapzfiftfmwgfyqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723744.557392-32-256074522933487/AnsiballZ_systemd.py'
Dec 03 01:02:25 compute-0 sudo[69463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:25 compute-0 python3.9[69465]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 03 01:02:25 compute-0 sudo[69463]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:26 compute-0 sudo[69617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klnhaaszferdgjgjnsecrdwbiobdcxsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723745.8811395-40-21919549691585/AnsiballZ_systemd.py'
Dec 03 01:02:26 compute-0 sudo[69617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:26 compute-0 python3.9[69619]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:02:26 compute-0 sudo[69617]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:27 compute-0 sudo[69770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdazcuwcojledlznbyujahweydryymku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723746.864141-49-58622538120261/AnsiballZ_command.py'
Dec 03 01:02:27 compute-0 sudo[69770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:27 compute-0 python3.9[69772]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:02:27 compute-0 sudo[69770]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:28 compute-0 sudo[69923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seyworhbwjpwmnpdztudxpebbzwdchhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723748.0162773-57-210100899800395/AnsiballZ_stat.py'
Dec 03 01:02:28 compute-0 sudo[69923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:28 compute-0 python3.9[69925]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:02:28 compute-0 sudo[69923]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:29 compute-0 sudo[70077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhpmaufgsuabxpgiinhtcigphpwrsgtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723749.0249977-65-192577521539182/AnsiballZ_command.py'
Dec 03 01:02:29 compute-0 sudo[70077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:29 compute-0 python3.9[70079]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:02:29 compute-0 sudo[70077]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:30 compute-0 sudo[70232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbtxivrvutcrlntzvafxagonrcprvghf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723749.9188602-73-265629464657419/AnsiballZ_file.py'
Dec 03 01:02:30 compute-0 sudo[70232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:30 compute-0 python3.9[70234]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:02:30 compute-0 sudo[70232]: pam_unix(sudo:session): session closed for user root
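[annotation] Session 15 is the activation pass for the files staged between 01:01:47 and 01:02:00: edpm-chains.nft is applied on every run (its table/chain declarations are evidently safe to repeat), and since the stat at 01:02:28 found the .changed sentinel, the flush, rules and update-jumps fragments are piped through a single nft -f - invocation, which commits as one atomic transaction, after which the sentinel is removed. An equivalent sketch:

    import pathlib
    import subprocess

    NFT = pathlib.Path("/etc/nftables")
    MARKER = NFT / "edpm-rules.nft.changed"

    # Chains are (re)declared on every pass.
    subprocess.run(["nft", "-f", str(NFT / "edpm-chains.nft")], check=True)

    if MARKER.exists():
        # One nft invocation is one atomic transaction: the old rules stay
        # in place unless the whole flush + reload parses and commits.
        blob = "".join(
            (NFT / name).read_text()
            for name in ("edpm-flushes.nft", "edpm-rules.nft",
                         "edpm-update-jumps.nft")
        )
        subprocess.run(["nft", "-f", "-"], input=blob, text=True, check=True)
        MARKER.unlink()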
Dec 03 01:02:31 compute-0 sshd-session[69159]: Connection closed by 192.168.122.30 port 42492
Dec 03 01:02:31 compute-0 sshd-session[69156]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:02:31 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 03 01:02:31 compute-0 systemd[1]: session-15.scope: Consumed 5.674s CPU time.
Dec 03 01:02:31 compute-0 systemd-logind[800]: Session 15 logged out. Waiting for processes to exit.
Dec 03 01:02:31 compute-0 systemd-logind[800]: Removed session 15.
Dec 03 01:02:37 compute-0 sshd-session[70259]: Accepted publickey for zuul from 192.168.122.30 port 54658 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:02:37 compute-0 systemd-logind[800]: New session 16 of user zuul.
Dec 03 01:02:37 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 03 01:02:37 compute-0 sshd-session[70259]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:02:38 compute-0 python3.9[70412]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:02:39 compute-0 sudo[70566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxntouzdgkvmruezberipdhpleftufjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723758.8051407-34-193659726780046/AnsiballZ_setup.py'
Dec 03 01:02:39 compute-0 sudo[70566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:39 compute-0 python3.9[70568]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:02:39 compute-0 sudo[70566]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:40 compute-0 sudo[70650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwnhkxntrtbshmaniwrhreeghrezgun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723758.8051407-34-193659726780046/AnsiballZ_dnf.py'
Dec 03 01:02:40 compute-0 sudo[70650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:40 compute-0 python3.9[70652]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 03 01:02:41 compute-0 sudo[70650]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:42 compute-0 python3.9[70803]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:02:43 compute-0 python3.9[70954]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
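[annotation] needs-restarting -r (from yum-utils, installed just above) answers via exit status whether the running kernel or core userspace predates what is installed: 0 means no reboot needed, 1 means a reboot is required. The find on /var/lib/openstack/reboot_required/ adds a second, file-based channel for roles to request a reboot. A sketch of the combined check:

    import pathlib
    import subprocess

    # Exit status carries the answer: 0 = up to date, 1 = reboot required.
    proc = subprocess.run(["needs-restarting", "-r"],
                          capture_output=True, text=True)
    reboot_required = proc.returncode == 1

    # Any file dropped in this directory also requests a reboot, which is
    # what the find task above is looking for.
    flag_dir = pathlib.Path("/var/lib/openstack/reboot_required/")
    if flag_dir.is_dir() and any(flag_dir.iterdir()):
        reboot_required = True

    print("reboot required:", reboot_required)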
Dec 03 01:02:44 compute-0 python3.9[71104]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:02:44 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:02:45 compute-0 python3.9[71255]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:02:46 compute-0 sshd-session[70262]: Connection closed by 192.168.122.30 port 54658
Dec 03 01:02:46 compute-0 sshd-session[70259]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:02:46 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 03 01:02:46 compute-0 systemd[1]: session-16.scope: Consumed 6.673s CPU time.
Dec 03 01:02:46 compute-0 systemd-logind[800]: Session 16 logged out. Waiting for processes to exit.
Dec 03 01:02:46 compute-0 systemd-logind[800]: Removed session 16.
Dec 03 01:02:52 compute-0 sshd-session[71281]: Accepted publickey for zuul from 192.168.122.30 port 58488 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:02:52 compute-0 systemd-logind[800]: New session 17 of user zuul.
Dec 03 01:02:52 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 03 01:02:52 compute-0 sshd-session[71281]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:02:53 compute-0 chronyd[58481]: Selected source 167.160.187.179 (pool.ntp.org)
Dec 03 01:02:53 compute-0 python3.9[71434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:02:55 compute-0 sudo[71588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkanxcrhepahcglljobunghahhnbhpcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723774.8162072-50-153027729697845/AnsiballZ_file.py'
Dec 03 01:02:55 compute-0 sudo[71588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:55 compute-0 python3.9[71590]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:02:55 compute-0 sudo[71588]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:56 compute-0 sudo[71740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktkdfctmdpyyvgprvplutzfgeabaxdlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723775.9773345-50-186804527212748/AnsiballZ_file.py'
Dec 03 01:02:56 compute-0 sudo[71740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:56 compute-0 python3.9[71742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:02:56 compute-0 sudo[71740]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:57 compute-0 sudo[71892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dznemxbsdqfrskejqinzyyjufbaofqmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723776.8415914-65-114394127413801/AnsiballZ_stat.py'
Dec 03 01:02:57 compute-0 sudo[71892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:57 compute-0 python3.9[71894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:02:57 compute-0 sudo[71892]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:58 compute-0 sudo[72015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqeufetqygikumtilqkymezytpqysnvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723776.8415914-65-114394127413801/AnsiballZ_copy.py'
Dec 03 01:02:58 compute-0 sudo[72015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:58 compute-0 python3.9[72017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723776.8415914-65-114394127413801/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b60dcba84b3e4fb617a490c112070b73c949335a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:02:58 compute-0 sudo[72015]: pam_unix(sudo:session): session closed for user root
Dec 03 01:02:59 compute-0 sudo[72167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqubowzugdvwdinlxacpzppqrobwqjxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723778.8327813-65-72766569506374/AnsiballZ_stat.py'
Dec 03 01:02:59 compute-0 sudo[72167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:02:59 compute-0 python3.9[72169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:02:59 compute-0 sudo[72167]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:00 compute-0 sudo[72290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmhbcfnrfrycfbntvjuyyjlbblgoezxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723778.8327813-65-72766569506374/AnsiballZ_copy.py'
Dec 03 01:03:00 compute-0 sudo[72290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:00 compute-0 python3.9[72292]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723778.8327813-65-72766569506374/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0d1efd97dce1e1c7f057dca4a97cb1fb49ba3bf4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:00 compute-0 sudo[72290]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:00 compute-0 sudo[72442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkolilogagbzvtykhncrjgcmkhefnuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723780.4202187-65-21495615635438/AnsiballZ_stat.py'
Dec 03 01:03:00 compute-0 sudo[72442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:00 compute-0 python3.9[72444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:01 compute-0 sudo[72442]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:01 compute-0 sudo[72565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imgdnficonfjjzovsbjjnlxqcwwzywcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723780.4202187-65-21495615635438/AnsiballZ_copy.py'
Dec 03 01:03:01 compute-0 sudo[72565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:01 compute-0 python3.9[72567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723780.4202187-65-21495615635438/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=92a51a9bbc603098437ab5af983ff5e779096e63 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:01 compute-0 sudo[72565]: pam_unix(sudo:session): session closed for user root
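[annotation] The telemetry certificates land in a fixed per-service layout that repeats below for telemetry-power-monitoring, libvirt and ovn: /var/lib/openstack/certs/<service>/default/ holds tls.crt, ca.crt and tls.key; the directory is 0755 with SELinux type container_file_t so containers can bind-mount it, while the files themselves are root-owned 0600. A small helper capturing that layout:

    import pathlib

    CERTS = pathlib.Path("/var/lib/openstack/certs")

    def cert_bundle(service, issuer="default"):
        # Per-service bundle directory as laid out by the tasks above.
        d = CERTS / service / issuer
        return {name: d / name for name in ("tls.crt", "ca.crt", "tls.key")}

    for svc in ("telemetry", "telemetry-power-monitoring", "libvirt", "ovn"):
        print(svc, cert_bundle(svc)["tls.crt"])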
Dec 03 01:03:02 compute-0 sudo[72717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjwlqmfzhtbjavvpfealksqajjwaruob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723781.9949727-109-12980603601404/AnsiballZ_file.py'
Dec 03 01:03:02 compute-0 sudo[72717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:02 compute-0 python3.9[72719]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:02 compute-0 sudo[72717]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:03 compute-0 sudo[72869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuuzbokleornvsscslgkwkebxjrcqnjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723782.808261-109-130198010240409/AnsiballZ_file.py'
Dec 03 01:03:03 compute-0 sudo[72869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:03 compute-0 python3.9[72871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:03 compute-0 sudo[72869]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:04 compute-0 sudo[73021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiirqukxljgrpspjrgvznqwscdhxlkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723783.6405196-124-259528894097759/AnsiballZ_stat.py'
Dec 03 01:03:04 compute-0 sudo[73021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:04 compute-0 python3.9[73023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:04 compute-0 sudo[73021]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:04 compute-0 sudo[73144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceoxbeiexjereruqsplwtcysmmifrrfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723783.6405196-124-259528894097759/AnsiballZ_copy.py'
Dec 03 01:03:04 compute-0 sudo[73144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:05 compute-0 python3.9[73146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723783.6405196-124-259528894097759/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d55fd552526a772bf5e3784699784cee65404ed5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:05 compute-0 sudo[73144]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:05 compute-0 sudo[73296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rldbrmrjsxgaqjzcijjskfgnkuinpauh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723785.3038852-124-61680363695397/AnsiballZ_stat.py'
Dec 03 01:03:05 compute-0 sudo[73296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:05 compute-0 python3.9[73298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:05 compute-0 sudo[73296]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:06 compute-0 sudo[73419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcpfrzllbtdfwhislncdjyhdyaymagtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723785.3038852-124-61680363695397/AnsiballZ_copy.py'
Dec 03 01:03:06 compute-0 sudo[73419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:06 compute-0 python3.9[73421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723785.3038852-124-61680363695397/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0d1efd97dce1e1c7f057dca4a97cb1fb49ba3bf4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:06 compute-0 sudo[73419]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:07 compute-0 sudo[73571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiktujoayaoofchtlksybcfmrdrxplkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723786.7627301-124-156314046842763/AnsiballZ_stat.py'
Dec 03 01:03:07 compute-0 sudo[73571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:07 compute-0 python3.9[73573]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:07 compute-0 sudo[73571]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:07 compute-0 sudo[73694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtozdxxhuotcboslxjxbbqkgpxcfutps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723786.7627301-124-156314046842763/AnsiballZ_copy.py'
Dec 03 01:03:07 compute-0 sudo[73694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:08 compute-0 python3.9[73696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723786.7627301-124-156314046842763/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1b9df5c7eafafbcfe088505d80d8a06e3c7b4466 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:08 compute-0 sudo[73694]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:08 compute-0 sudo[73846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clmcxouodzadmhhgzxmsouyqwgwwgyxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723788.4014523-168-205673751362196/AnsiballZ_file.py'
Dec 03 01:03:08 compute-0 sudo[73846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:08 compute-0 python3.9[73848]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:08 compute-0 sudo[73846]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:09 compute-0 sudo[73998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-himfkpfptvzkddhouqsueicjyiimbrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723789.1398125-168-167876830922983/AnsiballZ_file.py'
Dec 03 01:03:09 compute-0 sudo[73998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:09 compute-0 python3.9[74000]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:09 compute-0 sudo[73998]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:10 compute-0 sudo[74150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzlzmbtorexnscuhawtmuabfxshhizc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723789.9022157-183-18529603661504/AnsiballZ_stat.py'
Dec 03 01:03:10 compute-0 sudo[74150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:10 compute-0 python3.9[74152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:10 compute-0 sudo[74150]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:10 compute-0 sudo[74273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljllbxkupaumxgggslsmfgcpqbhibjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723789.9022157-183-18529603661504/AnsiballZ_copy.py'
Dec 03 01:03:10 compute-0 sudo[74273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:11 compute-0 python3.9[74275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723789.9022157-183-18529603661504/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e2647a2010a652e485acabe94eeb39508d65a0bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:11 compute-0 sudo[74273]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:11 compute-0 sudo[74425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmksgoynaefdefdxhnktxiklckhhoofg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723791.3426628-183-214121697856966/AnsiballZ_stat.py'
Dec 03 01:03:11 compute-0 sudo[74425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:11 compute-0 python3.9[74427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:11 compute-0 sudo[74425]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:12 compute-0 sudo[74548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghfcyoumxpojwwytohwtsvyfmlabkvek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723791.3426628-183-214121697856966/AnsiballZ_copy.py'
Dec 03 01:03:12 compute-0 sudo[74548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:12 compute-0 python3.9[74550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723791.3426628-183-214121697856966/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=775fef96b1ca8947276e166dfff5facf815492ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:12 compute-0 sudo[74548]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:13 compute-0 sudo[74700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtblvyshozsymgkwrqxetrapxkfmqfre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723792.9700813-183-135116643218560/AnsiballZ_stat.py'
Dec 03 01:03:13 compute-0 sudo[74700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:13 compute-0 python3.9[74702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:13 compute-0 sudo[74700]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:14 compute-0 sudo[74823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypcxtuicthptuvcitdkdnyfevxpgdgty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723792.9700813-183-135116643218560/AnsiballZ_copy.py'
Dec 03 01:03:14 compute-0 sudo[74823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:14 compute-0 python3.9[74825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723792.9700813-183-135116643218560/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=85623a68291344524b32d6dec8b93c00901cb0e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:14 compute-0 sudo[74823]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:14 compute-0 sudo[74975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbzmvlrkrwgmcgdetsdtwqkaiqzvxpmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723794.5605304-227-78972496534909/AnsiballZ_file.py'
Dec 03 01:03:14 compute-0 sudo[74975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:15 compute-0 python3.9[74977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:15 compute-0 sudo[74975]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:15 compute-0 sudo[75127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uujnjuqajhipegqartccpmfbngcvrpcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723795.3536484-227-155841968598990/AnsiballZ_file.py'
Dec 03 01:03:15 compute-0 sudo[75127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:15 compute-0 python3.9[75129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:15 compute-0 sudo[75127]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:16 compute-0 sudo[75279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstxmouqqqtrgfumhcoeobgpbymkgqxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723796.3131318-242-103449171502661/AnsiballZ_stat.py'
Dec 03 01:03:16 compute-0 sudo[75279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:16 compute-0 python3.9[75281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:16 compute-0 sudo[75279]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:17 compute-0 sudo[75402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vitjimxhvkxdrealyucwydvlbmyxmteo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723796.3131318-242-103449171502661/AnsiballZ_copy.py'
Dec 03 01:03:17 compute-0 sudo[75402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:17 compute-0 python3.9[75404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723796.3131318-242-103449171502661/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=068de18b8da001226dc33069c5839a972e795c9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:17 compute-0 sudo[75402]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:18 compute-0 sudo[75554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwbbohrvvnubjuivbjwxmgqwwsljctq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723797.8065267-242-34745148559102/AnsiballZ_stat.py'
Dec 03 01:03:18 compute-0 sudo[75554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:18 compute-0 python3.9[75556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:18 compute-0 sudo[75554]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:18 compute-0 sudo[75677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtkeebclnwozkpyuczcirblfltwlrvdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723797.8065267-242-34745148559102/AnsiballZ_copy.py'
Dec 03 01:03:18 compute-0 sudo[75677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:19 compute-0 python3.9[75679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723797.8065267-242-34745148559102/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2a64f1b8009feb5d4193c68d35401643b8ae94ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:19 compute-0 sudo[75677]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:19 compute-0 sudo[75829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekhqutiuzlzxoqckeefnyuqkettadecg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723799.3041406-242-255786176711678/AnsiballZ_stat.py'
Dec 03 01:03:19 compute-0 sudo[75829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:19 compute-0 python3.9[75831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:19 compute-0 sudo[75829]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:20 compute-0 sudo[75952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vngywaaiwqvqxqmplpuffjzuydmlipeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723799.3041406-242-255786176711678/AnsiballZ_copy.py'
Dec 03 01:03:20 compute-0 sudo[75952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:20 compute-0 python3.9[75954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723799.3041406-242-255786176711678/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=89de33ad168226810c0097243f44ecd47145b3c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:20 compute-0 sudo[75952]: pam_unix(sudo:session): session closed for user root
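
The three stat/copy pairs above install the per-node OVN TLS material (tls.crt, ca.crt, tls.key) under /var/lib/openstack/certs/ovn/default; each copy is preceded by an ansible.legacy.stat because the copy action plugin checksums the destination before deciding whether to transfer. A minimal playbook sketch that would produce this sequence (task names are invented; paths, modes, and file names are read off the log):

    - name: Ensure the OVN cert directory exists
      ansible.builtin.file:
        path: /var/lib/openstack/certs/ovn/default
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t
      become: true

    - name: Install the node's OVN TLS material
      ansible.builtin.copy:
        src: "compute-0.ctlplane.example.com-{{ item }}"   # staged on the controller
        dest: "/var/lib/openstack/certs/ovn/default/{{ item }}"
        owner: root
        group: root
        mode: "0600"                                       # certs and key stay root-only
      loop:
        - tls.crt
        - ca.crt
        - tls.key
      become: true
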
Dec 03 01:03:21 compute-0 sudo[76104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvrxltudszucwwjmdlkpslrpqbjculpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723801.4749167-302-219807233425144/AnsiballZ_file.py'
Dec 03 01:03:21 compute-0 sudo[76104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:22 compute-0 python3.9[76106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:22 compute-0 sudo[76104]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:22 compute-0 sudo[76256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwezfmcottstclmnkemgqsykyxugcxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723802.3329875-310-51307819843582/AnsiballZ_stat.py'
Dec 03 01:03:22 compute-0 sudo[76256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:22 compute-0 python3.9[76258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:22 compute-0 sudo[76256]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:23 compute-0 sudo[76379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kusmwjfvccvjtqhzhgdfnwoqsqhpxwgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723802.3329875-310-51307819843582/AnsiballZ_copy.py'
Dec 03 01:03:23 compute-0 sudo[76379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:23 compute-0 python3.9[76381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723802.3329875-310-51307819843582/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:23 compute-0 sudo[76379]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:24 compute-0 sudo[76531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secgouwilheotpnrlcgstbxepgyfpchl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723803.9643643-326-270367845316133/AnsiballZ_file.py'
Dec 03 01:03:24 compute-0 sudo[76531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:24 compute-0 python3.9[76533]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:24 compute-0 sudo[76531]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:25 compute-0 sudo[76683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hasvthzfsjyydilobsnebpyzpjjuonsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723805.029667-334-53845862355604/AnsiballZ_stat.py'
Dec 03 01:03:25 compute-0 sudo[76683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:25 compute-0 python3.9[76685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:25 compute-0 sudo[76683]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:26 compute-0 sudo[76806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlnsymbugltmmlwqfdnhoinrqzdkgkzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723805.029667-334-53845862355604/AnsiballZ_copy.py'
Dec 03 01:03:26 compute-0 sudo[76806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:26 compute-0 python3.9[76808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723805.029667-334-53845862355604/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:26 compute-0 sudo[76806]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:27 compute-0 sudo[76958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egsxsotfznehwijrhekkuqvwojsyqbkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723807.0576496-350-251934514100844/AnsiballZ_file.py'
Dec 03 01:03:27 compute-0 sudo[76958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:27 compute-0 python3.9[76960]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:27 compute-0 sudo[76958]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:28 compute-0 sudo[77110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anqitcoatdwhijycqekmoystmvjgxvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723807.8865862-358-121369756982184/AnsiballZ_stat.py'
Dec 03 01:03:28 compute-0 sudo[77110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:28 compute-0 python3.9[77112]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:28 compute-0 sudo[77110]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:29 compute-0 sudo[77233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wooxivzfnuspiyiszzwjxfebhrrsfinn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723807.8865862-358-121369756982184/AnsiballZ_copy.py'
Dec 03 01:03:29 compute-0 sudo[77233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:29 compute-0 python3.9[77235]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723807.8865862-358-121369756982184/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:29 compute-0 sudo[77233]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:29 compute-0 sudo[77385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwvlcxutbcvlyhclwcefflzgcrvcspl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723809.4596853-374-39520623186815/AnsiballZ_file.py'
Dec 03 01:03:29 compute-0 sudo[77385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:30 compute-0 python3.9[77387]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:30 compute-0 sudo[77385]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:30 compute-0 sudo[77537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enohllgrnqucrszdgiuhdbogfqecpfgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723810.2527897-382-39508486280094/AnsiballZ_stat.py'
Dec 03 01:03:30 compute-0 sudo[77537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:30 compute-0 python3.9[77539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:30 compute-0 sudo[77537]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:31 compute-0 sudo[77660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffznjcwjkeacuapjjkrhwanzeawitld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723810.2527897-382-39508486280094/AnsiballZ_copy.py'
Dec 03 01:03:31 compute-0 sudo[77660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:31 compute-0 python3.9[77662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723810.2527897-382-39508486280094/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:31 compute-0 sudo[77660]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:32 compute-0 sudo[77812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygemxxkyqdeydwvrkfrggikhhrbzobkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723811.8222892-398-146963623272924/AnsiballZ_file.py'
Dec 03 01:03:32 compute-0 sudo[77812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:32 compute-0 python3.9[77814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:32 compute-0 sudo[77812]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:33 compute-0 sudo[77964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzbhhybiuikrnvszzhgsaekogehxpxpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723812.6598082-406-145118192353420/AnsiballZ_stat.py'
Dec 03 01:03:33 compute-0 sudo[77964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:33 compute-0 python3.9[77966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:33 compute-0 sudo[77964]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:33 compute-0 sudo[78087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teaxfsmlewgpelexwkenokkbuukszpjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723812.6598082-406-145118192353420/AnsiballZ_copy.py'
Dec 03 01:03:33 compute-0 sudo[78087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:34 compute-0 python3.9[78089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723812.6598082-406-145118192353420/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:34 compute-0 sudo[78087]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:34 compute-0 sudo[78239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmxmimrnzuaoycgnwbzovtxttdiuatlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723814.4449291-422-119636387238555/AnsiballZ_file.py'
Dec 03 01:03:34 compute-0 sudo[78239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:35 compute-0 python3.9[78241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:35 compute-0 sudo[78239]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:35 compute-0 sudo[78391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkcsxhzwhpnyjrsomtrjmqypxqmqfksc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723815.2941115-430-119130246927620/AnsiballZ_stat.py'
Dec 03 01:03:35 compute-0 sudo[78391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:35 compute-0 python3.9[78393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:35 compute-0 sudo[78391]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:36 compute-0 sudo[78514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjhnamrdxuxiphxklyfzdeygkvhcfjbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723815.2941115-430-119130246927620/AnsiballZ_copy.py'
Dec 03 01:03:36 compute-0 sudo[78514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:36 compute-0 python3.9[78516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723815.2941115-430-119130246927620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:36 compute-0 sudo[78514]: pam_unix(sudo:session): session closed for user root
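
The same CA bundle (identical sha1 93ed2f21639fbbc78ab23db012b5cabf31590b1b in every copy above) is fanned out to one directory per consuming service: ovn, telemetry, repo-setup, libvirt, bootstrap, and telemetry-power-monitoring. A sketch of that fan-out under the assumption that the role simply loops over service names (the inline list is read off the log; the loop structure is inferred, not the role's verbatim code):

    - name: Ensure a cacert directory per service
      ansible.builtin.file:
        path: "/var/lib/openstack/cacerts/{{ item }}"
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t
      loop: [ovn, telemetry, repo-setup, libvirt, bootstrap, telemetry-power-monitoring]
      become: true

    - name: Install the combined CA bundle for each service
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
        owner: root
        group: root
        mode: "0644"          # bundle is world-readable, unlike the keys above
      loop: [ovn, telemetry, repo-setup, libvirt, bootstrap, telemetry-power-monitoring]
      become: true
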
Dec 03 01:03:37 compute-0 sshd-session[71284]: Connection closed by 192.168.122.30 port 58488
Dec 03 01:03:37 compute-0 sshd-session[71281]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:03:37 compute-0 systemd-logind[800]: Session 17 logged out. Waiting for processes to exit.
Dec 03 01:03:37 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 03 01:03:37 compute-0 systemd[1]: session-17.scope: Consumed 34.802s CPU time.
Dec 03 01:03:37 compute-0 systemd-logind[800]: Removed session 17.
Dec 03 01:03:43 compute-0 sshd-session[78541]: Accepted publickey for zuul from 192.168.122.30 port 42694 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:03:43 compute-0 systemd-logind[800]: New session 18 of user zuul.
Dec 03 01:03:43 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 03 01:03:43 compute-0 sshd-session[78541]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:03:44 compute-0 python3.9[78694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:03:45 compute-0 sudo[78848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etqmeihubsvwrcsvolzorquvgmmlrbsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723824.8214793-34-173600235500697/AnsiballZ_file.py'
Dec 03 01:03:45 compute-0 sudo[78848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:45 compute-0 python3.9[78850]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:45 compute-0 sudo[78848]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:46 compute-0 sudo[79000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqnitdqtzwnyzcgmuvsoakszjqpqzboi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723825.7382388-34-161519636190196/AnsiballZ_file.py'
Dec 03 01:03:46 compute-0 sudo[79000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:46 compute-0 python3.9[79002]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:03:46 compute-0 sudo[79000]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:47 compute-0 python3.9[79152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:03:47 compute-0 sudo[79302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbnqvbnrfkusidnfxxhvmnuaghxgiekd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723827.426242-57-224380880873264/AnsiballZ_seboolean.py'
Dec 03 01:03:47 compute-0 sudo[79302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:48 compute-0 python3.9[79304]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 03 01:03:49 compute-0 sudo[79302]: pam_unix(sudo:session): session closed for user root
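
The persistent seboolean change above lets sandboxed virt processes open netlink sockets; the avc: op=load_policy event logged a moment later is consistent with the policy reload a persistent boolean change triggers. The equivalent task, with the parameters exactly as invoked in the log:

    - name: Allow sandboxed virt processes to use netlink sockets
      ansible.posix.seboolean:
        name: virt_sandbox_use_netlink
        state: true
        persistent: true   # written into policy, so it survives reboots
      become: true
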
Dec 03 01:03:50 compute-0 sudo[79458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewboxbfkbxcpiylrocolvleytpctngru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723829.6823385-67-169880334514924/AnsiballZ_setup.py'
Dec 03 01:03:50 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 03 01:03:50 compute-0 sudo[79458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:50 compute-0 python3.9[79460]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:03:50 compute-0 sudo[79458]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:51 compute-0 sudo[79542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaxjwwqaztsnhulaiiyuhhhoktwdacup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723829.6823385-67-169880334514924/AnsiballZ_dnf.py'
Dec 03 01:03:51 compute-0 sudo[79542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:51 compute-0 python3.9[79544]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:03:52 compute-0 sudo[79542]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:53 compute-0 sudo[79695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrwsluwouyljwuladecevolmgsyriepr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723832.915227-79-135200776575283/AnsiballZ_systemd.py'
Dec 03 01:03:53 compute-0 sudo[79695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:53 compute-0 python3.9[79697]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:03:54 compute-0 sudo[79695]: pam_unix(sudo:session): session closed for user root
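
Open vSwitch is installed and brought up before any OVN configuration is attempted. The two module invocations above correspond to:

    - name: Install Open vSwitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present
      become: true

    - name: Enable and start the openvswitch service
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        state: started
      become: true
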
Dec 03 01:03:54 compute-0 sudo[79850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dntlialgodzsolzxiwqrzoapclqrvlca ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764723834.309079-87-11636985233827/AnsiballZ_edpm_nftables_snippet.py'
Dec 03 01:03:54 compute-0 sudo[79850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:55 compute-0 python3[79852]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 03 01:03:55 compute-0 sudo[79850]: pam_unix(sudo:session): session closed for user root
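
osp.edpm.edpm_nftables_snippet persists the rule list above verbatim as /var/lib/edpm-config/firewall/ovn.yaml; the edpm_nftables_from_files task at 01:04:02 then gathers every snippet in that directory before the edpm-*.nft files are rendered. Reconstructed from the logged content= parameter, the head of that file looks like this (rules 119 and 121 follow the same two shapes):

    # /var/lib/edpm-config/firewall/ovn.yaml
    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw        # NOTRACK has to live in the raw table,
        chain: OUTPUT     # which is hooked before conntrack sees the packet
        jump: NOTRACK
        action: append
        state: []
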
Dec 03 01:03:55 compute-0 sudo[80002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnqtkkuwekebvygezzjzaqwvgyvynpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723835.408609-96-120109556911673/AnsiballZ_file.py'
Dec 03 01:03:55 compute-0 sudo[80002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:55 compute-0 python3.9[80004]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:55 compute-0 sudo[80002]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:56 compute-0 sudo[80154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jocuhvrvghdhgywjphrxfzpdgoazjmxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723836.20778-104-118562809747808/AnsiballZ_stat.py'
Dec 03 01:03:56 compute-0 sudo[80154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:56 compute-0 python3.9[80156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:56 compute-0 sudo[80154]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:57 compute-0 sudo[80232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxhdvalkemaeqvjktewwcqkqlmznlvyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723836.20778-104-118562809747808/AnsiballZ_file.py'
Dec 03 01:03:57 compute-0 sudo[80232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:57 compute-0 python3.9[80234]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:57 compute-0 sudo[80232]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:58 compute-0 sudo[80385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etdbqfszesvbyoipbqxjxmnrnvzhgoxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723837.8845468-116-112272934747484/AnsiballZ_stat.py'
Dec 03 01:03:58 compute-0 sudo[80385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:58 compute-0 python3.9[80387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:58 compute-0 sudo[80385]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:58 compute-0 sudo[80463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxfhxwkwhhsndvgxwnodcggsfqbyjdvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723837.8845468-116-112272934747484/AnsiballZ_file.py'
Dec 03 01:03:58 compute-0 sudo[80463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:59 compute-0 python3.9[80465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jbptfvyw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:03:59 compute-0 sudo[80463]: pam_unix(sudo:session): session closed for user root
Dec 03 01:03:59 compute-0 sudo[80615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izdhfubwtkzuergyxhoitrlrzsmbsjut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723839.236907-128-8941134588268/AnsiballZ_stat.py'
Dec 03 01:03:59 compute-0 sudo[80615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:03:59 compute-0 python3.9[80617]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:03:59 compute-0 sudo[80615]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:00 compute-0 sudo[80693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvymgxhfirwiyvqdvlatcyeuddjsmrqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723839.236907-128-8941134588268/AnsiballZ_file.py'
Dec 03 01:04:00 compute-0 sudo[80693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:00 compute-0 python3.9[80695]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:00 compute-0 sudo[80693]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:01 compute-0 sudo[80845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rknqieyywmvswfybjhqvtvhbaegvmsmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723840.6709921-141-146615067819420/AnsiballZ_command.py'
Dec 03 01:04:01 compute-0 sudo[80845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:01 compute-0 python3.9[80847]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:04:01 compute-0 sudo[80845]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:02 compute-0 sudo[80998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiwftocpsbzjisrqgpefpondsexockoz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764723841.7019343-149-257563594995530/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:04:02 compute-0 sudo[80998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:02 compute-0 python3[81000]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:04:02 compute-0 sudo[80998]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:03 compute-0 sudo[81150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzlpmjqogbzcpfyunorjprqionuovhfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723842.676047-157-64282342555869/AnsiballZ_stat.py'
Dec 03 01:04:03 compute-0 sudo[81150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:03 compute-0 python3.9[81152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:03 compute-0 sudo[81150]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:03 compute-0 sudo[81275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erwynsbpuhdrtjbwbfpsauzurivhgmdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723842.676047-157-64282342555869/AnsiballZ_copy.py'
Dec 03 01:04:03 compute-0 sudo[81275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:04 compute-0 python3.9[81277]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723842.676047-157-64282342555869/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:04 compute-0 sudo[81275]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:04 compute-0 sudo[81427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjnfxhxovmtommzdrqkfhxhwjzciydrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723844.3556914-172-123434415528750/AnsiballZ_stat.py'
Dec 03 01:04:04 compute-0 sudo[81427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:05 compute-0 python3.9[81429]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:05 compute-0 sudo[81427]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:05 compute-0 sudo[81552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxvrqxhnzskshkkomdlhonndygyecmct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723844.3556914-172-123434415528750/AnsiballZ_copy.py'
Dec 03 01:04:05 compute-0 sudo[81552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:05 compute-0 python3.9[81554]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723844.3556914-172-123434415528750/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:05 compute-0 sudo[81552]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:06 compute-0 sudo[81704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obhvoioqloscguktaupnnigedttjauza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723845.896886-187-161120702790852/AnsiballZ_stat.py'
Dec 03 01:04:06 compute-0 sudo[81704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:06 compute-0 python3.9[81706]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:06 compute-0 sudo[81704]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:07 compute-0 sudo[81829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drlvqleczkpzghkqmewghkinphwvwcwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723845.896886-187-161120702790852/AnsiballZ_copy.py'
Dec 03 01:04:07 compute-0 sudo[81829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:07 compute-0 python3.9[81831]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723845.896886-187-161120702790852/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:07 compute-0 sudo[81829]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:07 compute-0 sudo[81981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsouwtjrewscbkiakfkmgavuxyyjcolg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723847.502152-202-261790757201180/AnsiballZ_stat.py'
Dec 03 01:04:07 compute-0 sudo[81981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:08 compute-0 python3.9[81983]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:08 compute-0 sudo[81981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:08 compute-0 sudo[82106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrjkwgvptsihigqomhprdywjetsmkdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723847.502152-202-261790757201180/AnsiballZ_copy.py'
Dec 03 01:04:08 compute-0 sudo[82106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:08 compute-0 python3.9[82108]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723847.502152-202-261790757201180/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:08 compute-0 sudo[82106]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:09 compute-0 sudo[82258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzpramxrgtwfdnazakichtnprjeldsgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723849.0628862-217-242799789866787/AnsiballZ_stat.py'
Dec 03 01:04:09 compute-0 sudo[82258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:09 compute-0 python3.9[82260]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:09 compute-0 sudo[82258]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:10 compute-0 sudo[82383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thyeczctxaxiwcapbsnwgbipdpptbcgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723849.0628862-217-242799789866787/AnsiballZ_copy.py'
Dec 03 01:04:10 compute-0 sudo[82383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:10 compute-0 python3.9[82385]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723849.0628862-217-242799789866787/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:10 compute-0 sudo[82383]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:11 compute-0 sudo[82535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udnnmmjbenkixvojczrffrcvyqckjdmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723850.643369-232-180312424902719/AnsiballZ_file.py'
Dec 03 01:04:11 compute-0 sudo[82535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:11 compute-0 python3.9[82537]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:11 compute-0 sudo[82535]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:11 compute-0 sudo[82687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxlsxeutytjtbutxgobuoysemhtbefky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723851.4903822-240-271387697383699/AnsiballZ_command.py'
Dec 03 01:04:11 compute-0 sudo[82687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:12 compute-0 python3.9[82689]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:04:12 compute-0 sudo[82687]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:12 compute-0 sudo[82842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zojrarjblkpdfdpsiknsvjmfxznnknbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723852.392066-248-209614686934210/AnsiballZ_blockinfile.py'
Dec 03 01:04:12 compute-0 sudo[82842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:13 compute-0 python3.9[82844]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:13 compute-0 sudo[82842]: pam_unix(sudo:session): session closed for user root
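
The blockinfile task wires the generated EDPM files into the system-wide nftables configuration, and its validate argument re-parses the edited file before it is moved into place, so a broken include can never land in /etc/sysconfig/nftables.conf. The equivalent task, with all parameters taken from the log:

    - name: Include the EDPM chains in the system nftables config
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s          # reject the edit if the result fails to parse
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"
      become: true
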
Dec 03 01:04:13 compute-0 sudo[82994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmczsrymzggspjhvslcomhsgcznufnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723853.4691987-257-262762723738013/AnsiballZ_command.py'
Dec 03 01:04:13 compute-0 sudo[82994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:14 compute-0 python3.9[82996]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:04:14 compute-0 sudo[82994]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:14 compute-0 sudo[83147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubumbokowfzzprlgsqwpudtkbgzuywvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723854.287078-265-63008059862706/AnsiballZ_stat.py'
Dec 03 01:04:14 compute-0 sudo[83147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:14 compute-0 python3.9[83149]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:04:14 compute-0 sudo[83147]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:15 compute-0 sudo[83301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiysmufyhikhrowfyfdytcwioafbgxdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723855.125389-273-23207410999923/AnsiballZ_command.py'
Dec 03 01:04:15 compute-0 sudo[83301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:15 compute-0 python3.9[83303]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:04:15 compute-0 sudo[83301]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:16 compute-0 sudo[83456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwecorzgycbbxjfsovpedcsdzfpcfbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723856.0007155-281-153683114919627/AnsiballZ_file.py'
Dec 03 01:04:16 compute-0 sudo[83456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:16 compute-0 python3.9[83458]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:16 compute-0 sudo[83456]: pam_unix(sudo:session): session closed for user root
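
The marker file /etc/nftables/edpm-rules.nft.changed, touched at 01:04:11 when edpm-rules.nft was rewritten, checked at 01:04:14, and removed at 01:04:16, implements apply-on-change: the chains file is loaded unconditionally, but the flush-and-reload pipeline only runs while the marker exists. A sketch of that flow, with an invented register name and `when` conditions inferred from the ordering in the log:

    - name: Load the EDPM chains (safe to repeat; adding an existing chain is a no-op)
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft
      become: true

    - name: Check whether the ruleset changed during this run
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed        # invented name
      become: true

    - name: Flush and reload the EDPM rules only when they changed
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists
      become: true

    - name: Drop the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
      when: edpm_rules_changed.stat.exists
      become: true
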
Dec 03 01:04:17 compute-0 python3.9[83608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:04:18 compute-0 sudo[83759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfowwpeoxcwbrhmrhrsxjddmbmbphxpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723858.6021879-321-172154440154384/AnsiballZ_command.py'
Dec 03 01:04:18 compute-0 sudo[83759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:19 compute-0 python3.9[83761]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:04:19 compute-0 ovs-vsctl[83762]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 03 01:04:19 compute-0 sudo[83759]: pam_unix(sudo:session): session closed for user root
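
The external_ids written above are the chassis registration knobs read by ovn-controller: ovn-remote points at the southbound database over SSL, ovn-encap-ip and ovn-encap-type define the geneve tunnel endpoint, and ovn-bridge-mappings ties the datacentre physical network to br-ex. A quick read-back to confirm what was stored (task and register names are illustrative):

    - name: Read back the chassis registration parameters
      ansible.builtin.command: ovs-vsctl get Open_vSwitch . external_ids
      register: ovs_external_ids          # invented name
      changed_when: false                 # pure query, never reports a change
      become: true

    - name: Show the stored external_ids
      ansible.builtin.debug:
        var: ovs_external_ids.stdout
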
Dec 03 01:04:19 compute-0 sudo[83912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgrobmlvedxfuemkccmijmnungjsdppx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723859.5096672-330-260445838491549/AnsiballZ_command.py'
Dec 03 01:04:19 compute-0 sudo[83912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:20 compute-0 python3.9[83914]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:04:20 compute-0 sudo[83912]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:20 compute-0 sudo[84067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaienblxgdtwfqetatmtuqwdmqaahrua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723860.3657413-338-36729210191853/AnsiballZ_command.py'
Dec 03 01:04:20 compute-0 sudo[84067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:20 compute-0 python3.9[84069]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:04:20 compute-0 ovs-vsctl[84070]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 03 01:04:21 compute-0 sudo[84067]: pam_unix(sudo:session): session closed for user root
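
The two commands at 01:04:20 form a guard-then-create pair: the `ovs-vsctl show | grep -q "Manager"` probe returns non-zero when no Manager record exists yet, and only then is the OVSDB exposed on the loopback passive-TCP socket ptcp:6640 (a listener local agents such as the OVN metadata agent can attach to). A sketch of that idempotency pattern, with invented register names and a `when` condition inferred from the ordering:

    - name: Check whether an OVSDB manager is already configured
      ansible.builtin.shell: |
        set -o pipefail
        ovs-vsctl show | grep -q "Manager"
      register: ovs_manager_check         # invented name
      failed_when: false                  # a missing Manager is expected on first run
      changed_when: false
      become: true

    - name: Expose the local OVSDB on ptcp:6640 (loopback only)
      ansible.builtin.command:
        argv:                             # argv form keeps the embedded quotes intact
          - ovs-vsctl
          - --timeout=5
          - --id=@manager
          - --
          - create
          - Manager
          - target="ptcp:6640:127.0.0.1"
          - --
          - add
          - Open_vSwitch
          - .
          - manager_options
          - "@manager"
      when: ovs_manager_check.rc != 0
      become: true
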
Dec 03 01:04:21 compute-0 python3.9[84220]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:04:22 compute-0 sudo[84372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwvabvnpuqceqdlszrmvrcqcukndbdla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723862.2294774-355-3914276710398/AnsiballZ_file.py'
Dec 03 01:04:22 compute-0 sudo[84372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:22 compute-0 python3.9[84374]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:04:22 compute-0 sudo[84372]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:23 compute-0 sudo[84524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaoeqgogtkolhyzmavezhsrkpkwxweua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723863.0212133-363-87185181569679/AnsiballZ_stat.py'
Dec 03 01:04:23 compute-0 sudo[84524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:23 compute-0 python3.9[84526]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:24 compute-0 sudo[84524]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:24 compute-0 sudo[84602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqgpbwvazzfzqiqgfbtxlezidnhbsgnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723863.0212133-363-87185181569679/AnsiballZ_file.py'
Dec 03 01:04:24 compute-0 sudo[84602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:25 compute-0 python3.9[84604]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:04:25 compute-0 sudo[84602]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:25 compute-0 sudo[84754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prwxiktzkocneinjpmdanvdmprhaklxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723865.3300483-363-281039115526015/AnsiballZ_stat.py'
Dec 03 01:04:25 compute-0 sudo[84754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:25 compute-0 python3.9[84756]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:25 compute-0 sudo[84754]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:26 compute-0 sudo[84832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuyyxeezaqfpynrsgupvvtiqunrporlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723865.3300483-363-281039115526015/AnsiballZ_file.py'
Dec 03 01:04:26 compute-0 sudo[84832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:26 compute-0 python3.9[84834]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:04:26 compute-0 sudo[84832]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:27 compute-0 sudo[84984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npiqvrcsupuqyoprnwwwwqquzqfxtnup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723866.692498-386-225674899851630/AnsiballZ_file.py'
Dec 03 01:04:27 compute-0 sudo[84984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:27 compute-0 python3.9[84986]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:27 compute-0 sudo[84984]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:27 compute-0 sudo[85136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kujyzkvevkznwmhbmelskfzrxxcbbbja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723867.5312612-394-235816224476718/AnsiballZ_stat.py'
Dec 03 01:04:27 compute-0 sudo[85136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:28 compute-0 python3.9[85138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:28 compute-0 sudo[85136]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:28 compute-0 sudo[85214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtvfpjknoihsriucedulkpfwsnvfjoli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723867.5312612-394-235816224476718/AnsiballZ_file.py'
Dec 03 01:04:28 compute-0 sudo[85214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:28 compute-0 python3.9[85216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:28 compute-0 sudo[85214]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:29 compute-0 sudo[85366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmyxmlxtuoegznmnqpdneprbycsyjzlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723868.9424891-406-231041431095572/AnsiballZ_stat.py'
Dec 03 01:04:29 compute-0 sudo[85366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:29 compute-0 python3.9[85368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:29 compute-0 sudo[85366]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:29 compute-0 sudo[85444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvjmqvpptzszhhggjxvpvkxdcsxymbpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723868.9424891-406-231041431095572/AnsiballZ_file.py'
Dec 03 01:04:29 compute-0 sudo[85444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:30 compute-0 python3.9[85446]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:30 compute-0 sudo[85444]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:30 compute-0 sudo[85596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uirlnozirbxydiasgfkppodnqaupjqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723870.2588725-418-192889717499624/AnsiballZ_systemd.py'
Dec 03 01:04:30 compute-0 sudo[85596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:30 compute-0 python3.9[85598]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:04:30 compute-0 systemd[1]: Reloading.
Dec 03 01:04:31 compute-0 systemd-rc-local-generator[85627]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:04:31 compute-0 systemd-sysv-generator[85631]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:04:31 compute-0 sudo[85596]: pam_unix(sudo:session): session closed for user root
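Taken together, the unit file, the 91-*.preset file, and the systemd task above install edpm-container-shutdown, enable it, and start it after a daemon reload. A quick verification sketch (the preset file contents are assumed to follow the standard one-directive-per-line systemd.preset format):

    systemctl is-enabled edpm-container-shutdown.service   # expect: enabled
    # Presets set the enable/disable default applied by 'systemctl preset'
    cat /etc/systemd/system-preset/91-edpm-container-shutdown.preset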
Dec 03 01:04:31 compute-0 sudo[85786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvekaznfsrlsxologcfguenerwznftod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723871.5964186-426-178993361855362/AnsiballZ_stat.py'
Dec 03 01:04:31 compute-0 sudo[85786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:32 compute-0 python3.9[85788]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:32 compute-0 sudo[85786]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:32 compute-0 sudo[85864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebzmpkfzypiycxvsudwbinbwvydxacha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723871.5964186-426-178993361855362/AnsiballZ_file.py'
Dec 03 01:04:32 compute-0 sudo[85864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:32 compute-0 python3.9[85866]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:32 compute-0 sudo[85864]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:33 compute-0 sudo[86016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqjophinbnwekralcdbzhvtdhzdfdncd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723872.9880037-438-193934799760454/AnsiballZ_stat.py'
Dec 03 01:04:33 compute-0 sudo[86016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:33 compute-0 python3.9[86018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:33 compute-0 sudo[86016]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:33 compute-0 sudo[86094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rccmgomnakksoavkeyhvhwgagosuyomx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723872.9880037-438-193934799760454/AnsiballZ_file.py'
Dec 03 01:04:33 compute-0 sudo[86094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:34 compute-0 python3.9[86096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:34 compute-0 sudo[86094]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:34 compute-0 sudo[86246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufmgutodwxbxszmnrflnwumjyjaplyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723874.3988996-450-194245000792686/AnsiballZ_systemd.py'
Dec 03 01:04:34 compute-0 sudo[86246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:35 compute-0 python3.9[86248]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:04:35 compute-0 systemd[1]: Reloading.
Dec 03 01:04:35 compute-0 systemd-rc-local-generator[86276]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:04:35 compute-0 systemd-sysv-generator[86280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:04:35 compute-0 systemd[1]: Starting Create netns directory...
Dec 03 01:04:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 03 01:04:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 03 01:04:35 compute-0 systemd[1]: Finished Create netns directory.
Dec 03 01:04:35 compute-0 sudo[86246]: pam_unix(sudo:session): session closed for user root
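netns-placeholder runs once and exits ("Create netns directory", then "Deactivated successfully"), so its job is to leave state behind rather than keep a process alive; the run-netns-placeholder.mount line suggests a placeholder network namespace that keeps /run/netns mounted for later containers. A verification sketch under that assumption:

    # A placeholder namespace (name assumed) should be listed here
    ip netns list
    findmnt /run/netns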
Dec 03 01:04:36 compute-0 sudo[86439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlmkpoaateqtcjpxylgtjukuyfkzuubg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723875.8889766-460-98941821960237/AnsiballZ_file.py'
Dec 03 01:04:36 compute-0 sudo[86439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:36 compute-0 python3.9[86441]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:04:36 compute-0 sudo[86439]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:37 compute-0 sudo[86591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lruhdtzswowecwwkoudxijndslatpgwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723876.761959-468-174589115804978/AnsiballZ_stat.py'
Dec 03 01:04:37 compute-0 sudo[86591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:37 compute-0 python3.9[86593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:37 compute-0 sudo[86591]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:37 compute-0 sudo[86714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eykvzecilqfbjduhvgwntgfvglqaawdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723876.761959-468-174589115804978/AnsiballZ_copy.py'
Dec 03 01:04:37 compute-0 sudo[86714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:38 compute-0 python3.9[86716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723876.761959-468-174589115804978/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:04:38 compute-0 sudo[86714]: pam_unix(sudo:session): session closed for user root
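The copy above installs the ovn_controller healthcheck script; later in this log the container mounts that directory read-only at /openstack and podman runs /openstack/healthcheck as its health test. Once the container is up, the same script can be exercised by hand; a sketch:

    # Run the mounted health test exactly as podman's healthcheck does
    podman exec ovn_controller /openstack/healthcheck; echo "exit=$?"   # 0 = healthy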
Dec 03 01:04:38 compute-0 sudo[86866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvxnipfiwmyzhfcdbzclipxvmqyiigpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723878.4859889-485-243753995130022/AnsiballZ_file.py'
Dec 03 01:04:38 compute-0 sudo[86866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:39 compute-0 python3.9[86868]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:04:39 compute-0 sudo[86866]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:39 compute-0 sudo[87018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxvwpmacmdjfpxabdzlxhebanoaxcgtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723879.3958988-493-96981597582269/AnsiballZ_stat.py'
Dec 03 01:04:39 compute-0 sudo[87018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:39 compute-0 python3.9[87020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:04:40 compute-0 sudo[87018]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:40 compute-0 sudo[87141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwyzsenwhfrulshyaoeyhaxslvlnreqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723879.3958988-493-96981597582269/AnsiballZ_copy.py'
Dec 03 01:04:40 compute-0 sudo[87141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:40 compute-0 python3.9[87143]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723879.3958988-493-96981597582269/.source.json _original_basename=.ih8im7m8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:40 compute-0 sudo[87141]: pam_unix(sudo:session): session closed for user root
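ovn_controller.json is a kolla config file: at container start it is mounted at /var/lib/kolla/config_files/config.json, and kolla_set_configs reads it to copy files into place and write the command to exec into /run_command (both steps are visible in the container log further down). Its contents are not logged here; an inspection sketch:

    # Pretty-print the deployed kolla config (mode 0600, so run as root)
    python3 -m json.tool /var/lib/kolla/config_files/ovn_controller.json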
Dec 03 01:04:41 compute-0 sudo[87293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzyycyxxnwwvjnywpqsaiohskbcymdew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723880.9567177-508-24000378801450/AnsiballZ_file.py'
Dec 03 01:04:41 compute-0 sudo[87293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:41 compute-0 python3.9[87295]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:41 compute-0 sudo[87293]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:42 compute-0 sudo[87445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsjtleaofsdgsshxpwzhpjusomyxjsdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723881.7812-516-194654684516414/AnsiballZ_stat.py'
Dec 03 01:04:42 compute-0 sudo[87445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:42 compute-0 sudo[87445]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:42 compute-0 sudo[87568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadtfxtigzwwtlyuzhwtintebjzmgqog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723881.7812-516-194654684516414/AnsiballZ_copy.py'
Dec 03 01:04:42 compute-0 sudo[87568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:43 compute-0 sudo[87568]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:43 compute-0 sudo[87720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyilwvylbfnjbmcpgvfimvtuytmnzrld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723883.4783928-533-75463284571570/AnsiballZ_container_config_data.py'
Dec 03 01:04:43 compute-0 sudo[87720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:44 compute-0 python3.9[87722]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 03 01:04:44 compute-0 sudo[87720]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:45 compute-0 sudo[87872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uecgjblytueznuxpjvyptvekguoyxtay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723884.4883404-542-194579666897720/AnsiballZ_container_config_hash.py'
Dec 03 01:04:45 compute-0 sudo[87872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:45 compute-0 python3.9[87874]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:04:45 compute-0 sudo[87872]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:46 compute-0 sudo[88024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpelscmjkalhtgtzhrcwhcsnlvpcvvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723885.5624642-551-170904041009913/AnsiballZ_podman_container_info.py'
Dec 03 01:04:46 compute-0 sudo[88024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:46 compute-0 python3.9[88026]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 03 01:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 03 01:04:46 compute-0 sudo[88024]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:47 compute-0 sudo[88187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itawkejueytaiedogdcxkgxwjlfojzxj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764723886.9231381-564-251560619854095/AnsiballZ_edpm_container_manage.py'
Dec 03 01:04:47 compute-0 sudo[88187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:47 compute-0 python3[88189]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 03 01:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3174505029-lower\x2dmapped.mount: Deactivated successfully.
Dec 03 01:04:53 compute-0 podman[88202]: 2025-12-03 01:04:53.632419469 +0000 UTC m=+5.757131934 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 03 01:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 03 01:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 03 01:04:53 compute-0 podman[88321]: 2025-12-03 01:04:53.810688522 +0000 UTC m=+0.053773895 container create 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 01:04:53 compute-0 podman[88321]: 2025-12-03 01:04:53.783162004 +0000 UTC m=+0.026247397 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 03 01:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 03 01:04:53 compute-0 python3[88189]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 03 01:04:53 compute-0 sudo[88187]: pam_unix(sudo:session): session closed for user root
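The PODMAN-CONTAINER-DEBUG line above is the exact podman create call edpm_container_manage issued: host networking, privileged, journald logging, and the config_data blob replicated as a label. At this point the container exists but has not been started; systemd starts it further down. A follow-up sketch:

    # The container is created but not yet running
    podman ps -a --filter name=ovn_controller
    # Labels carry the edpm bookkeeping seen in the log
    podman inspect ovn_controller --format '{{ index .Config.Labels "config_id" }}'   # ovn_controller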
Dec 03 01:04:54 compute-0 sudo[88510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzxpbrrmzprtpintwzcxnxvbatpmcfmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723894.2171385-572-138294441887004/AnsiballZ_stat.py'
Dec 03 01:04:54 compute-0 sudo[88510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:54 compute-0 python3.9[88512]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:04:54 compute-0 sudo[88510]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:55 compute-0 sudo[88664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euxqneciwigckzuzdickpyjhitrouxlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723895.1540334-581-39785551867746/AnsiballZ_file.py'
Dec 03 01:04:55 compute-0 sudo[88664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:55 compute-0 python3.9[88666]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:55 compute-0 sudo[88664]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:56 compute-0 sudo[88740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdbfisdbkpbcgcmhygagvnfhsxjuoqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723895.1540334-581-39785551867746/AnsiballZ_stat.py'
Dec 03 01:04:56 compute-0 sudo[88740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:56 compute-0 python3.9[88742]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:04:56 compute-0 sudo[88740]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:56 compute-0 sudo[88891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdoewynisqqzbhegvxbmatwnwyatgpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723896.3928676-581-121462620523232/AnsiballZ_copy.py'
Dec 03 01:04:56 compute-0 sudo[88891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:57 compute-0 python3.9[88893]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764723896.3928676-581-121462620523232/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:04:57 compute-0 sudo[88891]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:57 compute-0 sudo[88967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyqiulgnkpbfabsoqyzlbztflgjxiygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723896.3928676-581-121462620523232/AnsiballZ_systemd.py'
Dec 03 01:04:57 compute-0 sudo[88967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:57 compute-0 python3.9[88969]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:04:57 compute-0 systemd[1]: Reloading.
Dec 03 01:04:58 compute-0 systemd-rc-local-generator[88996]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:04:58 compute-0 systemd-sysv-generator[89001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:04:58 compute-0 sudo[88967]: pam_unix(sudo:session): session closed for user root
Dec 03 01:04:58 compute-0 sudo[89078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbocngbdejkeieqxicpvwxbsuwtxmfrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723896.3928676-581-121462620523232/AnsiballZ_systemd.py'
Dec 03 01:04:58 compute-0 sudo[89078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:04:58 compute-0 python3.9[89080]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:04:59 compute-0 systemd[1]: Reloading.
Dec 03 01:05:00 compute-0 systemd-rc-local-generator[89110]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:05:00 compute-0 systemd-sysv-generator[89113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:05:00 compute-0 systemd[1]: Starting ovn_controller container...
Dec 03 01:05:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 03 01:05:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808bcb6384b23452bcd1d6368dafee09d321969a57d81b4723ebe2407c4e8f83/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 03 01:05:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.
Dec 03 01:05:00 compute-0 podman[89121]: 2025-12-03 01:05:00.431117861 +0000 UTC m=+0.182455393 container init 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:05:00 compute-0 ovn_controller[89134]: + sudo -E kolla_set_configs
Dec 03 01:05:00 compute-0 podman[89121]: 2025-12-03 01:05:00.479636378 +0000 UTC m=+0.230973890 container start 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:05:00 compute-0 edpm-start-podman-container[89121]: ovn_controller
Dec 03 01:05:00 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 03 01:05:00 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 03 01:05:00 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 03 01:05:00 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 03 01:05:00 compute-0 edpm-start-podman-container[89120]: Creating additional drop-in dependency for "ovn_controller" (926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f)
Dec 03 01:05:00 compute-0 systemd[89174]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 03 01:05:00 compute-0 podman[89141]: 2025-12-03 01:05:00.611297124 +0000 UTC m=+0.118415681 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec 03 01:05:00 compute-0 systemd[1]: 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f-77c277b09b6e8051.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:05:00 compute-0 systemd[1]: 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f-77c277b09b6e8051.service: Failed with result 'exit-code'.
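The failed transient unit above is the first healthcheck run: the preceding health_status line shows the container still in 'starting' with health_failing_streak=1, so the exit status of 1 records a not-yet-healthy container rather than a crash; ovn-controller only finishes connecting a few lines later. A manual re-run sketch:

    # Re-run the health test by hand once the service has settled
    podman healthcheck run ovn_controller; echo "exit=$?"   # 0 once healthy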
Dec 03 01:05:00 compute-0 systemd[1]: Reloading.
Dec 03 01:05:00 compute-0 systemd-rc-local-generator[89221]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:05:00 compute-0 systemd-sysv-generator[89224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:05:00 compute-0 systemd[89174]: Queued start job for default target Main User Target.
Dec 03 01:05:00 compute-0 systemd[89174]: Created slice User Application Slice.
Dec 03 01:05:00 compute-0 systemd[89174]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 03 01:05:00 compute-0 systemd[89174]: Started Daily Cleanup of User's Temporary Directories.
Dec 03 01:05:00 compute-0 systemd[89174]: Reached target Paths.
Dec 03 01:05:00 compute-0 systemd[89174]: Reached target Timers.
Dec 03 01:05:00 compute-0 systemd[89174]: Starting D-Bus User Message Bus Socket...
Dec 03 01:05:00 compute-0 systemd[89174]: Starting Create User's Volatile Files and Directories...
Dec 03 01:05:00 compute-0 systemd[89174]: Listening on D-Bus User Message Bus Socket.
Dec 03 01:05:00 compute-0 systemd[89174]: Reached target Sockets.
Dec 03 01:05:00 compute-0 systemd[89174]: Finished Create User's Volatile Files and Directories.
Dec 03 01:05:00 compute-0 systemd[89174]: Reached target Basic System.
Dec 03 01:05:00 compute-0 systemd[89174]: Reached target Main User Target.
Dec 03 01:05:00 compute-0 systemd[89174]: Startup finished in 183ms.
Dec 03 01:05:00 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 03 01:05:00 compute-0 systemd[1]: Started ovn_controller container.
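From here on the container is under systemd management: edpm_ovn_controller.service wraps the podman container, and edpm-start-podman-container added a drop-in dependency for it a few lines up. Because the container was created with --log-driver journald, its output lands in the journal under the container name. A status sketch:

    systemctl status edpm_ovn_controller.service --no-pager
    # Tail the container's own log stream
    journalctl CONTAINER_NAME=ovn_controller -n 50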
Dec 03 01:05:00 compute-0 systemd[1]: Started Session c1 of User root.
Dec 03 01:05:00 compute-0 sudo[89078]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:00 compute-0 ovn_controller[89134]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:05:00 compute-0 ovn_controller[89134]: INFO:__main__:Validating config file
Dec 03 01:05:00 compute-0 ovn_controller[89134]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:05:00 compute-0 ovn_controller[89134]: INFO:__main__:Writing out command to execute
Dec 03 01:05:00 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 03 01:05:00 compute-0 ovn_controller[89134]: ++ cat /run_command
Dec 03 01:05:00 compute-0 ovn_controller[89134]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 03 01:05:00 compute-0 ovn_controller[89134]: + ARGS=
Dec 03 01:05:00 compute-0 ovn_controller[89134]: + sudo kolla_copy_cacerts
Dec 03 01:05:01 compute-0 systemd[1]: Started Session c2 of User root.
Dec 03 01:05:01 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 03 01:05:01 compute-0 ovn_controller[89134]: + [[ ! -n '' ]]
Dec 03 01:05:01 compute-0 ovn_controller[89134]: + . kolla_extend_start
Dec 03 01:05:01 compute-0 ovn_controller[89134]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 03 01:05:01 compute-0 ovn_controller[89134]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 03 01:05:01 compute-0 ovn_controller[89134]: + umask 0022
Dec 03 01:05:01 compute-0 ovn_controller[89134]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.2015] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.2038] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.2063] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.2072] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.2078] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 03 01:05:01 compute-0 kernel: br-int: entered promiscuous mode
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 03 01:05:01 compute-0 systemd-udevd[89290]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00022|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 03 01:05:01 compute-0 ovn_controller[89134]: 2025-12-03T01:05:01Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
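At this point ovn-controller has connected to the local ovsdb-server, to the southbound database over SSL, and to the br-int OpenFlow management socket (main, pinctrl, and statctrl connections). A runtime probe sketch, run through the container since the daemon's control socket lives in its mounted run directory:

    # Ask the running ovn-controller about its SB DB connection
    podman exec ovn_controller ovn-appctl -t ovn-controller connection-status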
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.4434] manager: (ovn-b585df-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 03 01:05:01 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.4753] device (genev_sys_6081): carrier: link connected
Dec 03 01:05:01 compute-0 NetworkManager[48912]: <info>  [1764723901.4759] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 03 01:05:01 compute-0 sudo[89397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwdnocthhuwzulyflkrhcinlspcsbjev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723901.1941195-609-214559345873113/AnsiballZ_command.py'
Dec 03 01:05:01 compute-0 sudo[89397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:01 compute-0 python3.9[89399]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:01 compute-0 ovs-vsctl[89400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 03 01:05:01 compute-0 sudo[89397]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:02 compute-0 sudo[89550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgqixhyfpbsznulatsbucxhbeunmkyae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723902.1202977-617-97702729047006/AnsiballZ_command.py'
Dec 03 01:05:02 compute-0 sudo[89550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:02 compute-0 python3.9[89552]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:02 compute-0 ovs-vsctl[89554]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 03 01:05:02 compute-0 sudo[89550]: pam_unix(sudo:session): session closed for user root
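The ERR line above is the expected outcome of probing for an optional key: ovn-cms-options was never set on this chassis, so the plain get fails and the play learns the key is absent before the remove that follows. A quieter probe sketch:

    # --if-exists turns the missing-key error into empty output
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options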
Dec 03 01:05:03 compute-0 sudo[89705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-achdrrqpctginicssyvpsfrtnrbsytde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723903.1981223-631-197341213620681/AnsiballZ_command.py'
Dec 03 01:05:03 compute-0 sudo[89705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:03 compute-0 python3.9[89707]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:03 compute-0 ovs-vsctl[89708]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 03 01:05:03 compute-0 sudo[89705]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:04 compute-0 sshd-session[78544]: Connection closed by 192.168.122.30 port 42694
Dec 03 01:05:04 compute-0 sshd-session[78541]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:05:04 compute-0 systemd-logind[800]: Session 18 logged out. Waiting for processes to exit.
Dec 03 01:05:04 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 03 01:05:04 compute-0 systemd[1]: session-18.scope: Consumed 1min 8.652s CPU time.
Dec 03 01:05:04 compute-0 systemd-logind[800]: Removed session 18.
Dec 03 01:05:10 compute-0 sshd-session[89733]: Accepted publickey for zuul from 192.168.122.30 port 33360 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:05:10 compute-0 systemd-logind[800]: New session 20 of user zuul.
Dec 03 01:05:10 compute-0 systemd[1]: Started Session 20 of User zuul.
Dec 03 01:05:10 compute-0 sshd-session[89733]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:05:11 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 03 01:05:11 compute-0 systemd[89174]: Activating special unit Exit the Session...
Dec 03 01:05:11 compute-0 systemd[89174]: Stopped target Main User Target.
Dec 03 01:05:11 compute-0 systemd[89174]: Stopped target Basic System.
Dec 03 01:05:11 compute-0 systemd[89174]: Stopped target Paths.
Dec 03 01:05:11 compute-0 systemd[89174]: Stopped target Sockets.
Dec 03 01:05:11 compute-0 systemd[89174]: Stopped target Timers.
Dec 03 01:05:11 compute-0 systemd[89174]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 03 01:05:11 compute-0 systemd[89174]: Closed D-Bus User Message Bus Socket.
Dec 03 01:05:11 compute-0 systemd[89174]: Stopped Create User's Volatile Files and Directories.
Dec 03 01:05:11 compute-0 systemd[89174]: Removed slice User Application Slice.
Dec 03 01:05:11 compute-0 systemd[89174]: Reached target Shutdown.
Dec 03 01:05:11 compute-0 systemd[89174]: Finished Exit the Session.
Dec 03 01:05:11 compute-0 systemd[89174]: Reached target Exit the Session.
Dec 03 01:05:11 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 03 01:05:11 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 03 01:05:11 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 03 01:05:11 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 03 01:05:11 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 03 01:05:11 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 03 01:05:11 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 03 01:05:11 compute-0 python3.9[89886]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:05:12 compute-0 sudo[90045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjqjkgyslfvfftridetorpgepzbcjanz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723912.0566428-34-90341028166779/AnsiballZ_command.py'
Dec 03 01:05:12 compute-0 sudo[90045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:12 compute-0 python3.9[90047]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:12 compute-0 sudo[90045]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:13 compute-0 sudo[90210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpqegziqqdasrmvcvjtrxhzgbnnnwnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723913.251511-45-193932129061149/AnsiballZ_systemd_service.py'
Dec 03 01:05:13 compute-0 sudo[90210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:14 compute-0 python3.9[90212]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:05:14 compute-0 systemd[1]: Reloading.
Dec 03 01:05:14 compute-0 systemd-rc-local-generator[90240]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:05:14 compute-0 systemd-sysv-generator[90243]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:05:14 compute-0 sudo[90210]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:15 compute-0 python3.9[90398]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:05:15 compute-0 network[90415]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:05:15 compute-0 network[90416]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:05:15 compute-0 network[90417]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:05:21 compute-0 sudo[90677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzuafpnqldgvdxqqbyzjkqjkosibkcew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723920.889223-64-22919886730695/AnsiballZ_systemd_service.py'
Dec 03 01:05:21 compute-0 sudo[90677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:21 compute-0 python3.9[90679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:05:22 compute-0 sudo[90677]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:22 compute-0 sudo[90830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nazpupfgwxwmtbmpwuqaaxbffujqblap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723922.2419162-64-188881592933971/AnsiballZ_systemd_service.py'
Dec 03 01:05:22 compute-0 sudo[90830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:22 compute-0 python3.9[90832]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:05:22 compute-0 sudo[90830]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:23 compute-0 sudo[90983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgevbszykuxggdmoxdiswyvmergiwlny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723923.1595554-64-51939182973761/AnsiballZ_systemd_service.py'
Dec 03 01:05:23 compute-0 sudo[90983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:23 compute-0 python3.9[90985]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:05:23 compute-0 sudo[90983]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:24 compute-0 sudo[91136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khnjkcwvnpxjtfkvvfnoibcjdeuhdiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723924.0430644-64-70205024140752/AnsiballZ_systemd_service.py'
Dec 03 01:05:24 compute-0 sudo[91136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:24 compute-0 python3.9[91138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:05:24 compute-0 sudo[91136]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:25 compute-0 sudo[91289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjlatxhfgqnbauthczjsxthsudbpvzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723924.9807096-64-277938943353468/AnsiballZ_systemd_service.py'
Dec 03 01:05:25 compute-0 sudo[91289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:25 compute-0 python3.9[91291]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:05:25 compute-0 sudo[91289]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:26 compute-0 sudo[91442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxnbclokmccmgwabxprjgsxwelzfiwnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723925.8773887-64-263436235136773/AnsiballZ_systemd_service.py'
Dec 03 01:05:26 compute-0 sudo[91442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:26 compute-0 python3.9[91444]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:05:26 compute-0 sudo[91442]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:27 compute-0 sudo[91595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pofmqojqhxsyvbmpehiqpqzpjjzniayv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723926.7596245-64-200543130895416/AnsiballZ_systemd_service.py'
Dec 03 01:05:27 compute-0 sudo[91595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:27 compute-0 python3.9[91597]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:05:27 compute-0 sudo[91595]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:28 compute-0 sudo[91748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhccdsogexvkduzsippjpxnqsqbxajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723928.318239-116-277311427535759/AnsiballZ_file.py'
Dec 03 01:05:28 compute-0 sudo[91748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:29 compute-0 python3.9[91750]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:29 compute-0 sudo[91748]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:29 compute-0 sudo[91900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpffvrevwxhhatwtacqmqtxokbaxcmvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723929.236649-116-91291945310441/AnsiballZ_file.py'
Dec 03 01:05:29 compute-0 sudo[91900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:29 compute-0 python3.9[91902]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:29 compute-0 sudo[91900]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:30 compute-0 sudo[92052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsubiwqusgibqzdxqojnmcrptgmudikb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723929.9767015-116-240899562495215/AnsiballZ_file.py'
Dec 03 01:05:30 compute-0 sudo[92052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:30 compute-0 python3.9[92054]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:30 compute-0 sudo[92052]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:30 compute-0 ovn_controller[89134]: 2025-12-03T01:05:30Z|00025|memory|INFO|16000 kB peak resident set size after 29.8 seconds
Dec 03 01:05:30 compute-0 ovn_controller[89134]: 2025-12-03T01:05:30Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 03 01:05:30 compute-0 podman[92109]: 2025-12-03 01:05:30.884788592 +0000 UTC m=+0.131363422 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 03 01:05:31 compute-0 sudo[92230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onixnbnkgckuzqqdgfeyhyabodptjnhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723930.6800601-116-146155949498171/AnsiballZ_file.py'
Dec 03 01:05:31 compute-0 sudo[92230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:31 compute-0 python3.9[92232]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:31 compute-0 sudo[92230]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:31 compute-0 sudo[92382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnlawszyfdzwbuzwjukagdqvuhflydvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723931.4749732-116-53401364626703/AnsiballZ_file.py'
Dec 03 01:05:31 compute-0 sudo[92382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:32 compute-0 python3.9[92384]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:32 compute-0 sudo[92382]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:32 compute-0 sudo[92535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdpnnjyifydhxxuldhalhvneasfviqvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723932.214661-116-254501292388752/AnsiballZ_file.py'
Dec 03 01:05:32 compute-0 sudo[92535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:32 compute-0 python3.9[92537]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:32 compute-0 sudo[92535]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:33 compute-0 sudo[92687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qquvufuyxnaeenwlhptmynorclscjogy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723933.0038471-116-113304550058733/AnsiballZ_file.py'
Dec 03 01:05:33 compute-0 sudo[92687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:33 compute-0 python3.9[92689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:33 compute-0 sudo[92687]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:34 compute-0 sudo[92839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkliqzqrinlqdwskswnvnwckgytlfgjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723933.9672668-166-2027583262172/AnsiballZ_file.py'
Dec 03 01:05:34 compute-0 sudo[92839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:34 compute-0 python3.9[92841]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:34 compute-0 sudo[92839]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:35 compute-0 sudo[92991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmdjysztudjcmjrrvjxyhyenkybepzcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723934.7800076-166-254126685165666/AnsiballZ_file.py'
Dec 03 01:05:35 compute-0 sudo[92991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:35 compute-0 python3.9[92993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:35 compute-0 sudo[92991]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:35 compute-0 sudo[93143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqltmxgogecmneleemuffiyrnubyzqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723935.5615935-166-132942822457198/AnsiballZ_file.py'
Dec 03 01:05:35 compute-0 sudo[93143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:36 compute-0 python3.9[93145]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:36 compute-0 sudo[93143]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:36 compute-0 sudo[93295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkenynyhbeorsmuyddmmbeuuavsypww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723936.3603368-166-81514639307983/AnsiballZ_file.py'
Dec 03 01:05:36 compute-0 sudo[93295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:36 compute-0 python3.9[93297]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:36 compute-0 sudo[93295]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:37 compute-0 sudo[93447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yixwaumznlqosehzajxgmagwurxepajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723937.235058-166-277416751445821/AnsiballZ_file.py'
Dec 03 01:05:37 compute-0 sudo[93447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:37 compute-0 python3.9[93449]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:37 compute-0 sudo[93447]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:38 compute-0 sudo[93599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfgomssyuefbvxzbnmlwwmbgzhjecpbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723937.9971929-166-118596980016886/AnsiballZ_file.py'
Dec 03 01:05:38 compute-0 sudo[93599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:38 compute-0 python3.9[93601]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:38 compute-0 sudo[93599]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:39 compute-0 sudo[93751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvpvzbzssvgqwugozukhlilkrhfadsfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723938.7935207-166-12509800758381/AnsiballZ_file.py'
Dec 03 01:05:39 compute-0 sudo[93751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:39 compute-0 python3.9[93753]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:05:39 compute-0 sudo[93751]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:40 compute-0 sudo[93903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loeezagqralofefhbonvnxypgracwgow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723939.7012172-217-263953645442097/AnsiballZ_command.py'
Dec 03 01:05:40 compute-0 sudo[93903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:40 compute-0 python3.9[93905]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                              systemctl disable --now certmonger.service
                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                            fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:40 compute-0 sudo[93903]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:41 compute-0 python3.9[94057]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:05:42 compute-0 sudo[94207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vglqlzlajwmoicfyajrvwamxbnoqlvgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723941.7043586-235-39070030331785/AnsiballZ_systemd_service.py'
Dec 03 01:05:42 compute-0 sudo[94207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:42 compute-0 python3.9[94209]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:05:42 compute-0 systemd[1]: Reloading.
Dec 03 01:05:42 compute-0 systemd-sysv-generator[94239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:05:42 compute-0 systemd-rc-local-generator[94233]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:05:42 compute-0 sudo[94207]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:43 compute-0 sudo[94394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjyufekdttbazsqdfejabqoqlglafwix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723942.9299307-243-90303271768216/AnsiballZ_command.py'
Dec 03 01:05:43 compute-0 sudo[94394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:43 compute-0 python3.9[94396]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:43 compute-0 sudo[94394]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:44 compute-0 sudo[94547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fssttyeezjyqhalnoxzaxchhvyrjhyxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723943.7253861-243-120453124187321/AnsiballZ_command.py'
Dec 03 01:05:44 compute-0 sudo[94547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:44 compute-0 python3.9[94549]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:44 compute-0 sudo[94547]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:45 compute-0 sudo[94700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxfnxzxnmkkwnslebbtacjdhlbplfzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723944.673751-243-16485501441984/AnsiballZ_command.py'
Dec 03 01:05:45 compute-0 sudo[94700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:45 compute-0 python3.9[94702]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:45 compute-0 sudo[94700]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:45 compute-0 sudo[94853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lopmzfhrvrjbmzbrhstcztruflxafexp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723945.4116158-243-136459188007241/AnsiballZ_command.py'
Dec 03 01:05:45 compute-0 sudo[94853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:45 compute-0 python3.9[94855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:45 compute-0 sudo[94853]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:46 compute-0 sudo[95006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeefejszcljxjunolwebkxtymqrwfifu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723946.111727-243-110571563630195/AnsiballZ_command.py'
Dec 03 01:05:46 compute-0 sudo[95006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:46 compute-0 python3.9[95008]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:46 compute-0 sudo[95006]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:47 compute-0 sudo[95159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqfjjnajzfkmfecrogvmbwmgctgboliy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723946.9026878-243-237740208673298/AnsiballZ_command.py'
Dec 03 01:05:47 compute-0 sudo[95159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:47 compute-0 python3.9[95161]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:47 compute-0 sudo[95159]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:48 compute-0 sudo[95312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dltnqtpchntffxtgvzkrwumxhomsvbll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723947.6983867-243-212001436617886/AnsiballZ_command.py'
Dec 03 01:05:48 compute-0 sudo[95312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:48 compute-0 python3.9[95314]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:05:48 compute-0 sudo[95312]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:49 compute-0 sudo[95465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncfpmvydofnuvskihxxqitgmptyfnqzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723948.9752126-297-25053804250032/AnsiballZ_getent.py'
Dec 03 01:05:49 compute-0 sudo[95465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:49 compute-0 python3.9[95467]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 03 01:05:49 compute-0 sudo[95465]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:50 compute-0 sudo[95618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgmyzrwqtozmkwckpmsdxkobniyfuufk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723949.9481301-305-245315238657275/AnsiballZ_group.py'
Dec 03 01:05:50 compute-0 sudo[95618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:50 compute-0 python3.9[95620]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 03 01:05:50 compute-0 groupadd[95621]: group added to /etc/group: name=libvirt, GID=42473
Dec 03 01:05:50 compute-0 groupadd[95621]: group added to /etc/gshadow: name=libvirt
Dec 03 01:05:50 compute-0 groupadd[95621]: new group: name=libvirt, GID=42473
Dec 03 01:05:50 compute-0 sudo[95618]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:51 compute-0 sudo[95776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooxgxyjqoterhbpsxwpwewkqwhyupcof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723951.0270712-313-65974604793565/AnsiballZ_user.py'
Dec 03 01:05:51 compute-0 sudo[95776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:51 compute-0 python3.9[95778]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 03 01:05:51 compute-0 useradd[95780]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 03 01:05:52 compute-0 sudo[95776]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:52 compute-0 sudo[95936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peqbxewfusxdeezqeulvtlarrpxaprlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723952.4240544-324-63571775612561/AnsiballZ_setup.py'
Dec 03 01:05:52 compute-0 sudo[95936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:53 compute-0 python3.9[95938]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:05:53 compute-0 sudo[95936]: pam_unix(sudo:session): session closed for user root
Dec 03 01:05:54 compute-0 sudo[96020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdluitatycwpxfenbupycjhbnokxhcse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764723952.4240544-324-63571775612561/AnsiballZ_dnf.py'
Dec 03 01:05:54 compute-0 sudo[96020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:05:54 compute-0 python3.9[96022]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:06:01 compute-0 podman[96057]: 2025-12-03 01:06:01.873477926 +0000 UTC m=+0.131429573 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 03 01:06:21 compute-0 kernel: SELinux:  Converting 2757 SID table entries...
Dec 03 01:06:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 01:06:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 03 01:06:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 01:06:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 03 01:06:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 01:06:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 01:06:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 01:06:30 compute-0 kernel: SELinux:  Converting 2757 SID table entries...
Dec 03 01:06:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 01:06:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 03 01:06:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 01:06:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 03 01:06:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 01:06:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 01:06:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 01:06:32 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 03 01:06:32 compute-0 podman[96253]: 2025-12-03 01:06:32.894484718 +0000 UTC m=+0.137511850 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 01:07:03 compute-0 podman[105516]: 2025-12-03 01:07:03.878027859 +0000 UTC m=+0.131158322 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:07:33 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Dec 03 01:07:33 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 03 01:07:33 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 03 01:07:33 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 03 01:07:33 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 03 01:07:33 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 03 01:07:33 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 03 01:07:33 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 03 01:07:34 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 03 01:07:34 compute-0 podman[113121]: 2025-12-03 01:07:34.49444082 +0000 UTC m=+0.157533759 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:07:34 compute-0 groupadd[113150]: group added to /etc/group: name=dnsmasq, GID=992
Dec 03 01:07:34 compute-0 groupadd[113150]: group added to /etc/gshadow: name=dnsmasq
Dec 03 01:07:34 compute-0 groupadd[113150]: new group: name=dnsmasq, GID=992
Dec 03 01:07:34 compute-0 useradd[113157]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 03 01:07:34 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 03 01:07:34 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 03 01:07:35 compute-0 groupadd[113170]: group added to /etc/group: name=clevis, GID=991
Dec 03 01:07:35 compute-0 groupadd[113170]: group added to /etc/gshadow: name=clevis
Dec 03 01:07:35 compute-0 groupadd[113170]: new group: name=clevis, GID=991
Dec 03 01:07:35 compute-0 useradd[113177]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 03 01:07:35 compute-0 usermod[113187]: add 'clevis' to group 'tss'
Dec 03 01:07:35 compute-0 usermod[113187]: add 'clevis' to shadow group 'tss'
Dec 03 01:07:38 compute-0 polkitd[43396]: Reloading rules
Dec 03 01:07:38 compute-0 polkitd[43396]: Collecting garbage unconditionally...
Dec 03 01:07:38 compute-0 polkitd[43396]: Loading rules from directory /etc/polkit-1/rules.d
Dec 03 01:07:38 compute-0 polkitd[43396]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 03 01:07:38 compute-0 polkitd[43396]: Finished loading, compiling and executing 3 rules
Dec 03 01:07:38 compute-0 polkitd[43396]: Reloading rules
Dec 03 01:07:38 compute-0 polkitd[43396]: Collecting garbage unconditionally...
Dec 03 01:07:38 compute-0 polkitd[43396]: Loading rules from directory /etc/polkit-1/rules.d
Dec 03 01:07:38 compute-0 polkitd[43396]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 03 01:07:38 compute-0 polkitd[43396]: Finished loading, compiling and executing 3 rules
Dec 03 01:07:40 compute-0 groupadd[113374]: group added to /etc/group: name=ceph, GID=167
Dec 03 01:07:40 compute-0 groupadd[113374]: group added to /etc/gshadow: name=ceph
Dec 03 01:07:40 compute-0 groupadd[113374]: new group: name=ceph, GID=167
Dec 03 01:07:40 compute-0 useradd[113380]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 03 01:07:43 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 03 01:07:43 compute-0 sshd[1005]: Received signal 15; terminating.
Dec 03 01:07:43 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 03 01:07:43 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 03 01:07:43 compute-0 systemd[1]: sshd.service: Consumed 1.942s CPU time, read 32.0K from disk, written 4.0K to disk.
Dec 03 01:07:43 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 03 01:07:43 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 03 01:07:43 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 03 01:07:43 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 03 01:07:43 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 03 01:07:43 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 03 01:07:43 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 03 01:07:43 compute-0 sshd[113879]: Server listening on 0.0.0.0 port 22.
Dec 03 01:07:43 compute-0 sshd[113879]: Server listening on :: port 22.
Dec 03 01:07:43 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 03 01:07:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 01:07:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 01:07:46 compute-0 systemd[1]: Reloading.
Dec 03 01:07:46 compute-0 systemd-rc-local-generator[114138]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:46 compute-0 systemd-sysv-generator[114142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:46 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 01:07:49 compute-0 sudo[96020]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:50 compute-0 sudo[117905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhyggdviipvqoftqfhnpmvdmtdwhczck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724069.9635518-336-57236732068007/AnsiballZ_systemd.py'
Dec 03 01:07:50 compute-0 sudo[117905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:07:51 compute-0 python3.9[117935]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:07:51 compute-0 systemd[1]: Reloading.
Dec 03 01:07:51 compute-0 systemd-rc-local-generator[118318]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:51 compute-0 systemd-sysv-generator[118321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:51 compute-0 sudo[117905]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:52 compute-0 sudo[119099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikssydcggejpgrynobvctzhyjeqoddft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724071.731204-336-178747956121149/AnsiballZ_systemd.py'
Dec 03 01:07:52 compute-0 sudo[119099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:07:52 compute-0 python3.9[119120]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:07:52 compute-0 systemd[1]: Reloading.
Dec 03 01:07:52 compute-0 systemd-rc-local-generator[119538]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:52 compute-0 systemd-sysv-generator[119542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:52 compute-0 sudo[119099]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:53 compute-0 sudo[120330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzyxanhttdveywnovubfxwbtcmnydyou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724073.0710194-336-143442668379038/AnsiballZ_systemd.py'
Dec 03 01:07:53 compute-0 sudo[120330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:07:53 compute-0 python3.9[120351]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:07:53 compute-0 systemd[1]: Reloading.
Dec 03 01:07:53 compute-0 systemd-rc-local-generator[120722]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:53 compute-0 systemd-sysv-generator[120726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:54 compute-0 sudo[120330]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:54 compute-0 sudo[121494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmzznewrzrlyxrccregatqbudwvphyol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724074.2468836-336-156440552622100/AnsiballZ_systemd.py'
Dec 03 01:07:54 compute-0 sudo[121494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:07:54 compute-0 python3.9[121520]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:07:55 compute-0 systemd[1]: Reloading.
Dec 03 01:07:55 compute-0 systemd-rc-local-generator[121850]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:55 compute-0 systemd-sysv-generator[121854]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:55 compute-0 sudo[121494]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:55 compute-0 sudo[122720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqwbsrannsivtkzcjhvyclcqlmmivewz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724075.6437447-365-178518838059137/AnsiballZ_systemd.py'
Dec 03 01:07:56 compute-0 sudo[122720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:07:56 compute-0 python3.9[122743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:07:56 compute-0 systemd[1]: Reloading.
Dec 03 01:07:56 compute-0 systemd-rc-local-generator[123169]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:56 compute-0 systemd-sysv-generator[123173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:56 compute-0 sudo[122720]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 01:07:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 01:07:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 13.821s CPU time.
Dec 03 01:07:57 compute-0 systemd[1]: run-r42de0716c2164488920e24670160307f.service: Deactivated successfully.
Dec 03 01:07:57 compute-0 sudo[123631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzesqcztmsqsrcpsmfierucdkqofmsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724076.9285188-365-60911303713990/AnsiballZ_systemd.py'
Dec 03 01:07:57 compute-0 sudo[123631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:07:57 compute-0 python3.9[123633]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:07:57 compute-0 systemd[1]: Reloading.
Dec 03 01:07:57 compute-0 systemd-rc-local-generator[123664]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:57 compute-0 systemd-sysv-generator[123668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:58 compute-0 sudo[123631]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:58 compute-0 sudo[123821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnbwjlbunsoxvxzqmztsrrrbcnowhaox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724078.1951218-365-15358618536220/AnsiballZ_systemd.py'
Dec 03 01:07:58 compute-0 sudo[123821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:07:58 compute-0 python3.9[123823]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:07:59 compute-0 systemd[1]: Reloading.
Dec 03 01:07:59 compute-0 systemd-sysv-generator[123856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:07:59 compute-0 systemd-rc-local-generator[123851]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:07:59 compute-0 sudo[123821]: pam_unix(sudo:session): session closed for user root
Dec 03 01:07:59 compute-0 sudo[124011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufordemdyuhaigzjrrbitqhdqrragzsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724079.4900892-365-201212873279319/AnsiballZ_systemd.py'
Dec 03 01:07:59 compute-0 sudo[124011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:00 compute-0 python3.9[124013]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:00 compute-0 sudo[124011]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:00 compute-0 sudo[124166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-posvoymevkviueigrtljshrnrmvymtsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724080.6278396-365-184525878958980/AnsiballZ_systemd.py'
Dec 03 01:08:00 compute-0 sudo[124166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:01 compute-0 python3.9[124168]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:01 compute-0 systemd[1]: Reloading.
Dec 03 01:08:01 compute-0 systemd-sysv-generator[124199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:08:01 compute-0 systemd-rc-local-generator[124194]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:08:01 compute-0 sudo[124166]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:02 compute-0 sudo[124356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxrkpjixdracjtoxwlnaajyzmhksjva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724081.9409115-401-169105927838134/AnsiballZ_systemd.py'
Dec 03 01:08:02 compute-0 sudo[124356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:02 compute-0 python3.9[124358]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:08:02 compute-0 systemd[1]: Reloading.
Dec 03 01:08:02 compute-0 systemd-rc-local-generator[124388]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:08:02 compute-0 systemd-sysv-generator[124393]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:08:03 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 03 01:08:03 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 03 01:08:03 compute-0 sudo[124356]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:03 compute-0 sudo[124549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfzdgryoxpcddjafgysxcxyxbbjscyil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724083.3649416-409-194413417179889/AnsiballZ_systemd.py'
Dec 03 01:08:03 compute-0 sudo[124549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:04 compute-0 python3.9[124551]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:04 compute-0 sudo[124549]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:04 compute-0 sudo[124724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiwxnyvvsbmtslumdbweuwgzxhxvghgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724084.4063466-409-6562733050506/AnsiballZ_systemd.py'
Dec 03 01:08:04 compute-0 sudo[124724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:04 compute-0 podman[124654]: 2025-12-03 01:08:04.932136646 +0000 UTC m=+0.182956403 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 01:08:05 compute-0 python3.9[124729]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:06 compute-0 sudo[124724]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:07 compute-0 sudo[124886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogwqfvhewzjsvceqeuwbifiyrehqilpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724086.6897545-409-193153992421177/AnsiballZ_systemd.py'
Dec 03 01:08:07 compute-0 sudo[124886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:07 compute-0 python3.9[124888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:07 compute-0 sudo[124886]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:08 compute-0 sudo[125041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luxyjzgprtzkjcudhqqrqjjgnfvpxswy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724087.681858-409-89948328252128/AnsiballZ_systemd.py'
Dec 03 01:08:08 compute-0 sudo[125041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:08 compute-0 python3.9[125043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:08 compute-0 sudo[125041]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:08 compute-0 sudo[125196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rijxtiuydkjilscecxaabeiyqcwieaaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724088.6098084-409-84089581709831/AnsiballZ_systemd.py'
Dec 03 01:08:08 compute-0 sudo[125196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:09 compute-0 python3.9[125198]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:09 compute-0 sudo[125196]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:10 compute-0 sudo[125351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccaigebbrnglsfffdiiiypbrtrrkaoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724089.763815-409-210009954241722/AnsiballZ_systemd.py'
Dec 03 01:08:10 compute-0 sudo[125351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:10 compute-0 python3.9[125353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:10 compute-0 sudo[125351]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:11 compute-0 sudo[125506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttxjqatyhrsjtruznfuqjtcoaefnarwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724090.7877183-409-6409289089750/AnsiballZ_systemd.py'
Dec 03 01:08:11 compute-0 sudo[125506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:11 compute-0 python3.9[125508]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:11 compute-0 sudo[125506]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:12 compute-0 sudo[125661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rscheamnpplggjlragvcaobxempbshjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724091.8068578-409-229562932757346/AnsiballZ_systemd.py'
Dec 03 01:08:12 compute-0 sudo[125661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:12 compute-0 python3.9[125663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:12 compute-0 sudo[125661]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:13 compute-0 sudo[125816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtsjvhmwgarflzjjonmepsesnugvlzqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724092.8269153-409-7755186559462/AnsiballZ_systemd.py'
Dec 03 01:08:13 compute-0 sudo[125816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:13 compute-0 python3.9[125818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:14 compute-0 sudo[125816]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:15 compute-0 sudo[125971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcgjnzoihmuiirqtbjkwzxvnkcadyqgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724095.040832-409-67409199322145/AnsiballZ_systemd.py'
Dec 03 01:08:15 compute-0 sudo[125971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:15 compute-0 python3.9[125973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:15 compute-0 sudo[125971]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:16 compute-0 sudo[126126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvikcyuprbrsovtbmwszzsttnfbhbuws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724096.0381641-409-14038701969476/AnsiballZ_systemd.py'
Dec 03 01:08:16 compute-0 sudo[126126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:16 compute-0 python3.9[126128]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:16 compute-0 sudo[126126]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:17 compute-0 sudo[126281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noysiuyvzmhhgdusevmomeeznoengunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724097.056906-409-276237019264874/AnsiballZ_systemd.py'
Dec 03 01:08:17 compute-0 sudo[126281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:17 compute-0 python3.9[126283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:18 compute-0 sudo[126281]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:19 compute-0 sudo[126437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhxglkqlrngqovtemiitcdykyrktirly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724099.103526-409-40379069108245/AnsiballZ_systemd.py'
Dec 03 01:08:19 compute-0 sudo[126437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:19 compute-0 python3.9[126439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:19 compute-0 sudo[126437]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:20 compute-0 sudo[126592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axjfybzmphsvptikvvxcowkhjeuyxpha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724100.1101358-409-277223613285145/AnsiballZ_systemd.py'
Dec 03 01:08:20 compute-0 sudo[126592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:20 compute-0 python3.9[126594]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:08:20 compute-0 sudo[126592]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:21 compute-0 sudo[126747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodkymxidjzstucptkkcgagoroftiqgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724101.4031339-511-125175612178920/AnsiballZ_file.py'
Dec 03 01:08:21 compute-0 sudo[126747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:21 compute-0 python3.9[126749]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:08:21 compute-0 sudo[126747]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:22 compute-0 sudo[126899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwsuvnpqvhjlagrjihcpxupmbvhjtac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724102.185603-511-39510454394981/AnsiballZ_file.py'
Dec 03 01:08:22 compute-0 sudo[126899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:22 compute-0 python3.9[126901]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:08:22 compute-0 sudo[126899]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:23 compute-0 sudo[127051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riozsanpvcousglddimqqglobctabuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724102.9577181-511-30749783105177/AnsiballZ_file.py'
Dec 03 01:08:23 compute-0 sudo[127051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:23 compute-0 python3.9[127053]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:08:23 compute-0 sudo[127051]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:24 compute-0 sudo[127203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgijdflvfnfgfcyqautpjjowdwyhcoho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724103.7367053-511-173747768425769/AnsiballZ_file.py'
Dec 03 01:08:24 compute-0 sudo[127203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:24 compute-0 python3.9[127205]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:08:24 compute-0 sudo[127203]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:24 compute-0 sudo[127355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qioxrydmjbszngsrzkhlarspyvfhfase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724104.5402815-511-259706755058532/AnsiballZ_file.py'
Dec 03 01:08:24 compute-0 sudo[127355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:25 compute-0 python3.9[127357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:08:25 compute-0 sudo[127355]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:25 compute-0 sudo[127507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwplmmjtcpmpxawbynxxqvpdnpaqlwea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724105.2718148-511-14010921819106/AnsiballZ_file.py'
Dec 03 01:08:25 compute-0 sudo[127507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:25 compute-0 python3.9[127509]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:08:25 compute-0 sudo[127507]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:26 compute-0 sudo[127659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebsduxuyrlmaqmykadomcyxfepeaknpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724106.1147103-554-64980507818704/AnsiballZ_stat.py'
Dec 03 01:08:26 compute-0 sudo[127659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:27 compute-0 python3.9[127661]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:27 compute-0 sudo[127659]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:27 compute-0 sudo[127784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjpljacxwwhcvatrvzcdipbtrwrnkla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724106.1147103-554-64980507818704/AnsiballZ_copy.py'
Dec 03 01:08:27 compute-0 sudo[127784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:28 compute-0 python3.9[127786]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724106.1147103-554-64980507818704/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:28 compute-0 sudo[127784]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:28 compute-0 sudo[127936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmaqsvosphwtkxxwmbpcdazqrzxardjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724108.264661-554-203054879063040/AnsiballZ_stat.py'
Dec 03 01:08:28 compute-0 sudo[127936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:28 compute-0 python3.9[127938]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:28 compute-0 sudo[127936]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:29 compute-0 sudo[128061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niuzbipbnlhssxnuaosyehaiqlreitth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724108.264661-554-203054879063040/AnsiballZ_copy.py'
Dec 03 01:08:29 compute-0 sudo[128061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:29 compute-0 python3.9[128063]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724108.264661-554-203054879063040/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:29 compute-0 sudo[128061]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:30 compute-0 sudo[128213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owynxmkfjmmppycaorynptesxzmfpddu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724109.8201556-554-237003242366092/AnsiballZ_stat.py'
Dec 03 01:08:30 compute-0 sudo[128213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:30 compute-0 python3.9[128215]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:30 compute-0 sudo[128213]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:31 compute-0 sudo[128338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnfotdvaeshwosrpvsosbumkufzeqvob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724109.8201556-554-237003242366092/AnsiballZ_copy.py'
Dec 03 01:08:31 compute-0 sudo[128338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:31 compute-0 python3.9[128340]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724109.8201556-554-237003242366092/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:31 compute-0 sudo[128338]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:31 compute-0 sudo[128490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luuyqcxhvsznqglcpjmmnfepphjfktqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724111.4959571-554-136141935862436/AnsiballZ_stat.py'
Dec 03 01:08:31 compute-0 sudo[128490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:32 compute-0 python3.9[128492]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:32 compute-0 sudo[128490]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:32 compute-0 sudo[128615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iowtxmpctputfsoeljvfyxmaftrwcpdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724111.4959571-554-136141935862436/AnsiballZ_copy.py'
Dec 03 01:08:32 compute-0 sudo[128615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:32 compute-0 python3.9[128617]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724111.4959571-554-136141935862436/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:32 compute-0 sudo[128615]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:33 compute-0 sudo[128767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plmkbvkzbroygkylotynrxwszlylnfee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724113.0482216-554-86244142630330/AnsiballZ_stat.py'
Dec 03 01:08:33 compute-0 sudo[128767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:33 compute-0 python3.9[128769]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:33 compute-0 sudo[128767]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:34 compute-0 sudo[128892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvgdwfqzxhizewurxgnshnjqetqqdho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724113.0482216-554-86244142630330/AnsiballZ_copy.py'
Dec 03 01:08:34 compute-0 sudo[128892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:34 compute-0 python3.9[128894]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724113.0482216-554-86244142630330/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:34 compute-0 sudo[128892]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:35 compute-0 sudo[129055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljsfuiqibnbojtwfqvtfvjkqhlneeysb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724114.6510894-554-208574331546034/AnsiballZ_stat.py'
Dec 03 01:08:35 compute-0 sudo[129055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:35 compute-0 podman[129018]: 2025-12-03 01:08:35.249076938 +0000 UTC m=+0.132170509 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:08:35 compute-0 python3.9[129065]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:35 compute-0 sudo[129055]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:35 compute-0 sudo[129196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biohmuxpjokfyywotubyowdvzmuwfymx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724114.6510894-554-208574331546034/AnsiballZ_copy.py'
Dec 03 01:08:35 compute-0 sudo[129196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:36 compute-0 python3.9[129198]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724114.6510894-554-208574331546034/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:36 compute-0 sudo[129196]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:36 compute-0 sudo[129348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btejwrlyfswhlmiuihochpqahfprasie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724116.2701998-554-188144601487620/AnsiballZ_stat.py'
Dec 03 01:08:36 compute-0 sudo[129348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:36 compute-0 python3.9[129350]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:36 compute-0 sudo[129348]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:37 compute-0 sudo[129471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevxzropupaawvzdjyqihdgorfbnpeau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724116.2701998-554-188144601487620/AnsiballZ_copy.py'
Dec 03 01:08:37 compute-0 sudo[129471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:37 compute-0 python3.9[129473]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724116.2701998-554-188144601487620/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:37 compute-0 sudo[129471]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:38 compute-0 sudo[129623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnlmaeduhazmjescobygsmluhmhdnay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724117.7536788-554-242960495565794/AnsiballZ_stat.py'
Dec 03 01:08:38 compute-0 sudo[129623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:38 compute-0 python3.9[129625]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:38 compute-0 sudo[129623]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:38 compute-0 sudo[129748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziixubioqogfyzmlygempagplgozxfsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724117.7536788-554-242960495565794/AnsiballZ_copy.py'
Dec 03 01:08:38 compute-0 sudo[129748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:39 compute-0 python3.9[129750]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724117.7536788-554-242960495565794/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:39 compute-0 sudo[129748]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:39 compute-0 sudo[129900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilxqdszuigvknzgnbrchpbimcyklxqym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724119.4843774-667-199788607681480/AnsiballZ_command.py'
Dec 03 01:08:39 compute-0 sudo[129900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:40 compute-0 python3.9[129902]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 03 01:08:40 compute-0 sudo[129900]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:40 compute-0 sudo[130053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moxrvlvsnbaoaysoppnkqnxaacltxqyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724120.412426-676-168992313743397/AnsiballZ_file.py'
Dec 03 01:08:40 compute-0 sudo[130053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:41 compute-0 python3.9[130055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:41 compute-0 sudo[130053]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:41 compute-0 sudo[130205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrwwitgoeqiabmvhfplxndmgqjauanyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724121.2302625-676-212697209481725/AnsiballZ_file.py'
Dec 03 01:08:41 compute-0 sudo[130205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:41 compute-0 python3.9[130207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:41 compute-0 sudo[130205]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:42 compute-0 sudo[130357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aawtztvitxzwumsbrtbqyfuxtlsbpdhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724122.0628686-676-251411116544867/AnsiballZ_file.py'
Dec 03 01:08:42 compute-0 sudo[130357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:42 compute-0 python3.9[130359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:42 compute-0 sudo[130357]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:43 compute-0 sudo[130509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaftachyizkfblvhjlfuzkhpdzxhunuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724122.981862-676-75036903985978/AnsiballZ_file.py'
Dec 03 01:08:43 compute-0 sudo[130509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:43 compute-0 python3.9[130511]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:43 compute-0 sudo[130509]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:44 compute-0 sudo[130661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdhsjymdynhbwyswynaxipuazzylxqpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724124.0081213-676-218251556765419/AnsiballZ_file.py'
Dec 03 01:08:44 compute-0 sudo[130661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:44 compute-0 python3.9[130663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:44 compute-0 sudo[130661]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:45 compute-0 sudo[130813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eegdyodchozizkigxzymataxtnjvykmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724124.868378-676-161042163776115/AnsiballZ_file.py'
Dec 03 01:08:45 compute-0 sudo[130813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:45 compute-0 python3.9[130815]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:45 compute-0 sudo[130813]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:46 compute-0 sudo[130965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaqhtqiybaphpbdycgdkkbhxfrwsnjhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724125.6901436-676-245433207166952/AnsiballZ_file.py'
Dec 03 01:08:46 compute-0 sudo[130965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:46 compute-0 python3.9[130967]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:46 compute-0 sudo[130965]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:46 compute-0 sudo[131117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxnmjloelqyzzjcetfuisyqfpputwzlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724126.4793012-676-176792990440261/AnsiballZ_file.py'
Dec 03 01:08:46 compute-0 sudo[131117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:47 compute-0 python3.9[131119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:47 compute-0 sudo[131117]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:47 compute-0 sudo[131269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttimrqereihjvwrfpkafqdfrryojgone ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724127.3661425-676-178958622001590/AnsiballZ_file.py'
Dec 03 01:08:47 compute-0 sudo[131269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:47 compute-0 python3.9[131271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:47 compute-0 sudo[131269]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:48 compute-0 sudo[131421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvwisqaixnduvjsccfdudjxzlwczftpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724128.1565552-676-30755846998886/AnsiballZ_file.py'
Dec 03 01:08:48 compute-0 sudo[131421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:48 compute-0 python3.9[131423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:48 compute-0 sudo[131421]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:49 compute-0 sudo[131573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boxpxbhkdxsrewjpmuhjuwuuwzwqvxpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724128.9902146-676-226381482914470/AnsiballZ_file.py'
Dec 03 01:08:49 compute-0 sudo[131573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:49 compute-0 python3.9[131575]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:49 compute-0 sudo[131573]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:50 compute-0 sudo[131725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmspxxncesqxkncrvdhxctrwduopfees ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724129.8651986-676-155177922602324/AnsiballZ_file.py'
Dec 03 01:08:50 compute-0 sudo[131725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:50 compute-0 python3.9[131727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:50 compute-0 sudo[131725]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:50 compute-0 sudo[131877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyitkjwfftsmeorotxhvqcvpkmkpzeod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724130.603408-676-102475078449271/AnsiballZ_file.py'
Dec 03 01:08:50 compute-0 sudo[131877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:51 compute-0 python3.9[131879]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:51 compute-0 sudo[131877]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:51 compute-0 sudo[132029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrlvjvaocyqkgclqfvjokhsqezmgqlvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724131.4000669-676-111621923301663/AnsiballZ_file.py'
Dec 03 01:08:51 compute-0 sudo[132029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:51 compute-0 python3.9[132031]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:51 compute-0 sudo[132029]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:52 compute-0 sudo[132181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbhvzklnwdqjyxaxgghazedyzpksljgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724132.270563-775-281344316964294/AnsiballZ_stat.py'
Dec 03 01:08:52 compute-0 sudo[132181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:52 compute-0 python3.9[132183]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:52 compute-0 sudo[132181]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:53 compute-0 sudo[132304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqnzgfczwgpttwjkbwvzhidihwtqfchc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724132.270563-775-281344316964294/AnsiballZ_copy.py'
Dec 03 01:08:53 compute-0 sudo[132304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:53 compute-0 python3.9[132306]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724132.270563-775-281344316964294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:53 compute-0 sudo[132304]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:54 compute-0 sudo[132456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsqicrwjmyfkziznohzshzfrvrfdabhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724133.836852-775-17713208219975/AnsiballZ_stat.py'
Dec 03 01:08:54 compute-0 sudo[132456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:54 compute-0 python3.9[132458]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:54 compute-0 sudo[132456]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:54 compute-0 sudo[132579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyhukhtjbgwiztpxiqyvwuroubmynrzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724133.836852-775-17713208219975/AnsiballZ_copy.py'
Dec 03 01:08:54 compute-0 sudo[132579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:55 compute-0 python3.9[132581]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724133.836852-775-17713208219975/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:55 compute-0 sudo[132579]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:55 compute-0 sudo[132731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csyaqchfdiiywrjdzkydwlxnxrobdyxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724135.3225746-775-180398792079461/AnsiballZ_stat.py'
Dec 03 01:08:55 compute-0 sudo[132731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:55 compute-0 python3.9[132733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:55 compute-0 sudo[132731]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:56 compute-0 sudo[132854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqxqevezjzxvncrtxaxcujlrlkztnosd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724135.3225746-775-180398792079461/AnsiballZ_copy.py'
Dec 03 01:08:56 compute-0 sudo[132854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:56 compute-0 python3.9[132856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724135.3225746-775-180398792079461/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:56 compute-0 sudo[132854]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:57 compute-0 sudo[133006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnzbabhqklgzifowdscihsiypkntzuyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724136.8637676-775-112816617984071/AnsiballZ_stat.py'
Dec 03 01:08:57 compute-0 sudo[133006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:57 compute-0 python3.9[133008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:57 compute-0 sudo[133006]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:57 compute-0 sudo[133129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpfdhndqnnukewrdfptzvnbtkkgbjpwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724136.8637676-775-112816617984071/AnsiballZ_copy.py'
Dec 03 01:08:57 compute-0 sudo[133129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:58 compute-0 python3.9[133131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724136.8637676-775-112816617984071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:58 compute-0 sudo[133129]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:58 compute-0 sudo[133281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrepzdpivsljgmdsotxrrrxvtsjbkxma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724138.3357427-775-277679050298585/AnsiballZ_stat.py'
Dec 03 01:08:58 compute-0 sudo[133281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:58 compute-0 python3.9[133283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:08:58 compute-0 sudo[133281]: pam_unix(sudo:session): session closed for user root
Dec 03 01:08:59 compute-0 sudo[133404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfvqtcygpybbiobujvxirhwlqatazgaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724138.3357427-775-277679050298585/AnsiballZ_copy.py'
Dec 03 01:08:59 compute-0 sudo[133404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:08:59 compute-0 python3.9[133406]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724138.3357427-775-277679050298585/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:08:59 compute-0 sudo[133404]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:00 compute-0 sudo[133556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjdfqxkimkpktagpnlfhnywgkwksukln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724139.8584023-775-83627938985654/AnsiballZ_stat.py'
Dec 03 01:09:00 compute-0 sudo[133556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:00 compute-0 python3.9[133558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:00 compute-0 sudo[133556]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:00 compute-0 sudo[133679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvqjecejvpzheuomqvnfyjmiinrbhpno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724139.8584023-775-83627938985654/AnsiballZ_copy.py'
Dec 03 01:09:00 compute-0 sudo[133679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:01 compute-0 python3.9[133681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724139.8584023-775-83627938985654/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:01 compute-0 sudo[133679]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:01 compute-0 sudo[133831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvkdvbjcscyskeluirjnwgjeuqkvnbwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724141.324325-775-96386516564849/AnsiballZ_stat.py'
Dec 03 01:09:01 compute-0 sudo[133831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:01 compute-0 python3.9[133833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:01 compute-0 sudo[133831]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:02 compute-0 sudo[133954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcrrqafafdngxixeyhtokupylrdfwjal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724141.324325-775-96386516564849/AnsiballZ_copy.py'
Dec 03 01:09:02 compute-0 sudo[133954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:02 compute-0 python3.9[133956]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724141.324325-775-96386516564849/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:02 compute-0 sudo[133954]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:03 compute-0 sudo[134106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjpcfqlafxdbjvkxzurewxcmdodsentw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724142.8382962-775-136486080407141/AnsiballZ_stat.py'
Dec 03 01:09:03 compute-0 sudo[134106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:03 compute-0 python3.9[134108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:03 compute-0 sudo[134106]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:03 compute-0 sudo[134229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqiqwjylsctrzgbswzeuyvxyrppkowqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724142.8382962-775-136486080407141/AnsiballZ_copy.py'
Dec 03 01:09:03 compute-0 sudo[134229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:04 compute-0 python3.9[134231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724142.8382962-775-136486080407141/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:04 compute-0 sudo[134229]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:04 compute-0 sudo[134381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywbhedjmjjybzfqfwccwfkzlrfefpqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724144.2625206-775-101879579551709/AnsiballZ_stat.py'
Dec 03 01:09:04 compute-0 sudo[134381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:04 compute-0 python3.9[134383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:04 compute-0 sudo[134381]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:05 compute-0 sudo[134504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfzzpzejinnfpjtcwezkmldhtlncqnkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724144.2625206-775-101879579551709/AnsiballZ_copy.py'
Dec 03 01:09:05 compute-0 sudo[134504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:05 compute-0 podman[134506]: 2025-12-03 01:09:05.47859816 +0000 UTC m=+0.150905217 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:09:05 compute-0 python3.9[134507]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724144.2625206-775-101879579551709/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:05 compute-0 sudo[134504]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:06 compute-0 sudo[134682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwapvfixfrmfcfxbvrvzpcwplyeibctc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724145.6870432-775-197444344919079/AnsiballZ_stat.py'
Dec 03 01:09:06 compute-0 sudo[134682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:06 compute-0 python3.9[134684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:06 compute-0 sudo[134682]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:06 compute-0 sudo[134805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psnbntosucmqnfwzltmovpvmclxpvyud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724145.6870432-775-197444344919079/AnsiballZ_copy.py'
Dec 03 01:09:06 compute-0 sudo[134805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:06 compute-0 python3.9[134807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724145.6870432-775-197444344919079/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:06 compute-0 sudo[134805]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:07 compute-0 sudo[134957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geilheqmqimfesninmvxpswpezhbjjcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724147.1335928-775-18919158659108/AnsiballZ_stat.py'
Dec 03 01:09:07 compute-0 sudo[134957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:07 compute-0 python3.9[134959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:07 compute-0 sudo[134957]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:08 compute-0 sudo[135080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enbxmgrrjdbdqvkzpidqtecrwmofjslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724147.1335928-775-18919158659108/AnsiballZ_copy.py'
Dec 03 01:09:08 compute-0 sudo[135080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:08 compute-0 python3.9[135082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724147.1335928-775-18919158659108/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:08 compute-0 sudo[135080]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:08 compute-0 sudo[135232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjziklbheztynekvjfjoibbmvnyyglfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724148.5678654-775-212599326249677/AnsiballZ_stat.py'
Dec 03 01:09:08 compute-0 sudo[135232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:09 compute-0 python3.9[135234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:09 compute-0 sudo[135232]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:09 compute-0 sudo[135355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiuuovlisevtvhyqjufjqesenyfqlpdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724148.5678654-775-212599326249677/AnsiballZ_copy.py'
Dec 03 01:09:09 compute-0 sudo[135355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:09 compute-0 python3.9[135357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724148.5678654-775-212599326249677/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:09 compute-0 sudo[135355]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:10 compute-0 sudo[135507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaprbhjgjxeloiaxpqxymqpqtzmbonup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724150.0798807-775-243042836051934/AnsiballZ_stat.py'
Dec 03 01:09:10 compute-0 sudo[135507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:10 compute-0 python3.9[135509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:10 compute-0 sudo[135507]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:11 compute-0 sudo[135630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nohdxidkalixgwvecohtjjpjxkjnjrvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724150.0798807-775-243042836051934/AnsiballZ_copy.py'
Dec 03 01:09:11 compute-0 sudo[135630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:11 compute-0 python3.9[135632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724150.0798807-775-243042836051934/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:11 compute-0 sudo[135630]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:11 compute-0 sudo[135782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfijawiahxlrnynfkgsnubifncnrjzan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724151.56528-775-33823100341130/AnsiballZ_stat.py'
Dec 03 01:09:11 compute-0 sudo[135782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:12 compute-0 python3.9[135784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:12 compute-0 sudo[135782]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:12 compute-0 sudo[135905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pakcnzrlwjvxkjcsuxmcnwfgzyjfgiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724151.56528-775-33823100341130/AnsiballZ_copy.py'
Dec 03 01:09:12 compute-0 sudo[135905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:12 compute-0 python3.9[135907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724151.56528-775-33823100341130/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:12 compute-0 sudo[135905]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:13 compute-0 python3.9[136057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
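The command task above is an SELinux labeling check: grep exits non-zero, and the task fails, unless at least one entry under /run/libvirt carries a container_*_t type; set -o pipefail additionally surfaces an ls error even when grep does match. Run manually:

    set -o pipefail
    # list everything under /run/libvirt with SELinux contexts and keep only
    # entries whose type field matches container_<something>_t
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'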
Dec 03 01:09:14 compute-0 sudo[136210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytgurewzgiuogeyoempvvqaszsbrvqac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724153.9914613-981-233388816792059/AnsiballZ_seboolean.py'
Dec 03 01:09:14 compute-0 sudo[136210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:14 compute-0 python3.9[136212]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 03 01:09:15 compute-0 sudo[136210]: pam_unix(sudo:session): session closed for user root
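The ansible.posix.seboolean task above (name=os_enable_vtpm, persistent=True, state=True) corresponds to:

    # enable the boolean across reboots, then confirm it took effect
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expected output: os_enable_vtpm --> on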
Dec 03 01:09:16 compute-0 sudo[136366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrcgljvhloakniufxswvknlfesqnruln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724156.1877556-989-44863646611419/AnsiballZ_copy.py'
Dec 03 01:09:16 compute-0 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 03 01:09:16 compute-0 sudo[136366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:16 compute-0 python3.9[136368]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:16 compute-0 sudo[136366]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:17 compute-0 sudo[136518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rprjpyoysacofrszbfinzdkzlygqmafz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724157.027593-989-84987405385652/AnsiballZ_copy.py'
Dec 03 01:09:17 compute-0 sudo[136518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:17 compute-0 python3.9[136520]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:17 compute-0 sudo[136518]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:18 compute-0 sudo[136670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irrjyqskodlkrlovhxnlnvkeldxcjpvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724157.8253736-989-260905762901025/AnsiballZ_copy.py'
Dec 03 01:09:18 compute-0 sudo[136670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:18 compute-0 python3.9[136672]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:18 compute-0 sudo[136670]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:18 compute-0 sudo[136822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fducudsvqbghezjaezvmoxduajnovdtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724158.5849311-989-235846051582813/AnsiballZ_copy.py'
Dec 03 01:09:18 compute-0 sudo[136822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:19 compute-0 python3.9[136824]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:19 compute-0 sudo[136822]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:19 compute-0 sudo[136974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peuqxhymqlyehlmlflnkgasewqfcgreu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724159.416233-989-271359120979056/AnsiballZ_copy.py'
Dec 03 01:09:19 compute-0 sudo[136974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:19 compute-0 python3.9[136976]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:20 compute-0 sudo[136974]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:20 compute-0 sudo[137126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpojexzwkrewblulgslafqgicvojkjmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724160.2400029-1025-133847355015190/AnsiballZ_copy.py'
Dec 03 01:09:20 compute-0 sudo[137126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:20 compute-0 python3.9[137128]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:20 compute-0 sudo[137126]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:21 compute-0 sudo[137278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiftudoirvzwhlpfwncaryzgiobxhpwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724161.0966384-1025-203604208897571/AnsiballZ_copy.py'
Dec 03 01:09:21 compute-0 sudo[137278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:21 compute-0 python3.9[137280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:21 compute-0 sudo[137278]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:22 compute-0 sudo[137430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfikdurrgubcvnobcjfehgrjpbjmbqxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724161.8653877-1025-95885953845688/AnsiballZ_copy.py'
Dec 03 01:09:22 compute-0 sudo[137430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:22 compute-0 python3.9[137432]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:22 compute-0 sudo[137430]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:23 compute-0 sudo[137582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naxhkhhnlsqsfpezyaoqyivjvvwxjzwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724162.7324796-1025-232281126175473/AnsiballZ_copy.py'
Dec 03 01:09:23 compute-0 sudo[137582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:23 compute-0 python3.9[137584]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:23 compute-0 sudo[137582]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:23 compute-0 sudo[137734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiqjscecwzjrlyjmzcxriclgrfsogoyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724163.4824786-1025-5704612490861/AnsiballZ_copy.py'
Dec 03 01:09:23 compute-0 sudo[137734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:24 compute-0 python3.9[137736]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:24 compute-0 sudo[137734]: pam_unix(sudo:session): session closed for user root
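After these copies, /etc/pki/libvirt and /etc/pki/qemu hold the same tls.crt/tls.key/ca.crt material under libvirt's and QEMU's expected file names. A standard consistency check with openssl (paths taken from the logged tasks):

    # the server certificate should chain to the deployed CA
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem
    # a key matches its certificate when both yield the same public key
    openssl x509 -noout -pubkey -in /etc/pki/qemu/server-cert.pem | sha256sum
    openssl pkey -pubout -in /etc/pki/qemu/server-key.pem | sha256sum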
Dec 03 01:09:24 compute-0 sudo[137886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhlvedhxjhweozhmkdfadfefufriwsvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724164.3778014-1061-95981088993347/AnsiballZ_systemd.py'
Dec 03 01:09:24 compute-0 sudo[137886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:25 compute-0 python3.9[137888]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:09:25 compute-0 systemd[1]: Reloading.
Dec 03 01:09:25 compute-0 systemd-sysv-generator[137922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:09:25 compute-0 systemd-rc-local-generator[137918]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:09:25 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 03 01:09:25 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 03 01:09:25 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 03 01:09:25 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 03 01:09:25 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 03 01:09:25 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 03 01:09:25 compute-0 sudo[137886]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:26 compute-0 sudo[138080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hstarduzihanyfzdtufbgrktkmtyxbox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724165.9803793-1061-133249256897952/AnsiballZ_systemd.py'
Dec 03 01:09:26 compute-0 sudo[138080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:26 compute-0 python3.9[138082]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:09:26 compute-0 systemd[1]: Reloading.
Dec 03 01:09:26 compute-0 systemd-sysv-generator[138111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:09:26 compute-0 systemd-rc-local-generator[138106]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:09:27 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 03 01:09:27 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 03 01:09:27 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 03 01:09:27 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 03 01:09:27 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 03 01:09:27 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 03 01:09:27 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 03 01:09:27 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 03 01:09:27 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 03 01:09:27 compute-0 sudo[138080]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:27 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 03 01:09:27 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 03 01:09:27 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 03 01:09:27 compute-0 sudo[138303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiigtpbuolyuqqmgmdulnwqxspgdomhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724167.4060864-1061-187411306274832/AnsiballZ_systemd.py'
Dec 03 01:09:27 compute-0 sudo[138303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:28 compute-0 python3.9[138305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:09:28 compute-0 systemd[1]: Reloading.
Dec 03 01:09:28 compute-0 systemd-rc-local-generator[138332]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:09:28 compute-0 systemd-sysv-generator[138337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:09:28 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 03 01:09:28 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 03 01:09:28 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 03 01:09:28 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 03 01:09:28 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 03 01:09:28 compute-0 setroubleshoot[138118]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 45987a61-d46d-4d61-a1f4-80217b511162
Dec 03 01:09:28 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 03 01:09:28 compute-0 setroubleshoot[138118]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 03 01:09:28 compute-0 sudo[138303]: pam_unix(sudo:session): session closed for user root
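virtlogd is reported started above and the play continues, so the denial appears non-fatal in this run. To inspect it, the commands setroubleshoot prints can be used as logged:

    sealert -l 45987a61-d46d-4d61-a1f4-80217b511162   # full analysis for this event
    ausearch -m avc -ts recent -c virtlogd            # raw AVC records
    # only if the access is judged legitimate, build a local policy module:
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp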
Dec 03 01:09:29 compute-0 sudo[138516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etzptprhypaojitajoxiswujuubgxrbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724168.8058856-1061-39527001163699/AnsiballZ_systemd.py'
Dec 03 01:09:29 compute-0 sudo[138516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:29 compute-0 python3.9[138518]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:09:29 compute-0 systemd[1]: Reloading.
Dec 03 01:09:29 compute-0 systemd-sysv-generator[138547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:09:29 compute-0 systemd-rc-local-generator[138540]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:09:29 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 03 01:09:29 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 03 01:09:29 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 03 01:09:29 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 03 01:09:29 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 03 01:09:29 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 03 01:09:29 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 03 01:09:29 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 03 01:09:29 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 03 01:09:29 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 03 01:09:29 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 03 01:09:29 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 03 01:09:30 compute-0 sudo[138516]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:30 compute-0 sudo[138731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hifmlbzugzbkfralnlscrxoujjzsjwox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724170.1853437-1061-103578353870670/AnsiballZ_systemd.py'
Dec 03 01:09:30 compute-0 sudo[138731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:30 compute-0 python3.9[138733]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:09:30 compute-0 systemd[1]: Reloading.
Dec 03 01:09:31 compute-0 systemd-rc-local-generator[138761]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:09:31 compute-0 systemd-sysv-generator[138765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:09:31 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 03 01:09:31 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 03 01:09:31 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 03 01:09:31 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 03 01:09:31 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 03 01:09:31 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 03 01:09:31 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 01:09:31 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 01:09:31 compute-0 sudo[138731]: pam_unix(sudo:session): session closed for user root
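At this point all five modular libvirt daemons (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) have been restarted together with their socket units. A quick verification from a shell on the node:

    # every service and its main socket should report "active"
    for d in virtlogd virtnodedevd virtproxyd virtqemud virtsecretd; do
        systemctl is-active "$d.service" "$d.socket"
    done
    systemctl list-sockets | grep -E 'virt(logd|nodedevd|proxyd|qemud|secretd)'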
Dec 03 01:09:32 compute-0 sudo[138943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyzvklfzbtnkeacwldycnngipgusxevx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724171.6895514-1098-44859803640728/AnsiballZ_file.py'
Dec 03 01:09:32 compute-0 sudo[138943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:32 compute-0 python3.9[138945]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:32 compute-0 sudo[138943]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:33 compute-0 sudo[139095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmjeejistgxevkwcpkfjlfzmptzpddoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724172.6617773-1106-140519836210829/AnsiballZ_find.py'
Dec 03 01:09:33 compute-0 sudo[139095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:33 compute-0 python3.9[139097]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:09:33 compute-0 sudo[139095]: pam_unix(sudo:session): session closed for user root
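The ansible.builtin.find task above (non-recursive, files only, pattern *.conf) is equivalent to:

    find /var/lib/openstack/config/ceph -maxdepth 1 -type f -name '*.conf'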
Dec 03 01:09:34 compute-0 sudo[139247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnuulexnaeauwotcezbfdesnidomlaly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724173.760765-1120-203147377214047/AnsiballZ_stat.py'
Dec 03 01:09:34 compute-0 sudo[139247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:34 compute-0 python3.9[139249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:34 compute-0 sudo[139247]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:34 compute-0 sudo[139370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soidyjflzarjbomzrbzgvgvznoifuvdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724173.760765-1120-203147377214047/AnsiballZ_copy.py'
Dec 03 01:09:34 compute-0 sudo[139370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:35 compute-0 python3.9[139372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724173.760765-1120-203147377214047/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:35 compute-0 sudo[139370]: pam_unix(sudo:session): session closed for user root
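The YAML content of libvirt.yaml is not reproduced in the log, but the task records its SHA-1 checksum, so the deployed file can be verified directly:

    # should print 5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061
    sha1sum /var/lib/edpm-config/firewall/libvirt.yaml
    cat /var/lib/edpm-config/firewall/libvirt.yaml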
Dec 03 01:09:35 compute-0 sudo[139542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngeobgysujaopjpwxrodqxejxhdktgmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724175.51468-1136-229294567025248/AnsiballZ_file.py'
Dec 03 01:09:35 compute-0 podman[139472]: 2025-12-03 01:09:35.853912165 +0000 UTC m=+0.111474224 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:09:35 compute-0 sudo[139542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:36 compute-0 python3.9[139550]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:36 compute-0 sudo[139542]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:36 compute-0 sudo[139700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsfnjcmkizefjlpijyfmlipbalzmocdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724176.2551708-1144-10585849801725/AnsiballZ_stat.py'
Dec 03 01:09:36 compute-0 sudo[139700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:36 compute-0 python3.9[139702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:36 compute-0 sudo[139700]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:37 compute-0 sudo[139778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lynkmntgkfxbgihuskcsgoajcdpyoske ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724176.2551708-1144-10585849801725/AnsiballZ_file.py'
Dec 03 01:09:37 compute-0 sudo[139778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:37 compute-0 python3.9[139780]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:37 compute-0 sudo[139778]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:37 compute-0 sudo[139930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osjgaydrnqzlcvsjdizoqdqdssndtatp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724177.6306052-1156-131661647127824/AnsiballZ_stat.py'
Dec 03 01:09:37 compute-0 sudo[139930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:38 compute-0 python3.9[139932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:38 compute-0 sudo[139930]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:38 compute-0 sudo[140008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kimrqvwhdrymtqwubckroucasrrbizhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724177.6306052-1156-131661647127824/AnsiballZ_file.py'
Dec 03 01:09:38 compute-0 sudo[140008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:38 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 03 01:09:38 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.017s CPU time.
Dec 03 01:09:38 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 03 01:09:38 compute-0 python3.9[140010]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xj52i7t5 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:38 compute-0 sudo[140008]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:39 compute-0 sudo[140160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlzwrxemdzkqazbdowrwaqjontnyblnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724178.9722238-1168-68571155491958/AnsiballZ_stat.py'
Dec 03 01:09:39 compute-0 sudo[140160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:39 compute-0 python3.9[140162]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:39 compute-0 sudo[140160]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:40 compute-0 sudo[140238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edlvgnecefhjcomvigvvxzprnzmzrjjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724178.9722238-1168-68571155491958/AnsiballZ_file.py'
Dec 03 01:09:40 compute-0 sudo[140238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:40 compute-0 python3.9[140240]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:40 compute-0 sudo[140238]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:40 compute-0 sudo[140390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lidqlsqvdpomqqfytsndqnfbmotsaheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724180.5808349-1181-156577837226305/AnsiballZ_command.py'
Dec 03 01:09:40 compute-0 sudo[140390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:41 compute-0 python3.9[140392]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:09:41 compute-0 sudo[140390]: pam_unix(sudo:session): session closed for user root
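The command task logged just above snapshots the live ruleset as JSON (`nft -j list ruleset`) before any of the EDPM fragments are written. A minimal probe in the same spirit, assuming only that nft is on PATH and the caller has CAP_NET_ADMIN; this is an illustration, not the edpm_ansible source:

    import json
    import subprocess

    # `nft -j list ruleset` prints the current ruleset as JSON;
    # the top-level object holds a single "nftables" array.
    out = subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        check=True, capture_output=True, text=True,
    ).stdout
    ruleset = json.loads(out)
    tables = [e["table"] for e in ruleset["nftables"] if "table" in e]
    print("tables:", [(t["family"], t["name"]) for t in tables])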
Dec 03 01:09:41 compute-0 sudo[140543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgtwfikxwxdxcgebjzwdjrytkceiqnmf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724181.417756-1189-71973919454547/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:09:41 compute-0 sudo[140543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:42 compute-0 python3[140545]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:09:42 compute-0 sudo[140543]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:43 compute-0 sudo[140695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgiyxogiqkkbozctfqakxjjcpzmcjirr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724182.4329662-1197-100524299950017/AnsiballZ_stat.py'
Dec 03 01:09:43 compute-0 sudo[140695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:43 compute-0 python3.9[140697]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:43 compute-0 sudo[140695]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:43 compute-0 sudo[140773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfizqgvviltpftofxybhugllztigyolf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724182.4329662-1197-100524299950017/AnsiballZ_file.py'
Dec 03 01:09:43 compute-0 sudo[140773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:43 compute-0 python3.9[140775]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:43 compute-0 sudo[140773]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:44 compute-0 sudo[140925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-marbvxqjyevbatyqwvowdhwgfrfyawqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724184.036672-1209-11313006205616/AnsiballZ_stat.py'
Dec 03 01:09:44 compute-0 sudo[140925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:44 compute-0 python3.9[140927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:44 compute-0 sudo[140925]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:45 compute-0 sudo[141003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgbpuzcwdsvekqnrltnsgmcdahzxskpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724184.036672-1209-11313006205616/AnsiballZ_file.py'
Dec 03 01:09:45 compute-0 sudo[141003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:45 compute-0 python3.9[141005]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:45 compute-0 sudo[141003]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:45 compute-0 sudo[141155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ackqxptioeywhpomfnbualtgcqschvcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724185.4556634-1221-245922784014574/AnsiballZ_stat.py'
Dec 03 01:09:45 compute-0 sudo[141155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:46 compute-0 python3.9[141157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:46 compute-0 sudo[141155]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:46 compute-0 sudo[141233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvgpttsgrouurfdsevgcgfrfontkzjhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724185.4556634-1221-245922784014574/AnsiballZ_file.py'
Dec 03 01:09:46 compute-0 sudo[141233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:46 compute-0 python3.9[141235]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:46 compute-0 sudo[141233]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:47 compute-0 sudo[141385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnlcqfomkedtbvfmrvmmqvzkgzzdznwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724187.0850654-1233-79458317091697/AnsiballZ_stat.py'
Dec 03 01:09:47 compute-0 sudo[141385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:47 compute-0 python3.9[141387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:47 compute-0 sudo[141385]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:48 compute-0 sudo[141463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskozzeocpuncscbahtjiddvswwulukr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724187.0850654-1233-79458317091697/AnsiballZ_file.py'
Dec 03 01:09:48 compute-0 sudo[141463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:48 compute-0 python3.9[141465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:48 compute-0 sudo[141463]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:48 compute-0 sudo[141615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvazihqtfhrrotqsghmbagaycgkgjfrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724188.476935-1245-49384853960312/AnsiballZ_stat.py'
Dec 03 01:09:48 compute-0 sudo[141615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:49 compute-0 python3.9[141617]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:49 compute-0 sudo[141615]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:49 compute-0 sudo[141740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebulhisjcbmdxqomszklevivqceujoiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724188.476935-1245-49384853960312/AnsiballZ_copy.py'
Dec 03 01:09:49 compute-0 sudo[141740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:49 compute-0 python3.9[141742]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724188.476935-1245-49384853960312/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:49 compute-0 sudo[141740]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:50 compute-0 sudo[141892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtszkzitcresgaoggfgtylntyuxcosch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724190.1511154-1260-132568520076773/AnsiballZ_file.py'
Dec 03 01:09:50 compute-0 sudo[141892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:50 compute-0 python3.9[141894]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:50 compute-0 sudo[141892]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:51 compute-0 sudo[142044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cklopwqhtcxrjitsgcqtkddhtluiqusx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724190.9341767-1268-163914127637540/AnsiballZ_command.py'
Dec 03 01:09:51 compute-0 sudo[142044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:51 compute-0 python3.9[142046]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:09:51 compute-0 sudo[142044]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:52 compute-0 sudo[142199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcfiaqetkzrjekdjmubsflwxjfurntmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724191.8075259-1276-123796111928012/AnsiballZ_blockinfile.py'
Dec 03 01:09:52 compute-0 sudo[142199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:52 compute-0 python3.9[142201]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:52 compute-0 sudo[142199]: pam_unix(sudo:session): session closed for user root
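Given the parameters in the blockinfile invocation above (marker `# {mark} ANSIBLE MANAGED BLOCK` with begin/end words `BEGIN`/`END`, plus the four include lines), the managed block in /etc/sysconfig/nftables.conf should come out as below; note the whole file is validated with `nft -c -f %s` before being swapped into place:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK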
Dec 03 01:09:53 compute-0 sudo[142351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvbkiapmsqvdzqcyqjlazrznfrmwiuxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724192.8468785-1285-170552932482522/AnsiballZ_command.py'
Dec 03 01:09:53 compute-0 sudo[142351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:53 compute-0 python3.9[142353]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:09:53 compute-0 sudo[142351]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:54 compute-0 sudo[142504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voxdteyuepwswxawrwrgkpnjqqlqezgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724193.7039611-1293-123196254917029/AnsiballZ_stat.py'
Dec 03 01:09:54 compute-0 sudo[142504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:54 compute-0 python3.9[142506]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:09:54 compute-0 sudo[142504]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:54 compute-0 sudo[142658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqwgdrbqqxvjsburvagqqcticdsixqee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724194.518829-1301-232687611431598/AnsiballZ_command.py'
Dec 03 01:09:54 compute-0 sudo[142658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:55 compute-0 python3.9[142660]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:09:55 compute-0 sudo[142658]: pam_unix(sudo:session): session closed for user root
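The entries from 01:09:50 through 01:09:55 trace a validate-then-apply pattern gated by a marker file: edpm-rules.nft.changed is touched when the rules file is rewritten (01:09:50), all five fragments are parse-checked with `nft -c -f -` (01:09:51), the chain definitions are loaded on their own (01:09:53), and only while the marker exists are the flush/rules/update-jump fragments actually applied (01:09:55) before the marker is removed. A compact sketch of that sequence, with paths taken from the logged commands (illustration only, not the role's implementation):

    import os
    import subprocess

    NFT_DIR = "/etc/nftables"
    CHECK = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
             "edpm-update-jumps.nft", "edpm-jumps.nft"]
    APPLY = ["edpm-flushes.nft", "edpm-rules.nft", "edpm-update-jumps.nft"]
    MARKER = os.path.join(NFT_DIR, "edpm-rules.nft.changed")

    def feed_nft(names, check_only=False):
        """Concatenate fragments and pipe them to nft on stdin.
        With -c nft only parses; without it the rules are installed."""
        blob = b"".join(
            open(os.path.join(NFT_DIR, n), "rb").read() for n in names
        )
        args = ["nft", "-c", "-f", "-"] if check_only else ["nft", "-f", "-"]
        subprocess.run(args, input=blob, check=True)

    feed_nft(CHECK, check_only=True)                  # dry-run parse
    subprocess.run(["nft", "-f",
                    os.path.join(NFT_DIR, "edpm-chains.nft")], check=True)
    if os.path.exists(MARKER):                        # only if rules changed
        feed_nft(APPLY)
        os.remove(MARKER)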
Dec 03 01:09:55 compute-0 sudo[142813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrfjpsfdvxcnspzuicwufnptzvzuxth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724195.406537-1309-12568146801121/AnsiballZ_file.py'
Dec 03 01:09:55 compute-0 sudo[142813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:55 compute-0 python3.9[142815]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:56 compute-0 sudo[142813]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:56 compute-0 sudo[142965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhdeepiirprgvnlpyfvcubhzgavtxzzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724196.3381162-1317-173719425326966/AnsiballZ_stat.py'
Dec 03 01:09:56 compute-0 sudo[142965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:56 compute-0 python3.9[142967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:57 compute-0 sudo[142965]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:57 compute-0 sudo[143088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvqdiomllxggebhyftmixdwyzyawlnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724196.3381162-1317-173719425326966/AnsiballZ_copy.py'
Dec 03 01:09:57 compute-0 sudo[143088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:57 compute-0 python3.9[143090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724196.3381162-1317-173719425326966/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:57 compute-0 sudo[143088]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:58 compute-0 sudo[143240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzffcplnbfegxdezujpsrajmfyyfxujn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724197.9343765-1332-13288238363288/AnsiballZ_stat.py'
Dec 03 01:09:58 compute-0 sudo[143240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:58 compute-0 python3.9[143242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:09:58 compute-0 sudo[143240]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:59 compute-0 sudo[143363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iryofbpmilelcreyppwqaqlpyvjjoakm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724197.9343765-1332-13288238363288/AnsiballZ_copy.py'
Dec 03 01:09:59 compute-0 sudo[143363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:09:59 compute-0 python3.9[143365]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724197.9343765-1332-13288238363288/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:09:59 compute-0 sudo[143363]: pam_unix(sudo:session): session closed for user root
Dec 03 01:09:59 compute-0 sudo[143515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgcrgouqktzvmojuleunvqqgmvhdhhit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724199.5368965-1347-164659593333833/AnsiballZ_stat.py'
Dec 03 01:09:59 compute-0 sudo[143515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:00 compute-0 python3.9[143517]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:00 compute-0 sudo[143515]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:00 compute-0 sudo[143638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzyrudirrxuyjnrmijpkzhjrrzdvntku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724199.5368965-1347-164659593333833/AnsiballZ_copy.py'
Dec 03 01:10:00 compute-0 sudo[143638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:00 compute-0 python3.9[143640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724199.5368965-1347-164659593333833/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:00 compute-0 sudo[143638]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:01 compute-0 sudo[143790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxozshzlxyzprsvyqbpmtssayiabpjss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724201.064578-1362-28437790946977/AnsiballZ_systemd.py'
Dec 03 01:10:01 compute-0 sudo[143790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:01 compute-0 python3.9[143792]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:10:01 compute-0 systemd[1]: Reloading.
Dec 03 01:10:01 compute-0 systemd-sysv-generator[143822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:10:01 compute-0 systemd-rc-local-generator[143819]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:10:02 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 03 01:10:02 compute-0 sudo[143790]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:02 compute-0 sudo[143981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozhydvvqejnzmgpurcbdigpneopazqbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724202.3662553-1370-19639399729822/AnsiballZ_systemd.py'
Dec 03 01:10:02 compute-0 sudo[143981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:03 compute-0 python3.9[143983]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 03 01:10:03 compute-0 systemd[1]: Reloading.
Dec 03 01:10:03 compute-0 systemd-rc-local-generator[144008]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:10:03 compute-0 systemd-sysv-generator[144014]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:10:03 compute-0 systemd[1]: Reloading.
Dec 03 01:10:03 compute-0 systemd-sysv-generator[144052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:10:03 compute-0 systemd-rc-local-generator[144048]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:10:03 compute-0 sudo[143981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:04 compute-0 sshd-session[89736]: Connection closed by 192.168.122.30 port 33360
Dec 03 01:10:04 compute-0 sshd-session[89733]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:10:04 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Dec 03 01:10:04 compute-0 systemd[1]: session-20.scope: Consumed 4min 4.020s CPU time.
Dec 03 01:10:04 compute-0 systemd-logind[800]: Session 20 logged out. Waiting for processes to exit.
Dec 03 01:10:04 compute-0 systemd-logind[800]: Removed session 20.
Dec 03 01:10:06 compute-0 podman[144080]: 2025-12-03 01:10:06.94636104 +0000 UTC m=+0.197219202 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:10:10 compute-0 sshd-session[144107]: Accepted publickey for zuul from 192.168.122.30 port 44202 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:10:10 compute-0 systemd-logind[800]: New session 21 of user zuul.
Dec 03 01:10:10 compute-0 systemd[1]: Started Session 21 of User zuul.
Dec 03 01:10:10 compute-0 sshd-session[144107]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:10:11 compute-0 python3.9[144260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:10:13 compute-0 sudo[144414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohfgmldtqjfmeshfdzxzpkttapvmvjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724212.6640065-36-131039367583120/AnsiballZ_systemd_service.py'
Dec 03 01:10:13 compute-0 sudo[144414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:13 compute-0 python3.9[144416]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:10:13 compute-0 systemd[1]: Reloading.
Dec 03 01:10:13 compute-0 systemd-sysv-generator[144446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:10:13 compute-0 systemd-rc-local-generator[144443]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:10:14 compute-0 sudo[144414]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:15 compute-0 python3.9[144600]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:10:15 compute-0 network[144617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:10:15 compute-0 network[144618]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:10:15 compute-0 network[144619]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:10:20 compute-0 sudo[144889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpybwkxdnawbpxcokiczoauzamgaubft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724220.5965111-55-188316871923323/AnsiballZ_systemd_service.py'
Dec 03 01:10:20 compute-0 sudo[144889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:21 compute-0 python3.9[144891]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:10:21 compute-0 sudo[144889]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:22 compute-0 sudo[145042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xleweghfkamhvhqaatcjgbqqcsrivzcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724221.6357794-65-216930968205741/AnsiballZ_file.py'
Dec 03 01:10:22 compute-0 sudo[145042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:22 compute-0 python3.9[145044]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:22 compute-0 sudo[145042]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:23 compute-0 sudo[145194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arlquyhwaqgfugnniqnlnojodbebgzak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724222.6770148-73-281471671862818/AnsiballZ_file.py'
Dec 03 01:10:23 compute-0 sudo[145194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:23 compute-0 python3.9[145196]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:23 compute-0 sudo[145194]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:24 compute-0 sudo[145346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyfdlrcjqidspqnansiehuvpgqkuyjwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724223.5746553-82-217141005425889/AnsiballZ_command.py'
Dec 03 01:10:24 compute-0 sudo[145346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:24 compute-0 python3.9[145348]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:10:24 compute-0 sudo[145346]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:25 compute-0 python3.9[145500]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:10:25 compute-0 sudo[145650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maccvzsmidruzhoouykeukoteqoexctv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724225.5719402-100-132750207502351/AnsiballZ_systemd_service.py'
Dec 03 01:10:25 compute-0 sudo[145650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:26 compute-0 python3.9[145652]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:10:26 compute-0 systemd[1]: Reloading.
Dec 03 01:10:26 compute-0 systemd-rc-local-generator[145678]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:10:26 compute-0 systemd-sysv-generator[145683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:10:26 compute-0 sudo[145650]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:27 compute-0 sudo[145838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvocykqcolunwdlbpchoetouskxzpzue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724226.8375764-108-206454673910631/AnsiballZ_command.py'
Dec 03 01:10:27 compute-0 sudo[145838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:27 compute-0 python3.9[145840]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:10:27 compute-0 sudo[145838]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:28 compute-0 sudo[145991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxlemnupxfxvbfanqfmplaprwaphgxdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724227.69223-117-248736024525760/AnsiballZ_file.py'
Dec 03 01:10:28 compute-0 sudo[145991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:28 compute-0 python3.9[145993]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:10:28 compute-0 sudo[145991]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:29 compute-0 python3.9[146143]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:10:30 compute-0 python3.9[146295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:30 compute-0 python3.9[146416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724229.4501264-133-10094918554559/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:10:31 compute-0 sudo[146566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyyhjgorbvflnrjeuyzfcfnlcwuvgwgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724231.2365727-148-159209685788372/AnsiballZ_group.py'
Dec 03 01:10:31 compute-0 sudo[146566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:31 compute-0 python3.9[146568]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 03 01:10:32 compute-0 sudo[146566]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:32 compute-0 sudo[146718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oygrslijtzzrvtbhlzlzzbwvvnemvrwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724232.3764215-159-99809019331418/AnsiballZ_getent.py'
Dec 03 01:10:32 compute-0 sudo[146718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:33 compute-0 python3.9[146720]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 03 01:10:33 compute-0 sudo[146718]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:33 compute-0 sudo[146871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxtnubqpqfhgovpzcegiueaiefvpabno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724233.4843524-167-138169175751072/AnsiballZ_group.py'
Dec 03 01:10:33 compute-0 sudo[146871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:34 compute-0 python3.9[146873]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 03 01:10:34 compute-0 groupadd[146874]: group added to /etc/group: name=ceilometer, GID=42405
Dec 03 01:10:34 compute-0 groupadd[146874]: group added to /etc/gshadow: name=ceilometer
Dec 03 01:10:34 compute-0 groupadd[146874]: new group: name=ceilometer, GID=42405
Dec 03 01:10:34 compute-0 sudo[146871]: pam_unix(sudo:session): session closed for user root
Dec 03 01:10:34 compute-0 sudo[147029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brfbjrbfncaedrowvnfkteepbqtgynza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724234.3933415-175-273598053381011/AnsiballZ_user.py'
Dec 03 01:10:34 compute-0 sudo[147029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:10:35 compute-0 python3.9[147031]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 03 01:10:35 compute-0 useradd[147033]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Dec 03 01:10:35 compute-0 useradd[147033]: add 'ceilometer' to group 'libvirt'
Dec 03 01:10:35 compute-0 useradd[147033]: add 'ceilometer' to shadow group 'libvirt'
Dec 03 01:10:35 compute-0 sudo[147029]: pam_unix(sudo:session): session closed for user root
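The groupadd/useradd entries above pin the ceilometer account to uid/gid 42405 with a nologin shell and supplementary membership in libvirt, presumably so the compute polling agent can reach the libvirt socket. A quick verification sketch using only the Python standard library:

    import grp
    import pwd

    u = pwd.getpwnam("ceilometer")
    assert (u.pw_uid, u.pw_gid) == (42405, 42405)
    assert u.pw_shell == "/sbin/nologin"
    # Supplementary membership is recorded on the group side.
    assert "ceilometer" in grp.getgrnam("libvirt").gr_mem
    print("ceilometer account matches the logged useradd")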
Dec 03 01:10:36 compute-0 python3.9[147189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:37 compute-0 podman[147284]: 2025-12-03 01:10:37.254893422 +0000 UTC m=+0.141503633 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 03 01:10:37 compute-0 python3.9[147323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724236.1263049-201-45576077124287/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:38 compute-0 python3.9[147487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:38 compute-0 python3.9[147608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724237.563031-201-26509365525789/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:39 compute-0 python3.9[147758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:40 compute-0 python3.9[147879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724239.158092-201-64655868279686/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:41 compute-0 python3.9[148029]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:10:41 compute-0 python3.9[148181]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:10:42 compute-0 python3.9[148333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:43 compute-0 python3.9[148454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724242.2372503-260-94435110314242/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:44 compute-0 python3.9[148604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:44 compute-0 python3.9[148680]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:45 compute-0 python3.9[148830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:46 compute-0 python3.9[148951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724245.0575614-260-268496560557249/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:47 compute-0 python3.9[149101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:47 compute-0 python3.9[149222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724246.6965408-260-227293744136339/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:48 compute-0 python3.9[149372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:49 compute-0 python3.9[149493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724248.206519-260-135852601045186/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:50 compute-0 python3.9[149643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:50 compute-0 python3.9[149764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724249.5579848-260-220429034871979/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:51 compute-0 python3.9[149914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:52 compute-0 python3.9[150035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724251.1041336-260-147737679316964/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:53 compute-0 python3.9[150185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:53 compute-0 python3.9[150306]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724252.455809-260-162574920611845/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:54 compute-0 python3.9[150456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:55 compute-0 python3.9[150577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724253.8739967-260-188606570879260/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:55 compute-0 python3.9[150727]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:56 compute-0 python3.9[150848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724255.3754954-260-107486637468523/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:57 compute-0 python3.9[150998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:58 compute-0 python3.9[151119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724256.8346415-260-79351533927252/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:10:59 compute-0 python3.9[151269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:10:59 compute-0 python3.9[151345]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:00 compute-0 python3.9[151495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:11:00 compute-0 python3.9[151571]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:01 compute-0 python3.9[151721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:11:02 compute-0 python3.9[151797]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
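
[Annotation] Each ansible.legacy.stat call in the block above reports a SHA-1 digest (checksum_algorithm=sha1) that the follow-up copy or file task compares against the rendered .j2 template to decide whether the file changed. A minimal sketch of the same computation (the path is one of the files from the log):

    import hashlib

    def sha1_of(path, chunk=65536):
        """Stream a file and return its SHA-1 hex digest, essentially what
        ansible.legacy.stat does when get_checksum=True."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    print(sha1_of("/var/lib/openstack/config/telemetry/node_exporter.yaml"))
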
Dec 03 01:11:03 compute-0 sudo[151947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmgevpagbsjvqyuwdyvhvmbkkvustqqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724262.7764175-449-210522619538816/AnsiballZ_file.py'
Dec 03 01:11:03 compute-0 sudo[151947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:03 compute-0 python3.9[151949]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:03 compute-0 sudo[151947]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:04 compute-0 sudo[152099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpxjbbejskrcreiqpgmmdtgzfemrirt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724263.6348195-457-264354653276127/AnsiballZ_file.py'
Dec 03 01:11:04 compute-0 sudo[152099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:04 compute-0 python3.9[152101]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:04 compute-0 sudo[152099]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:04 compute-0 sudo[152251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqxwfglbeyogbrmhwlhqfplshkfjvqiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724264.4648771-465-131888637322561/AnsiballZ_file.py'
Dec 03 01:11:04 compute-0 sudo[152251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:05 compute-0 python3.9[152253]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:11:05 compute-0 sudo[152251]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:05 compute-0 sudo[152403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hioiwlxrwzqydfsxlmyovnyukqahpmhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724265.295493-473-132605957507694/AnsiballZ_systemd_service.py'
Dec 03 01:11:05 compute-0 sudo[152403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:05 compute-0 python3.9[152405]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:11:06 compute-0 systemd[1]: Reloading.
Dec 03 01:11:06 compute-0 systemd-rc-local-generator[152434]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:11:06 compute-0 systemd-sysv-generator[152438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:11:06 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 03 01:11:06 compute-0 sudo[152403]: pam_unix(sudo:session): session closed for user root
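
[Annotation] With podman.socket enabled and started ("Listening on Podman API Socket."), the Podman REST API is reachable over a unix socket, conventionally /run/podman/podman.sock for the system scope used here. A minimal sketch that pings it with a raw HTTP request (the _ping endpoint is the cheapest liveness check in the Docker-compatible API; the socket path is an assumption based on the system scope):

    import socket

    SOCK = "/run/podman/podman.sock"  # system-scope Podman API socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    # HTTP/1.0 so the server closes the connection after responding.
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: d\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()
    print(reply.decode(errors="replace"))  # expect a 200 with body "OK"
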
Dec 03 01:11:07 compute-0 sudo[152594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjrtxyfkslqeapclgalokkndudslxnjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/AnsiballZ_stat.py'
Dec 03 01:11:07 compute-0 sudo[152594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:07 compute-0 python3.9[152596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:11:07 compute-0 sudo[152594]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:07 compute-0 sudo[152737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvusqushzujkwsuanjazummcfrjufzfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/AnsiballZ_copy.py'
Dec 03 01:11:07 compute-0 sudo[152737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:07 compute-0 podman[152684]: 2025-12-03 01:11:07.853171533 +0000 UTC m=+0.112522562 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:11:08 compute-0 python3.9[152744]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:11:08 compute-0 sudo[152737]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:08 compute-0 sudo[152820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hymyqffpfrfmgqtwdaeyjzkcclazrxeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/AnsiballZ_stat.py'
Dec 03 01:11:08 compute-0 sudo[152820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:08 compute-0 python3.9[152822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:11:08 compute-0 sudo[152820]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:09 compute-0 sudo[152943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhdupjmbtesozclygzpprbpcfplvnadm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/AnsiballZ_copy.py'
Dec 03 01:11:09 compute-0 sudo[152943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:09 compute-0 python3.9[152945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:11:09 compute-0 sudo[152943]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:10 compute-0 sudo[153095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxubepqgekkerbgtyhqfumglscatiho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724269.6687343-510-186255088015842/AnsiballZ_container_config_data.py'
Dec 03 01:11:10 compute-0 sudo[153095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:10 compute-0 python3.9[153097]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 03 01:11:10 compute-0 sudo[153095]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:11 compute-0 sudo[153247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgweekashuyklocsmvucqmjheyhsdzds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724270.7826028-519-50095639967595/AnsiballZ_container_config_hash.py'
Dec 03 01:11:11 compute-0 sudo[153247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:11 compute-0 python3.9[153249]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:11:11 compute-0 sudo[153247]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:12 compute-0 sudo[153399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plztsmexolkvfjuwxygzxwakjwuljnsd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724272.0998468-529-45139544699627/AnsiballZ_edpm_container_manage.py'
Dec 03 01:11:12 compute-0 sudo[153399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:13 compute-0 python3[153401]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:11:27 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 03 01:11:28 compute-0 podman[153413]: 2025-12-03 01:11:28.476004735 +0000 UTC m=+15.338801640 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 03 01:11:28 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 03 01:11:28 compute-0 podman[153557]: 2025-12-03 01:11:28.713208496 +0000 UTC m=+0.074717093 container create 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 03 01:11:28 compute-0 podman[153557]: 2025-12-03 01:11:28.674822642 +0000 UTC m=+0.036331289 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 03 01:11:28 compute-0 python3[153401]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec 03 01:11:28 compute-0 sudo[153399]: pam_unix(sudo:session): session closed for user root
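
[Annotation] The PODMAN-CONTAINER-DEBUG line above shows edpm_container_manage flattening the JSON config_data into a single podman create invocation. A simplified, hypothetical sketch of that mapping, covering only the keys visible in this log (flag names follow the debug line, not the module source):

    def podman_create_args(name, cfg):
        """Flatten an EDPM/kolla-style container definition into podman-create
        argv. Hypothetical reduction of what edpm_container_manage logs above;
        the real module handles many more keys and adds labels/pidfile flags."""
        args = ["podman", "create", "--name", name]
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("security_opt"):
            args += ["--security-opt", cfg["security_opt"]]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if cfg.get("command"):
            args.append(cfg["command"])
        return args

Feeding this the config_data dict from the create event reproduces the shape of the logged command: env flags first, then healthcheck, network, security-opt, user, one --volume per bind mount, and finally the image and kolla_start.
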
Dec 03 01:11:29 compute-0 sudo[153747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saxsbzqtzfzfcketcqvtabdvayfaibgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724289.1621778-537-36640336509788/AnsiballZ_stat.py'
Dec 03 01:11:29 compute-0 sudo[153747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:29 compute-0 python3.9[153749]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:11:29 compute-0 sudo[153747]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:30 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 03 01:11:30 compute-0 sudo[153903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eieyjnuxhubtgtiooasyxfzsecltltao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724290.1381361-546-166259079750072/AnsiballZ_file.py'
Dec 03 01:11:30 compute-0 sudo[153903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:30 compute-0 python3.9[153905]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:30 compute-0 sudo[153903]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:31 compute-0 sudo[154054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqrmtqdsgynfnyojcbkhqsujtkpwmgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724290.8544693-546-241107542865356/AnsiballZ_copy.py'
Dec 03 01:11:31 compute-0 sudo[154054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:31 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 01:11:31 compute-0 python3.9[154057]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724290.8544693-546-241107542865356/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:31 compute-0 sudo[154054]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:32 compute-0 sudo[154131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgspiixmfgvqoxiehkwtreldqnqbuzay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724290.8544693-546-241107542865356/AnsiballZ_systemd.py'
Dec 03 01:11:32 compute-0 sudo[154131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:32 compute-0 sshd-session[153624]: Connection closed by authenticating user operator 80.94.95.116 port 43402 [preauth]
Dec 03 01:11:32 compute-0 python3.9[154133]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:11:32 compute-0 systemd[1]: Reloading.
Dec 03 01:11:32 compute-0 systemd-sysv-generator[154165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:11:32 compute-0 systemd-rc-local-generator[154162]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:11:32 compute-0 sudo[154131]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:33 compute-0 sudo[154243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqmlucvaeoibhlpsdmielniyaqowsqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724290.8544693-546-241107542865356/AnsiballZ_systemd.py'
Dec 03 01:11:33 compute-0 sudo[154243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:33 compute-0 python3.9[154245]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:11:33 compute-0 systemd[1]: Reloading.
Dec 03 01:11:33 compute-0 systemd-rc-local-generator[154271]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:11:33 compute-0 systemd-sysv-generator[154276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:11:34 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 03 01:11:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec 03 01:11:34 compute-0 podman[154285]: 2025-12-03 01:11:34.303196733 +0000 UTC m=+0.219371316 container init 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm)
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + sudo -E kolla_set_configs
Dec 03 01:11:34 compute-0 sudo[154306]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: sudo: unable to send audit message: Operation not permitted
Dec 03 01:11:34 compute-0 sudo[154306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:11:34 compute-0 podman[154285]: 2025-12-03 01:11:34.351481347 +0000 UTC m=+0.267655870 container start 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 03 01:11:34 compute-0 podman[154285]: ceilometer_agent_compute
Dec 03 01:11:34 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 03 01:11:34 compute-0 sudo[154243]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Validating config file
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Copying service configuration files
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: INFO:__main__:Writing out command to execute
Dec 03 01:11:34 compute-0 sudo[154306]: pam_unix(sudo:session): session closed for user root
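
[Annotation] The kolla_set_configs run above is driven by /var/lib/kolla/config_files/config.json, bind-mounted from ceilometer-agent-compute.json in the volume list; each "Copying ... / Setting permission ..." pair corresponds to one config_files entry, and "command" is what kolla_start later echoes and execs. A minimal reader sketch (field names are inferred from the logged behavior, so treat the schema as an assumption):

    import json

    # Bind-mount target seen in the container's volume list above.
    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    # "command" is written to /run_command and echoed as "Running command: ...".
    print("command:", cfg.get("command"))

    # Each entry drives one Copying/Setting-permission pair in the log.
    for entry in cfg.get("config_files", []):
        print(entry.get("source"), "->", entry.get("dest"),
              "owner=", entry.get("owner"), "perm=", entry.get("perm"))
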
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: ++ cat /run_command
Dec 03 01:11:34 compute-0 podman[154307]: 2025-12-03 01:11:34.445779158 +0000 UTC m=+0.078676468 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + ARGS=
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + sudo kolla_copy_cacerts
Dec 03 01:11:34 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-597a351873c464a5.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:11:34 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-597a351873c464a5.service: Failed with result 'exit-code'.
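
[Annotation] The exit-code failure above is the transient one-shot healthcheck unit firing while the service is still initializing; the corresponding podman event reports health_status=starting with health_failing_streak=1 rather than unhealthy. A quick way to watch the status converge is to poll podman inspect (a sketch; the Go template path is {{.State.Health.Status}} on recent Podman, but older releases used .State.Healthcheck):

    import subprocess

    CONTAINER = "ceilometer_agent_compute"

    def health_status(name):
        """Return the container's health state via podman inspect."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()  # e.g. "starting", "healthy", "unhealthy"

    print(health_status(CONTAINER))
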
Dec 03 01:11:34 compute-0 sudo[154333]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: sudo: unable to send audit message: Operation not permitted
Dec 03 01:11:34 compute-0 sudo[154333]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:11:34 compute-0 sudo[154333]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + [[ ! -n '' ]]
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + . kolla_extend_start
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + umask 0022
Dec 03 01:11:34 compute-0 ceilometer_agent_compute[154300]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 03 01:11:35 compute-0 sudo[154482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idtvhpakzfywswcgmeiqspsuxqidrrnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724294.6461363-570-133983888833267/AnsiballZ_systemd.py'
Dec 03 01:11:35 compute-0 sudo[154482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.254 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.254 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.254 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.273 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.274 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.274 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.293 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
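[Annotation] The banner above closes an oslo.config option dump: when a worker starts, cotyledon's oslo_config_glue calls log_opt_values(), which prints the banner, the configuration sources, and one DEBUG line per effective option, with options registered as secret (for example service_credentials.password or coordination.backend_url) masked as ****. A minimal sketch of the mechanism, using a hypothetical two-option group rather than ceilometer's real option set:

```python
# Sketch only: reproduce the style of dump seen above with oslo.config.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts([
    cfg.StrOpt('auth_type', default='password'),
    # secret=True is why the journal shows '****' instead of the value.
    cfg.StrOpt('password', secret=True),
], group='service_credentials')

CONF([])  # the real agent also passes --config-file /etc/ceilometer/ceilometer.conf
CONF.log_opt_values(LOG, logging.DEBUG)  # banner, sources, one line per option
```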
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.293 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.294 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.296 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.297 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 03 01:11:35 compute-0 python3.9[154484]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:11:35 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.469 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.469 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 03 01:11:35 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 03 01:11:35 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.571 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.571 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.580 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.582 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.582 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.582 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
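[Annotation] Worth noting in this second dump: although the top-level enable_prometheus_exporter is False, the [polling] group overrides it with polling.enable_prometheus_exporter = True, polling.prometheus_listen_addresses = ['[::]:9101'], and TLS enabled via the cert/key under /etc/ceilometer/tls, so this agent serves samples as Prometheus metrics over HTTPS on port 9101. A quick client-side probe, assuming the conventional /metrics path and the CA bundle path that appears in the container volume list further down in this log (neither is confirmed here as a fixed ceilometer default):

```python
# Probe the TLS-enabled exporter configured above. The host name and CA
# bundle path are assumptions taken from elsewhere in this log.
import requests

CA_BUNDLE = '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem'
URL = 'https://compute-0.ctlplane.example.com:9101/metrics'

resp = requests.get(URL, verify=CA_BUNDLE, timeout=10)
resp.raise_for_status()
print(resp.text[:400])  # first few exported samples
```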
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.808 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.808 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec 03 01:11:35 compute-0 virtqemud[154511]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 03 01:11:35 compute-0 virtqemud[154511]: hostname: compute-0
Dec 03 01:11:35 compute-0 virtqemud[154511]: End of file while reading data: Input/output error
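[Annotation] The virtqemud messages above are a side effect of the shutdown race visible earlier: the polling worker (pid 14) was opening its libvirt connection to qemu:///system just as SIGTERM arrived; the connection attempt socket-activated the libvirt QEMU daemon, and the dying client then dropped the half-open connection, which virtqemud reports as an EOF/I-O error. A minimal sketch of the kind of connection the agent was making, using libvirt-python (an illustration, not ceilometer's exact new_libvirt_connection code):

```python
# Open a read-only connection to the system QEMU driver, as a compute
# pollster would, and enumerate domains (requires libvirt-python).
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
try:
    print(conn.getHostname())          # e.g. 'compute-0'
    for dom in conn.listAllDomains():  # running and defined guests
        print(dom.UUIDString(), dom.name())
finally:
    conn.close()
```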
Dec 03 01:11:35 compute-0 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.821 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
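[Annotation] That completes cotyledon's shutdown protocol: the master (pid 2) catches SIGTERM, re-signals each worker (AgentHeartBeatManager [12], AgentManager [14]), waits for them to terminate, and logs "Shutdown finish". A minimal cotyledon service that produces the same lifecycle messages; the class name and worker count are illustrative, not ceilometer's real classes:

```python
# Sketch of the master/worker lifecycle logged above (pip install cotyledon).
import time

import cotyledon


class DemoAgent(cotyledon.Service):
    def __init__(self, worker_id):
        super().__init__(worker_id)
        self._alive = True

    def run(self):
        # Corresponds to cotyledon._service 'Run service DemoAgent(0) [pid]'.
        while self._alive:
            time.sleep(1)

    def terminate(self):
        # Called on SIGTERM: 'Caught SIGTERM signal, graceful exiting ...'.
        self._alive = False


if __name__ == '__main__':
    sm = cotyledon.ServiceManager()
    sm.add(DemoAgent, workers=1)
    sm.run()  # on SIGTERM the manager kills workers, waits, then 'Shutdown finish'
```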
Dec 03 01:11:36 compute-0 systemd[1]: libpod-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:11:36 compute-0 systemd[1]: libpod-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Consumed 1.537s CPU time.
Dec 03 01:11:36 compute-0 podman[154496]: 2025-12-03 01:11:36.012749408 +0000 UTC m=+0.599149378 container died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 03 01:11:36 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-597a351873c464a5.timer: Deactivated successfully.
Dec 03 01:11:36 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec 03 01:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-userdata-shm.mount: Deactivated successfully.
Dec 03 01:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121-merged.mount: Deactivated successfully.
Dec 03 01:11:38 compute-0 podman[154549]: 2025-12-03 01:11:38.395174526 +0000 UTC m=+0.149954450 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Dec 03 01:11:39 compute-0 podman[154496]: 2025-12-03 01:11:39.501317065 +0000 UTC m=+4.087717035 container cleanup 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 01:11:39 compute-0 podman[154496]: ceilometer_agent_compute
Dec 03 01:11:39 compute-0 podman[154576]: ceilometer_agent_compute
Dec 03 01:11:39 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 03 01:11:39 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec 03 01:11:39 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 03 01:11:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
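[Editor's note: the kernel's "supports timestamps until 2038" messages above mean the XFS filesystem backing the container overlay was created without the `bigtime` feature, so its inode timestamps hit the y2038 limit. A small sketch for checking a mount — assuming `xfs_info` from xfsprogs is installed and that recent versions print a `bigtime=0|1` flag in the meta-data section, which is the behavior I'd expect but have not verified for every release:]

```python
#!/usr/bin/env python3
"""Check whether an XFS filesystem has the bigtime feature (post-2038 timestamps).

Sketch only: assumes `xfs_info` (xfsprogs) is installed and reports a
`bigtime=0|1` flag, as recent versions do.
"""
import re
import subprocess
import sys

def has_bigtime(mountpoint: str) -> bool:
    out = subprocess.run(["xfs_info", mountpoint],
                         check=True, capture_output=True, text=True).stdout
    m = re.search(r"bigtime=(\d)", out)
    if not m:
        raise RuntimeError("no bigtime flag reported; xfsprogs too old?")
    return m.group(1) == "1"

if __name__ == "__main__":
    mnt = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/containers"
    print(f"{mnt}: bigtime={'yes' if has_bigtime(mnt) else 'no (timestamps end in 2038)'}")
```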
Dec 03 01:11:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec 03 01:11:39 compute-0 podman[154589]: 2025-12-03 01:11:39.803959097 +0000 UTC m=+0.159214272 container init 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + sudo -E kolla_set_configs
Dec 03 01:11:39 compute-0 podman[154589]: 2025-12-03 01:11:39.837119923 +0000 UTC m=+0.192375048 container start 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:11:39 compute-0 sudo[154611]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: sudo: unable to send audit message: Operation not permitted
Dec 03 01:11:39 compute-0 podman[154589]: ceilometer_agent_compute
Dec 03 01:11:39 compute-0 sudo[154611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:11:39 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 03 01:11:39 compute-0 sudo[154482]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Validating config file
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Copying service configuration files
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: INFO:__main__:Writing out command to execute
Dec 03 01:11:39 compute-0 sudo[154611]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:39 compute-0 podman[154612]: 2025-12-03 01:11:39.933046073 +0000 UTC m=+0.078263265 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: ++ cat /run_command
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + ARGS=
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + sudo kolla_copy_cacerts
Dec 03 01:11:39 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:11:39 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Failed with result 'exit-code'.
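[Editor's note: the failed `7418...-2d98b0e3ac0f54e5.service` above is the transient unit systemd spawns for `podman healthcheck run`; it exits 1 here because the container is still mid-start (the later health_status event shows health_status=starting with failing_streak=1). To read the current health state directly, one can inspect the container; a sketch, hedged because the key layout under `.State` varies across podman versions:]

```python
#!/usr/bin/env python3
"""Read a container's health state via `podman inspect`.

Sketch only: the key under .State is "Health" in Docker-compatible
output but "Healthcheck" in some older podman releases, so both are tried.
"""
import json
import subprocess

def health_state(container: str) -> dict:
    out = subprocess.run(["podman", "inspect", container],
                         check=True, capture_output=True, text=True).stdout
    state = json.loads(out)[0].get("State", {})
    return state.get("Health") or state.get("Healthcheck") or {}

if __name__ == "__main__":
    h = health_state("ceilometer_agent_compute")
    print(h.get("Status"), "failing streak:", h.get("FailingStreak"))
```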
Dec 03 01:11:39 compute-0 sudo[154640]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: sudo: unable to send audit message: Operation not permitted
Dec 03 01:11:39 compute-0 sudo[154640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:11:39 compute-0 sudo[154640]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + [[ ! -n '' ]]
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + . kolla_extend_start
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + umask 0022
Dec 03 01:11:39 compute-0 ceilometer_agent_compute[154605]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
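[Editor's note: the xtrace lines above show the standard kolla entrypoint sequence: `kolla_set_configs` loads /var/lib/kolla/config_files/config.json, deletes and re-copies each listed file (COPY_ALWAYS), sets permissions, and writes the service command to /run_command; the launcher then cats /run_command and execs it. A minimal re-creation of that flow — not the real kolla scripts, which also handle globs, ownership, optional sources and merge strategies:]

```python
#!/usr/bin/env python3
"""Minimal re-creation of the kolla container start flow seen above.

Sketch only: mirrors the shape of kolla_set_configs + kolla_start, not
their full behavior (ownership, globs, optional/merge handling omitted).
"""
import json
import os
import shutil

CONFIG = "/var/lib/kolla/config_files/config.json"

def set_configs():
    with open(CONFIG) as f:
        cfg = json.load(f)
    for item in cfg.get("config_files", []):
        dest = item["dest"]
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if os.path.exists(dest):
            os.remove(dest)                              # "Deleting <dest>"
        shutil.copy(item["source"], dest)                # "Copying <src> to <dest>"
        os.chmod(dest, int(item.get("perm", "0600"), 8))  # "Setting permission"
    with open("/run_command", "w") as f:                 # "Writing out command to execute"
        f.write(cfg["command"])

def start():
    with open("/run_command") as f:
        cmd = f.read().strip().split()
    os.execvp(cmd[0], cmd)  # replaces the shell, like `exec` in the trace above

if __name__ == "__main__":
    set_configs()
    start()
```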
Dec 03 01:11:40 compute-0 sudo[154788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stjqaeypyrhmqjyspblxsopnaddfxxwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724300.1193957-578-106239911037634/AnsiballZ_stat.py'
Dec 03 01:11:40 compute-0 sudo[154788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
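[Editor's note: the block above is oslo.config's standard `log_opt_values()` dump, which cotyledon triggers at service start. Options registered with `secret=True` (coordination.backend_url, notification.messaging_urls, publisher.telemetry_secret, the rgw keys) are masked as `****`; note also how group-level options override the DEFAULT section, e.g. polling.enable_prometheus_exporter=True with TLS on ['[::]:9101'] versus the disabled DEFAULT exporter. A minimal sketch of how a service produces such a dump, using only documented oslo.config calls:]

```python
#!/usr/bin/env python3
"""Minimal oslo.config setup producing a log_opt_values() dump like the one above."""
import logging

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.IntOpt("batch_size", default=50),
    cfg.BoolOpt("debug", default=True),
])
CONF.register_opts([
    # secret=True is why values like backend_url print as **** in the dump
    cfg.StrOpt("backend_url", secret=True),
], group="coordination")

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

if __name__ == "__main__":
    CONF([], project="demo")            # parse (empty) CLI args, load config files if any
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits the banner-delimited option dump
```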
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.735 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 03 01:11:40 compute-0 python3.9[154790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.736 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 sudo[154788]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.758 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.760 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.763 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.764 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.767 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.779 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.780 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.780 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
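With polling.enable_prometheus_exporter = True and polling.prometheus_tls_enable = True above, the agent serves its metrics over HTTPS on the addresses listed in polling.prometheus_listen_addresses ([::]:9101 here). A quick way to confirm the exporter is up is to scrape it directly; a minimal sketch, assuming the agent host is reachable as compute-0 and using -k to skip CA verification since the CA bundle is not shown in this log:

    curl -sk https://compute-0:9101/metrics | head

If prometheus_tls_enable were False, the equivalent plain http://compute-0:9101/metrics URL would apply instead.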
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
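The service_credentials block above is the standard keystoneauth password plugin pointed at the internal keystone endpoint. Reassembled into the [service_credentials] section of ceilometer.conf that would produce this dump (the password stays masked, exactly as the log prints it):

    [service_credentials]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = ceilometer
    password = ****
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    interface = internalURL

cafile, region_name, and the domain IDs are all unset (None) in the dump, so keystoneauth defaults apply to everything not listed here.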
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.935 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.935 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.935 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.938 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
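The parsed dict above comes from the file named by polling.cfg_file earlier (polling.yaml). Rendered back into YAML, the file would read roughly as follows (a sketch reconstructed from the parsed values, not the file itself):

    sources:
        - name: pollsters
          interval: 120
          meters:
              - power.state
              - cpu
              - memory.usage
              - disk.*
              - network.*

So every pollster matching these meter patterns fires on a 120-second cycle within the single [pollsters] source.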
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.964 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.965 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
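The warning above about pollsters outnumbering worker threads follows directly from polling.threads_to_process_pollsters = 1 in the option dump: the two dozen or so pollsters in the [pollsters] source share one thread, so each cycle runs them serially. If cycle duration ever approaches the 120-second interval, the thread count can be raised in ceilometer.conf; a minimal sketch, with 4 as a purely illustrative value:

    [polling]
    threads_to_process_pollsters = 4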
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.965 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.967 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:11:41 compute-0 sudo[154924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbkzvyzwonyjafsdtehmwfpjuofcjazu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724300.1193957-578-106239911037634/AnsiballZ_copy.py'
Dec 03 01:11:41 compute-0 sudo[154924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:41 compute-0 python3.9[154926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724300.1193957-578-106239911037634/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:11:41 compute-0 sudo[154924]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:42 compute-0 sudo[155076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftqzpquhzqkekfvedgxvpwfxtttzwasl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724301.8978922-595-131284897461198/AnsiballZ_container_config_data.py'
Dec 03 01:11:42 compute-0 sudo[155076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:42 compute-0 python3.9[155078]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 03 01:11:42 compute-0 sudo[155076]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:43 compute-0 sudo[155228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijjnqxnmzkqljyzzbzqwgiwaftjjajcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724302.768807-604-248733447047152/AnsiballZ_container_config_hash.py'
Dec 03 01:11:43 compute-0 sudo[155228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:43 compute-0 python3.9[155230]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:11:43 compute-0 sudo[155228]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:44 compute-0 sudo[155380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcogxywyaxblfpanvlhkzqzsbqbldfai ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724303.894377-614-116450794199159/AnsiballZ_edpm_container_manage.py'
Dec 03 01:11:44 compute-0 sudo[155380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:44 compute-0 python3[155382]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:11:46 compute-0 podman[155394]: 2025-12-03 01:11:46.012316838 +0000 UTC m=+1.333773195 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 03 01:11:46 compute-0 podman[155491]: 2025-12-03 01:11:46.238837161 +0000 UTC m=+0.066195260 container create 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Dec 03 01:11:46 compute-0 podman[155491]: 2025-12-03 01:11:46.19925407 +0000 UTC m=+0.026612219 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 03 01:11:46 compute-0 python3[155382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec 03 01:11:46 compute-0 sudo[155380]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:47 compute-0 sudo[155679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhdbkgltkryeqsvleshhqgyisrihgjay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724306.7030144-622-252391062948277/AnsiballZ_stat.py'
Dec 03 01:11:47 compute-0 sudo[155679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:47 compute-0 python3.9[155681]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:11:47 compute-0 sudo[155679]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:48 compute-0 sudo[155833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qijmzxfcbtmemzrhvforcdaxfaaxiezp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724307.6969287-631-168280185670739/AnsiballZ_file.py'
Dec 03 01:11:48 compute-0 sudo[155833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:48 compute-0 python3.9[155835]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:48 compute-0 sudo[155833]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:49 compute-0 sudo[155984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdijbjiahuqqivscvfkspdipvrnmseeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724308.4020264-631-194718615764564/AnsiballZ_copy.py'
Dec 03 01:11:49 compute-0 sudo[155984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:49 compute-0 python3.9[155986]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724308.4020264-631-194718615764564/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:11:49 compute-0 sudo[155984]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:49 compute-0 sudo[156060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptrphqewiknpznxecyuyeozflbvirywo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724308.4020264-631-194718615764564/AnsiballZ_systemd.py'
Dec 03 01:11:49 compute-0 sudo[156060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:49 compute-0 python3.9[156062]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:11:49 compute-0 systemd[1]: Reloading.
Dec 03 01:11:49 compute-0 systemd-sysv-generator[156092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:11:49 compute-0 systemd-rc-local-generator[156089]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:11:50 compute-0 sudo[156060]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:50 compute-0 sudo[156170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggvzlkxffomchvhmhhkejkyolbcnyjeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724308.4020264-631-194718615764564/AnsiballZ_systemd.py'
Dec 03 01:11:50 compute-0 sudo[156170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:50 compute-0 python3.9[156172]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:11:51 compute-0 systemd[1]: Reloading.
Dec 03 01:11:51 compute-0 systemd-rc-local-generator[156201]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:11:51 compute-0 systemd-sysv-generator[156205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:11:52 compute-0 systemd[1]: Starting node_exporter container...
Dec 03 01:11:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.
Dec 03 01:11:52 compute-0 podman[156212]: 2025-12-03 01:11:52.411709726 +0000 UTC m=+0.183974423 container init 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.434Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.434Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.434Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=arp
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=bcache
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=bonding
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=cpu
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=edac
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=filefd
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=netclass
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=netdev
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=netstat
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=nfs
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=nvme
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=softnet
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=systemd
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=xfs
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=zfs
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.439Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 03 01:11:52 compute-0 node_exporter[156228]: ts=2025-12-03T01:11:52.440Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 03 01:11:52 compute-0 podman[156212]: 2025-12-03 01:11:52.453560836 +0000 UTC m=+0.225825503 container start 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:11:52 compute-0 podman[156212]: node_exporter
Dec 03 01:11:52 compute-0 systemd[1]: Started node_exporter container.
Dec 03 01:11:52 compute-0 sudo[156170]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:52 compute-0 podman[156237]: 2025-12-03 01:11:52.563421379 +0000 UTC m=+0.092677903 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:11:53 compute-0 sudo[156411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiydyjkhcqtiecwajgktmgdyoaqfkpgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724312.7238536-655-261014646131703/AnsiballZ_systemd.py'
Dec 03 01:11:53 compute-0 sudo[156411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:53 compute-0 python3.9[156413]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:11:53 compute-0 systemd[1]: Stopping node_exporter container...
Dec 03 01:11:53 compute-0 systemd[1]: libpod-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec 03 01:11:53 compute-0 podman[156417]: 2025-12-03 01:11:53.556215339 +0000 UTC m=+0.070532301 container died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:11:53 compute-0 systemd[1]: 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-2874ab3da3df488a.timer: Deactivated successfully.
Dec 03 01:11:53 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.
Dec 03 01:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-userdata-shm.mount: Deactivated successfully.
Dec 03 01:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8-merged.mount: Deactivated successfully.
Dec 03 01:11:53 compute-0 podman[156417]: 2025-12-03 01:11:53.753938347 +0000 UTC m=+0.268255299 container cleanup 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:11:53 compute-0 podman[156417]: node_exporter
Dec 03 01:11:53 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:11:53 compute-0 podman[156446]: node_exporter
Dec 03 01:11:53 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 03 01:11:53 compute-0 systemd[1]: Stopped node_exporter container.
Dec 03 01:11:53 compute-0 systemd[1]: Starting node_exporter container...
Dec 03 01:11:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:11:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.
Dec 03 01:11:54 compute-0 podman[156459]: 2025-12-03 01:11:54.057185898 +0000 UTC m=+0.150613251 container init 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.074Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.074Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.074Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.075Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=arp
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=bcache
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=bonding
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=cpu
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=edac
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=filefd
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=netclass
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=netdev
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=netstat
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=nfs
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=nvme
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=softnet
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=systemd
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=xfs
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=zfs
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.078Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 03 01:11:54 compute-0 node_exporter[156474]: ts=2025-12-03T01:11:54.079Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 03 01:11:54 compute-0 podman[156459]: 2025-12-03 01:11:54.094256182 +0000 UTC m=+0.187683515 container start 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:11:54 compute-0 podman[156459]: node_exporter
Dec 03 01:11:54 compute-0 systemd[1]: Started node_exporter container.
Dec 03 01:11:54 compute-0 sudo[156411]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:54 compute-0 podman[156483]: 2025-12-03 01:11:54.164828104 +0000 UTC m=+0.063391485 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:11:54 compute-0 sudo[156657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjgfgurnftdfrfwfsykiloahyekokdel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724314.3554714-663-128042525100474/AnsiballZ_stat.py'
Dec 03 01:11:54 compute-0 sudo[156657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:54 compute-0 python3.9[156659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:11:54 compute-0 sudo[156657]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:55 compute-0 sudo[156780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytbdydpzveugypynamimolcrrtjwoyku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724314.3554714-663-128042525100474/AnsiballZ_copy.py'
Dec 03 01:11:55 compute-0 sudo[156780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:55 compute-0 python3.9[156782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724314.3554714-663-128042525100474/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:11:55 compute-0 sudo[156780]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:56 compute-0 sudo[156932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgbqgpesolnffviygbolsspflevhhrti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724316.0348132-680-71667980407426/AnsiballZ_container_config_data.py'
Dec 03 01:11:56 compute-0 sudo[156932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:56 compute-0 python3.9[156934]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 03 01:11:56 compute-0 sudo[156932]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:57 compute-0 sudo[157084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lplbwbjwiotkieaczdrpjhixmpgcxmrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724316.943989-689-235709808526260/AnsiballZ_container_config_hash.py'
Dec 03 01:11:57 compute-0 sudo[157084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:57 compute-0 python3.9[157086]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:11:57 compute-0 sudo[157084]: pam_unix(sudo:session): session closed for user root
Dec 03 01:11:58 compute-0 sudo[157236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dglwkbksicnmnpkphfooimgnmetrufnx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724317.9102538-699-12697023798184/AnsiballZ_edpm_container_manage.py'
Dec 03 01:11:58 compute-0 sudo[157236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:11:58 compute-0 python3[157238]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:12:00 compute-0 podman[157253]: 2025-12-03 01:12:00.205293922 +0000 UTC m=+1.563265157 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 03 01:12:00 compute-0 podman[157350]: 2025-12-03 01:12:00.378029353 +0000 UTC m=+0.071734818 container create 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:12:00 compute-0 podman[157350]: 2025-12-03 01:12:00.342394801 +0000 UTC m=+0.036100336 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 03 01:12:00 compute-0 python3[157238]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec 03 01:12:00 compute-0 sudo[157236]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:01 compute-0 sudo[157537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyusnsntlmtxgapqmpjaalxiqbfmhnky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724320.8722515-707-109289884027434/AnsiballZ_stat.py'
Dec 03 01:12:01 compute-0 sudo[157537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:01 compute-0 python3.9[157539]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:12:01 compute-0 sudo[157537]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:02 compute-0 sudo[157691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfuqdbwmdwjtiptnxtwmiaylqegoxlzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724321.8132818-716-5878756533424/AnsiballZ_file.py'
Dec 03 01:12:02 compute-0 sudo[157691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:02 compute-0 python3.9[157693]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:02 compute-0 sudo[157691]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:03 compute-0 sudo[157842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrqwcqavbjipxfsszhevehvbunqmbaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724322.5425866-716-198375030057364/AnsiballZ_copy.py'
Dec 03 01:12:03 compute-0 sudo[157842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:03 compute-0 python3.9[157844]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724322.5425866-716-198375030057364/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:03 compute-0 sudo[157842]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:03 compute-0 sudo[157918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxlpzmbrlaudgkvjzesuwddozvsxrgpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724322.5425866-716-198375030057364/AnsiballZ_systemd.py'
Dec 03 01:12:03 compute-0 sudo[157918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:04 compute-0 python3.9[157920]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:12:04 compute-0 systemd[1]: Reloading.
Dec 03 01:12:04 compute-0 systemd-rc-local-generator[157948]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:12:04 compute-0 systemd-sysv-generator[157951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:12:04 compute-0 sudo[157918]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:04 compute-0 sudo[158029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnkkjypasvebafhkltfhiijjzuzjlhgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724322.5425866-716-198375030057364/AnsiballZ_systemd.py'
Dec 03 01:12:04 compute-0 sudo[158029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:05 compute-0 python3.9[158031]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:12:05 compute-0 systemd[1]: Reloading.
Dec 03 01:12:05 compute-0 systemd-rc-local-generator[158061]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:12:05 compute-0 systemd-sysv-generator[158065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:12:05 compute-0 systemd[1]: Starting podman_exporter container...
Dec 03 01:12:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.
Dec 03 01:12:05 compute-0 podman[158072]: 2025-12-03 01:12:05.717289178 +0000 UTC m=+0.204008060 container init 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:12:05 compute-0 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 03 01:12:05 compute-0 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 03 01:12:05 compute-0 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 03 01:12:05 compute-0 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=handler.go:105 level=info collector=container
Dec 03 01:12:05 compute-0 podman[158072]: 2025-12-03 01:12:05.748868716 +0000 UTC m=+0.235587588 container start 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:12:05 compute-0 podman[158072]: podman_exporter
Dec 03 01:12:05 compute-0 systemd[1]: Starting Podman API Service...
Dec 03 01:12:05 compute-0 systemd[1]: Started Podman API Service.
Dec 03 01:12:05 compute-0 systemd[1]: Started podman_exporter container.
Dec 03 01:12:05 compute-0 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 03 01:12:05 compute-0 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Setting parallel job count to 25"
Dec 03 01:12:05 compute-0 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Using sqlite as database backend"
Dec 03 01:12:05 compute-0 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 03 01:12:05 compute-0 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 03 01:12:05 compute-0 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 03 01:12:05 compute-0 sudo[158029]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:05 compute-0 podman[158098]: @ - - [03/Dec/2025:01:12:05 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 03 01:12:05 compute-0 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:12:05 compute-0 podman[158098]: @ - - [03/Dec/2025:01:12:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9686 "" "Go-http-client/1.1"
Dec 03 01:12:05 compute-0 podman_exporter[158087]: ts=2025-12-03T01:12:05.852Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 03 01:12:05 compute-0 podman_exporter[158087]: ts=2025-12-03T01:12:05.854Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 03 01:12:05 compute-0 podman_exporter[158087]: ts=2025-12-03T01:12:05.854Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 03 01:12:05 compute-0 podman[158096]: 2025-12-03 01:12:05.856893463 +0000 UTC m=+0.086398142 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:12:05 compute-0 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-47b3dd630bd3997e.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:12:05 compute-0 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-47b3dd630bd3997e.service: Failed with result 'exit-code'.
Dec 03 01:12:06 compute-0 sudo[158279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtddoosszbwxxglrslejamilxiqddsmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724326.0457342-740-232237134350499/AnsiballZ_systemd.py'
Dec 03 01:12:06 compute-0 sudo[158279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:06 compute-0 python3.9[158281]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:12:06 compute-0 systemd[1]: Stopping podman_exporter container...
Dec 03 01:12:06 compute-0 podman[158098]: @ - - [03/Dec/2025:01:12:05 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec 03 01:12:06 compute-0 systemd[1]: libpod-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec 03 01:12:06 compute-0 conmon[158087]: conmon 7fad237e83203b5eedaa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope/container/memory.events
Dec 03 01:12:06 compute-0 podman[158285]: 2025-12-03 01:12:06.9128796 +0000 UTC m=+0.068493199 container died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:12:06 compute-0 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-47b3dd630bd3997e.timer: Deactivated successfully.
Dec 03 01:12:06 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.
Dec 03 01:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-userdata-shm.mount: Deactivated successfully.
Dec 03 01:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807-merged.mount: Deactivated successfully.
Dec 03 01:12:07 compute-0 podman[158285]: 2025-12-03 01:12:07.195830194 +0000 UTC m=+0.351443803 container cleanup 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:12:07 compute-0 podman[158285]: podman_exporter
Dec 03 01:12:07 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:12:07 compute-0 podman[158311]: podman_exporter
Dec 03 01:12:07 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 03 01:12:07 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 03 01:12:07 compute-0 systemd[1]: Starting podman_exporter container...
Dec 03 01:12:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.
Dec 03 01:12:07 compute-0 podman[158324]: 2025-12-03 01:12:07.481699257 +0000 UTC m=+0.155898931 container init 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:12:07 compute-0 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 03 01:12:07 compute-0 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 03 01:12:07 compute-0 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 03 01:12:07 compute-0 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=handler.go:105 level=info collector=container
Dec 03 01:12:07 compute-0 podman[158098]: @ - - [03/Dec/2025:01:12:07 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 03 01:12:07 compute-0 podman[158098]: time="2025-12-03T01:12:07Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:12:07 compute-0 podman[158324]: 2025-12-03 01:12:07.51640779 +0000 UTC m=+0.190607474 container start 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:12:07 compute-0 podman[158324]: podman_exporter
Dec 03 01:12:07 compute-0 podman[158098]: @ - - [03/Dec/2025:01:12:07 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9688 "" "Go-http-client/1.1"
Dec 03 01:12:07 compute-0 systemd[1]: Started podman_exporter container.
Dec 03 01:12:07 compute-0 podman_exporter[158339]: ts=2025-12-03T01:12:07.528Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 03 01:12:07 compute-0 podman_exporter[158339]: ts=2025-12-03T01:12:07.529Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 03 01:12:07 compute-0 podman_exporter[158339]: ts=2025-12-03T01:12:07.530Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 03 01:12:07 compute-0 sudo[158279]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:07 compute-0 podman[158349]: 2025-12-03 01:12:07.610764333 +0000 UTC m=+0.073254264 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:12:08 compute-0 sudo[158524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkayjuyucustxmwsvfhnwigweyskvuow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724327.8103576-748-143711231949342/AnsiballZ_stat.py'
Dec 03 01:12:08 compute-0 sudo[158524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:08 compute-0 python3.9[158526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:08 compute-0 sudo[158524]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:08 compute-0 sudo[158647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nraiyadsogxxfbfbdbjpjdjkegasnjac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724327.8103576-748-143711231949342/AnsiballZ_copy.py'
Dec 03 01:12:08 compute-0 sudo[158647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:09 compute-0 python3.9[158649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724327.8103576-748-143711231949342/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:12:09 compute-0 sudo[158647]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:09 compute-0 sudo[158816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeiqixfikzkhnetaitsdxjdchjkcuwvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724329.467814-765-45580239723996/AnsiballZ_container_config_data.py'
Dec 03 01:12:09 compute-0 sudo[158816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:09 compute-0 podman[158769]: 2025-12-03 01:12:09.895446907 +0000 UTC m=+0.152778716 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:12:10 compute-0 python3.9[158822]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 03 01:12:10 compute-0 sudo[158816]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:10 compute-0 sudo[158992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyflnfsuxmuwixzbuhrmkbbkwvqcerfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724330.3615894-774-174581509180589/AnsiballZ_container_config_hash.py'
Dec 03 01:12:10 compute-0 podman[158951]: 2025-12-03 01:12:10.781684273 +0000 UTC m=+0.081530254 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 03 01:12:10 compute-0 sudo[158992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:10 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:12:10 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Failed with result 'exit-code'.
Dec 03 01:12:10 compute-0 python3.9[158998]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:12:11 compute-0 sudo[158992]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:11 compute-0 sudo[159148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfusiyrjuqziekucepwfacfohmdurfly ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724331.513834-784-131054625872796/AnsiballZ_edpm_container_manage.py'
Dec 03 01:12:11 compute-0 sudo[159148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:12 compute-0 python3[159150]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:12:14 compute-0 podman[159163]: 2025-12-03 01:12:14.792716262 +0000 UTC m=+2.511051202 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 03 01:12:15 compute-0 podman[159264]: 2025-12-03 01:12:15.008253961 +0000 UTC m=+0.072023826 container create 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., version=9.6)
Dec 03 01:12:15 compute-0 podman[159264]: 2025-12-03 01:12:14.97295753 +0000 UTC m=+0.036727435 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 03 01:12:15 compute-0 python3[159150]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 03 01:12:15 compute-0 sudo[159148]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:15 compute-0 sudo[159453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjnzmyyhzolirikmquidpsabdxxtwumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724335.4583318-792-152613001276723/AnsiballZ_stat.py'
Dec 03 01:12:15 compute-0 sudo[159453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:16 compute-0 python3.9[159455]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:12:16 compute-0 sudo[159453]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:16 compute-0 sudo[159607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlegjzcrtvcmbmyowwfdmcdbkexhqdwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724336.4450026-801-133832725506153/AnsiballZ_file.py'
Dec 03 01:12:16 compute-0 sudo[159607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:17 compute-0 python3.9[159609]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:17 compute-0 sudo[159607]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:17 compute-0 sudo[159758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smuwnyxlxwgjnrfuvwylwxebppxfcquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724337.143841-801-190947757946186/AnsiballZ_copy.py'
Dec 03 01:12:17 compute-0 sudo[159758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:18 compute-0 python3.9[159760]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724337.143841-801-190947757946186/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:18 compute-0 sudo[159758]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:18 compute-0 sudo[159834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnbkbygmuibgufwwfsukseusuiaspifd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724337.143841-801-190947757946186/AnsiballZ_systemd.py'
Dec 03 01:12:18 compute-0 sudo[159834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:18 compute-0 python3.9[159836]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:12:18 compute-0 systemd[1]: Reloading.
Dec 03 01:12:18 compute-0 systemd-rc-local-generator[159862]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:12:19 compute-0 systemd-sysv-generator[159868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:12:19 compute-0 sudo[159834]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:19 compute-0 sudo[159945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efpwrifhhdjifddfuffpmbocjblgygdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724337.143841-801-190947757946186/AnsiballZ_systemd.py'
Dec 03 01:12:19 compute-0 sudo[159945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:19 compute-0 python3.9[159947]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:12:19 compute-0 systemd[1]: Reloading.
Dec 03 01:12:20 compute-0 systemd-rc-local-generator[159978]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:12:20 compute-0 systemd-sysv-generator[159982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:12:20 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 03 01:12:20 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.
Dec 03 01:12:20 compute-0 podman[159987]: 2025-12-03 01:12:20.451896593 +0000 UTC m=+0.189400777 container init 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *bridge.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *coverage.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *datapath.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *iface.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *memory.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *ovnnorthd.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *ovn.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *ovsdbserver.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *pmd_perf.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *pmd_rxq.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *vswitch.Collector
Dec 03 01:12:20 compute-0 openstack_network_exporter[160003]: NOTICE  01:12:20 main.go:76: listening on https://:9105/metrics
Dec 03 01:12:20 compute-0 podman[159987]: 2025-12-03 01:12:20.492217507 +0000 UTC m=+0.229721701 container start 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, version=9.6, architecture=x86_64, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 01:12:20 compute-0 podman[159987]: openstack_network_exporter
Dec 03 01:12:20 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 03 01:12:20 compute-0 sudo[159945]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:20 compute-0 podman[160013]: 2025-12-03 01:12:20.614160506 +0000 UTC m=+0.106084399 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6)
Dec 03 01:12:21 compute-0 sudo[160185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpnddnvgucenseypzrkkhfplgimjunti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724340.7770345-825-214615145586022/AnsiballZ_systemd.py'
Dec 03 01:12:21 compute-0 sudo[160185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:21 compute-0 python3.9[160187]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:12:21 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Dec 03 01:12:21 compute-0 systemd[1]: libpod-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec 03 01:12:21 compute-0 podman[160191]: 2025-12-03 01:12:21.689046466 +0000 UTC m=+0.095620472 container died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 03 01:12:21 compute-0 systemd[1]: 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-398a825980b2ed60.timer: Deactivated successfully.
Dec 03 01:12:21 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.
Dec 03 01:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-userdata-shm.mount: Deactivated successfully.
Dec 03 01:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e-merged.mount: Deactivated successfully.
Dec 03 01:12:22 compute-0 podman[160191]: 2025-12-03 01:12:22.702053629 +0000 UTC m=+1.108627575 container cleanup 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:12:22 compute-0 podman[160191]: openstack_network_exporter
Dec 03 01:12:22 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:12:22 compute-0 podman[160219]: openstack_network_exporter
Dec 03 01:12:22 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 03 01:12:22 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 03 01:12:22 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 03 01:12:22 compute-0 systemd[1]: Started libcrun container.
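[note] The lines above show edpm_openstack_network_exporter.service exiting with status=2, systemd recording the failure, and the container being recreated immediately, which matches 'restart': 'always' in the container's config_data. A minimal sketch for checking the last recorded exit status from a script; the unit name is taken from the log, and ExecMainStatus/Result are standard systemd unit properties.

```python
import subprocess

# Query systemd for the last main-process exit status and overall result of
# the unit seen failing above. Values should mirror the journal lines, e.g.
# ExecMainStatus=2 and Result=exit-code.
unit = "edpm_openstack_network_exporter.service"
out = subprocess.run(
    ["systemctl", "show", unit, "-p", "ExecMainStatus", "-p", "Result"],
    capture_output=True, text=True, check=True,
).stdout
props = dict(line.split("=", 1) for line in out.strip().splitlines())
print(props)  # e.g. {'ExecMainStatus': '2', 'Result': 'exit-code'}
```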
Dec 03 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:12:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.
Dec 03 01:12:22 compute-0 podman[160233]: 2025-12-03 01:12:22.939121201 +0000 UTC m=+0.151070214 container init 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *bridge.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *coverage.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *datapath.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *iface.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *memory.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *ovnnorthd.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *ovn.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *ovsdbserver.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *pmd_perf.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *pmd_rxq.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *vswitch.Collector
Dec 03 01:12:22 compute-0 openstack_network_exporter[160250]: NOTICE  01:12:22 main.go:76: listening on https://:9105/metrics
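[note] After registering its collectors, the exporter serves TLS on port 9105 ("listening on https://:9105/metrics"). A sketch for probing that endpoint from the host, assuming the bind-mounted cert directory /var/lib/openstack/certs/telemetry/default contains a ca.crt usable for verification (the log only shows the directory mount, not its contents) and that the third-party requests package is available.

```python
import requests  # third-party: pip install requests

# Probe the exporter's TLS metrics endpoint seen in the log. The CA path is
# an assumption based on the certs bind mount; adjust to the real bundle.
CA_BUNDLE = "/var/lib/openstack/certs/telemetry/default/ca.crt"
resp = requests.get("https://localhost:9105/metrics", verify=CA_BUNDLE, timeout=5)
resp.raise_for_status()
# Print only the metric-family descriptions as a quick sanity check.
for line in resp.text.splitlines():
    if line.startswith("# HELP"):
        print(line)
```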
Dec 03 01:12:22 compute-0 podman[160233]: 2025-12-03 01:12:22.976133554 +0000 UTC m=+0.188082527 container start 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Dec 03 01:12:22 compute-0 podman[160233]: openstack_network_exporter
Dec 03 01:12:22 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 03 01:12:23 compute-0 sudo[160185]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:23 compute-0 podman[160260]: 2025-12-03 01:12:23.086836632 +0000 UTC m=+0.094590170 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
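[note] The health_status=healthy event above is produced by podman's healthcheck machinery, which the per-container transient timer units (the `<id>-<hash>.timer` entries stopped and restarted earlier) trigger on a schedule. A sketch of driving one check by hand and reading the recorded state back; container name from the log.

```python
import json
import subprocess

# Run one health check (non-zero exit means unhealthy, so don't raise) and
# then read the stored health state, mirroring what the timer units do.
name = "openstack_network_exporter"
subprocess.run(["podman", "healthcheck", "run", name], check=False)
raw = subprocess.run(
    ["podman", "inspect", name, "--format", "{{json .State.Health}}"],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(raw)
print(health["Status"], health.get("FailingStreak"))
```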
Dec 03 01:12:23 compute-0 sudo[160432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavtmgfkcgkjnptlcsrujqschgnghxqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724343.2269292-833-106862362546369/AnsiballZ_find.py'
Dec 03 01:12:23 compute-0 sudo[160432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:23 compute-0 python3.9[160434]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:12:23 compute-0 sudo[160432]: pam_unix(sudo:session): session closed for user root
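[note] The ansible.builtin.find invocation above enumerates first-level directories under the healthchecks root (file_type=directory, recurse=False, follow=False). A local equivalent, as a sketch; the path is taken from the log.

```python
import os

# List only the immediate subdirectories of the healthchecks root, without
# recursing or following symlinks, matching the find module's arguments.
ROOT = "/var/lib/openstack/healthchecks/"
dirs = [e.path for e in os.scandir(ROOT) if e.is_dir(follow_symlinks=False)]
print(dirs)
```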
Dec 03 01:12:24 compute-0 sudo[160597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgxkfbhjxelttkncxvemewumvwmfddph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724344.1731083-843-246610686502727/AnsiballZ_podman_container_info.py'
Dec 03 01:12:24 compute-0 sudo[160597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:24 compute-0 podman[160558]: 2025-12-03 01:12:24.686146883 +0000 UTC m=+0.083579197 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:12:24 compute-0 python3.9[160610]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 03 01:12:24 compute-0 sudo[160597]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:25 compute-0 sudo[160773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nicvljfryzzeqjetooehparpvvswioas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724345.211142-851-279516668794429/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:25 compute-0 sudo[160773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:25 compute-0 python3.9[160775]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:26 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:12:26 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:12:26 compute-0 podman[160776]: 2025-12-03 01:12:26.048740402 +0000 UTC m=+0.105303356 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 03 01:12:26 compute-0 podman[160776]: 2025-12-03 01:12:26.056340113 +0000 UTC m=+0.112903067 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 03 01:12:26 compute-0 sudo[160773]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:26 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:12:26 compute-0 sudo[160959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxaklkedkxzbqgodbvjdbpgyzzrhbpdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724346.304828-859-42398039504120/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:26 compute-0 sudo[160959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:26 compute-0 python3.9[160961]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:27 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:12:27 compute-0 podman[160962]: 2025-12-03 01:12:27.066895592 +0000 UTC m=+0.107967597 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:12:27 compute-0 podman[160962]: 2025-12-03 01:12:27.102355258 +0000 UTC m=+0.143427213 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:12:27 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:12:27 compute-0 sudo[160959]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:27 compute-0 sudo[161143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgvjxunukpuspkxluxpjsdxpesmzqean ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724347.4026635-867-51432450789503/AnsiballZ_file.py'
Dec 03 01:12:27 compute-0 sudo[161143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:27 compute-0 python3.9[161145]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:27 compute-0 sudo[161143]: pam_unix(sudo:session): session closed for user root
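[note] The block above is one full iteration of a pattern that repeats below for ceilometer_agent_compute, node_exporter, and podman_exporter: query the container, run `podman exec <name> id -u` and `id -g` to discover the in-container user, then set the host-side healthcheck directory's owner and mode to match (0:0 here, 42405:42405 for ceilometer). A sketch of one iteration; names and paths are from the log, the helper function is illustrative.

```python
import os
import subprocess

def container_ids(name: str) -> tuple[int, int]:
    """Discover the UID/GID a container runs as via `podman exec ... id`."""
    uid = subprocess.run(["podman", "exec", name, "id", "-u"],
                         capture_output=True, text=True, check=True).stdout
    gid = subprocess.run(["podman", "exec", name, "id", "-g"],
                         capture_output=True, text=True, check=True).stdout
    return int(uid), int(gid)

# Align the host-side healthcheck directory with the container's user,
# as the find/exec/file task sequence in the log does.
name = "ovn_controller"
path = f"/var/lib/openstack/healthchecks/{name}"
uid, gid = container_ids(name)
os.makedirs(path, mode=0o700, exist_ok=True)
os.chown(path, uid, gid)
os.chmod(path, 0o700)
```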
Dec 03 01:12:28 compute-0 sudo[161295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moxmzcfupbuzyjhlzewoggttxmbtupbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724348.2327375-876-60568691805513/AnsiballZ_podman_container_info.py'
Dec 03 01:12:28 compute-0 sudo[161295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:28 compute-0 python3.9[161297]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 03 01:12:28 compute-0 sudo[161295]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:29 compute-0 sudo[161461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqsvwfdismzvbokaiktrjjjilblkbvei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724349.2032902-884-1727095226210/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:29 compute-0 sudo[161461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:29 compute-0 python3.9[161463]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:29 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:12:29 compute-0 podman[161464]: 2025-12-03 01:12:29.96218814 +0000 UTC m=+0.118514037 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 03 01:12:29 compute-0 podman[161464]: 2025-12-03 01:12:29.997084538 +0000 UTC m=+0.153410355 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec 03 01:12:30 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:12:30 compute-0 sudo[161461]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:30 compute-0 sudo[161647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzyyscyufnmtojbwkgrvhzfavnlaiaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724350.2718337-892-275756612340594/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:30 compute-0 sudo[161647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:30 compute-0 python3.9[161649]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:31 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:12:31 compute-0 podman[161650]: 2025-12-03 01:12:31.02157346 +0000 UTC m=+0.107770600 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 01:12:31 compute-0 podman[161650]: 2025-12-03 01:12:31.052047515 +0000 UTC m=+0.138244595 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:12:31 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:12:31 compute-0 sudo[161647]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:31 compute-0 sudo[161830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdvhseuxvmgqvvahvypqhcmcesnmnwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724351.3264844-900-246573357693236/AnsiballZ_file.py'
Dec 03 01:12:31 compute-0 sudo[161830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:31 compute-0 python3.9[161832]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:31 compute-0 sudo[161830]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:32 compute-0 sudo[161982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aodnhxxvutsgwpyxmuldqpzzpueunrvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724352.2222326-909-42775761821824/AnsiballZ_podman_container_info.py'
Dec 03 01:12:32 compute-0 sudo[161982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:32 compute-0 python3.9[161984]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 03 01:12:32 compute-0 sudo[161982]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:33 compute-0 sudo[162147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfddagrrcgntplqslmahxqstyihgftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724353.2123954-917-255373980251970/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:33 compute-0 sudo[162147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:33 compute-0 python3.9[162149]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:33 compute-0 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec 03 01:12:33 compute-0 podman[162150]: 2025-12-03 01:12:33.979610092 +0000 UTC m=+0.099323164 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:12:34 compute-0 podman[162150]: 2025-12-03 01:12:34.015156531 +0000 UTC m=+0.134869563 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:12:34 compute-0 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec 03 01:12:34 compute-0 sudo[162147]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:34 compute-0 sudo[162329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgakzwftvhnrcvfdndmwfqyhxtswnsol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724354.3229282-925-117570923543922/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:34 compute-0 sudo[162329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:34 compute-0 python3.9[162331]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:35 compute-0 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec 03 01:12:35 compute-0 podman[162332]: 2025-12-03 01:12:35.069473077 +0000 UTC m=+0.107718829 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:12:35 compute-0 podman[162332]: 2025-12-03 01:12:35.105283694 +0000 UTC m=+0.143529446 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:12:35 compute-0 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec 03 01:12:35 compute-0 sudo[162329]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:35 compute-0 sudo[162513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-futqrowfzwxybjnqpmvktsabrwjdowei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724355.3847694-933-65362115460894/AnsiballZ_file.py'
Dec 03 01:12:35 compute-0 sudo[162513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:36 compute-0 python3.9[162515]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:36 compute-0 sudo[162513]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:36 compute-0 sudo[162665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlqusbtkwsafeebdmkkuqneotcbwxiqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724356.3287616-942-60251572603392/AnsiballZ_podman_container_info.py'
Dec 03 01:12:36 compute-0 sudo[162665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:36 compute-0 python3.9[162667]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 03 01:12:37 compute-0 sudo[162665]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:37 compute-0 sudo[162830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdvzetruajoooaitvxeahcaslnrgrrou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724357.2696297-950-47546579583020/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:37 compute-0 sudo[162830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:37 compute-0 podman[162832]: 2025-12-03 01:12:37.791859664 +0000 UTC m=+0.093035842 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:12:37 compute-0 python3.9[162833]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:38 compute-0 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec 03 01:12:38 compute-0 podman[162854]: 2025-12-03 01:12:38.032068516 +0000 UTC m=+0.107931517 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:12:38 compute-0 podman[162854]: 2025-12-03 01:12:38.063493814 +0000 UTC m=+0.139356815 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:12:38 compute-0 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec 03 01:12:38 compute-0 sudo[162830]: pam_unix(sudo:session): session closed for user root
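[note] The podman_exporter events above show it reaching the host's Podman through the socket bind-mounted into the container (CONTAINER_HOST=unix:///run/podman/podman.sock, with /run/podman/podman.sock mounted rw). The same API endpoint can be exercised from the host with the podman remote client, assuming the podman.socket listener is active on that path; a sketch:

```python
import json
import subprocess

# Talk to the same socket the exporter uses, via podman's remote mode.
out = subprocess.run(
    ["podman", "--remote", "--url", "unix:///run/podman/podman.sock",
     "ps", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for ctr in json.loads(out):
    print(ctr["Names"], ctr["State"])
```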
Dec 03 01:12:38 compute-0 sudo[163036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvlrxvghodaghdrlzvtbccdldcziehqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724358.3469658-958-161823459444242/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:38 compute-0 sudo[163036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:38 compute-0 python3.9[163038]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:39 compute-0 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec 03 01:12:39 compute-0 podman[163039]: 2025-12-03 01:12:39.120985746 +0000 UTC m=+0.113978086 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:12:39 compute-0 podman[163039]: 2025-12-03 01:12:39.153035222 +0000 UTC m=+0.146027492 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:12:39 compute-0 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec 03 01:12:39 compute-0 sudo[163036]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:39 compute-0 sudo[163220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obgnzkvtnjwjxdzilpyfelicjhmeuoer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724359.4263983-966-10528985524786/AnsiballZ_file.py'
Dec 03 01:12:39 compute-0 sudo[163220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:40 compute-0 python3.9[163222]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:40 compute-0 sudo[163220]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:40 compute-0 sudo[163383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okferxviapapqitmmyhlqbtnnoparpxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724360.3216858-975-73416309860784/AnsiballZ_podman_container_info.py'
Dec 03 01:12:40 compute-0 sudo[163383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:40 compute-0 podman[163346]: 2025-12-03 01:12:40.825706876 +0000 UTC m=+0.160123696 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:12:40 compute-0 python3.9[163391]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 03 01:12:40 compute-0 podman[163399]: 2025-12-03 01:12:40.953002443 +0000 UTC m=+0.122210086 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 03 01:12:41 compute-0 sudo[163383]: pam_unix(sudo:session): session closed for user root
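The two podman entries above are health_status events: podman ran each container's configured healthcheck ('/openstack/healthcheck ...') and recorded the result, health_failing_streak=0 meaning no consecutive failures. To reproduce a probe and tail the same event stream by hand (container names taken from the log; flags are standard podman, but verify against the installed version):

    # run the configured healthcheck once; exit status 0 means healthy
    podman healthcheck run ovn_controller && echo healthy
    # follow the health_status events podman is emitting to the journal
    podman events --filter event=health_status --since 5m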
Dec 03 01:12:41 compute-0 sudo[163581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqybuurvkewckaucblagzvepkvvbzgys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724361.2600672-983-231928438112731/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:41 compute-0 sudo[163581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:41 compute-0 python3.9[163583]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:42 compute-0 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec 03 01:12:42 compute-0 podman[163584]: 2025-12-03 01:12:42.036684568 +0000 UTC m=+0.126852016 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Dec 03 01:12:42 compute-0 podman[163584]: 2025-12-03 01:12:42.048174949 +0000 UTC m=+0.138342357 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 03 01:12:42 compute-0 sudo[163581]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:42 compute-0 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
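The podman_container_exec task above is how the play discovers which uid the openstack_network_exporter container runs as, apparently so the follow-up file task can set matching ownership on its healthcheck directory. The module wraps a plain exec; a shell sketch:

    # capture the uid of the container's effective user
    uid=$(podman exec openstack_network_exporter id -u)
    echo "openstack_network_exporter runs as uid ${uid}"

The matching `id -g` call a few lines below collects the gid the same way.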
Dec 03 01:12:42 compute-0 sudo[163765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xntkyxnnvcerokcpizfsmpjxjpyeolwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724362.3427284-991-33520827701338/AnsiballZ_podman_container_exec.py'
Dec 03 01:12:42 compute-0 sudo[163765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:43 compute-0 python3.9[163767]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:12:43 compute-0 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec 03 01:12:43 compute-0 podman[163768]: 2025-12-03 01:12:43.135059222 +0000 UTC m=+0.098490054 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:12:43 compute-0 podman[163768]: 2025-12-03 01:12:43.167435536 +0000 UTC m=+0.130866358 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, vendor=Red Hat, Inc.)
Dec 03 01:12:43 compute-0 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec 03 01:12:43 compute-0 sudo[163765]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:43 compute-0 sudo[163946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygxmrbfqrxxeaqtxhzcivsuytwinqxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724363.457068-999-76868512796422/AnsiballZ_file.py'
Dec 03 01:12:43 compute-0 sudo[163946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:44 compute-0 python3.9[163948]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:44 compute-0 sudo[163946]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:44 compute-0 sudo[164098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhauncjrfnzmohibbihmhukkscphhunq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724364.3763509-1008-275755841291990/AnsiballZ_file.py'
Dec 03 01:12:44 compute-0 sudo[164098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:45 compute-0 python3.9[164100]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:45 compute-0 sudo[164098]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:45 compute-0 sudo[164250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkxvdgxgvdvajzsuzzkpuiyjugzmheel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724365.2886798-1016-103933134709506/AnsiballZ_stat.py'
Dec 03 01:12:45 compute-0 sudo[164250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:45 compute-0 python3.9[164252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:45 compute-0 sudo[164250]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:46 compute-0 sudo[164373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlrljrabaiccgrrnxeatbwwbntsewiwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724365.2886798-1016-103933134709506/AnsiballZ_copy.py'
Dec 03 01:12:46 compute-0 sudo[164373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:46 compute-0 python3.9[164375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724365.2886798-1016-103933134709506/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:46 compute-0 sudo[164373]: pam_unix(sudo:session): session closed for user root
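The stat/copy pair above is Ansible's standard idempotent deploy: the remote file is sha1-checksummed first, and the copy only rewrites it when the checksum differs from the rendered template (checksum=d942d9... in the log). The same check in shell; the rendered-source path is a hypothetical placeholder, not something this log shows:

    expected=d942d984493b214bda2913f753ff68cdcedff00e
    current=$(sha1sum /var/lib/edpm-config/firewall/telemetry.yaml 2>/dev/null | awk '{print $1}')
    if [ "$current" != "$expected" ]; then
        # content drifted (or file missing): deploy the rendered template
        install -m 0640 /path/to/rendered-telemetry.yaml /var/lib/edpm-config/firewall/telemetry.yaml
    fi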
Dec 03 01:12:47 compute-0 sudo[164525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaunnisedmgzajfccgsfgwfffmtwbtmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724366.9276745-1032-136184932313506/AnsiballZ_file.py'
Dec 03 01:12:47 compute-0 sudo[164525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:47 compute-0 python3.9[164527]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:47 compute-0 sudo[164525]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:48 compute-0 sudo[164677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqsifyhvfeclrwcmiisrlinoywkxszy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724367.8622074-1040-248806293640083/AnsiballZ_stat.py'
Dec 03 01:12:48 compute-0 sudo[164677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:48 compute-0 python3.9[164679]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:48 compute-0 sudo[164677]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:48 compute-0 sudo[164755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efsffsyxoqlwhkrkttmjqzxkfkgvpfhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724367.8622074-1040-248806293640083/AnsiballZ_file.py'
Dec 03 01:12:48 compute-0 sudo[164755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:49 compute-0 python3.9[164757]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:49 compute-0 sudo[164755]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:49 compute-0 sudo[164907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptgplbpdhkktgfpkocudmrrvcvttuyad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724369.3165998-1052-205508688435104/AnsiballZ_stat.py'
Dec 03 01:12:49 compute-0 sudo[164907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:49 compute-0 python3.9[164909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:49 compute-0 sudo[164907]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:50 compute-0 sudo[164985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mewblpqznidbcoczidfzzvwaoxnbgqbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724369.3165998-1052-205508688435104/AnsiballZ_file.py'
Dec 03 01:12:50 compute-0 sudo[164985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:50 compute-0 python3.9[164987]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.b8uiwgyx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:50 compute-0 sudo[164985]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:51 compute-0 sudo[165137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelltulxsibtalhpvvidwcjlkcmsxytp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724370.7264748-1064-103491885152516/AnsiballZ_stat.py'
Dec 03 01:12:51 compute-0 sudo[165137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:51 compute-0 python3.9[165139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:51 compute-0 sudo[165137]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:51 compute-0 sudo[165215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avptqzvzxmcdltsswmieuiywtrswspfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724370.7264748-1064-103491885152516/AnsiballZ_file.py'
Dec 03 01:12:51 compute-0 sudo[165215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:51 compute-0 python3.9[165217]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:51 compute-0 sudo[165215]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:52 compute-0 sudo[165367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwnfogxprvpdqdbjepilcrxbutkhgtzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724372.1450915-1077-200095159169892/AnsiballZ_command.py'
Dec 03 01:12:52 compute-0 sudo[165367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:52 compute-0 python3.9[165369]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:12:52 compute-0 sudo[165367]: pam_unix(sudo:session): session closed for user root
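`nft -j list ruleset` dumps the live ruleset as JSON, presumably so the edpm nftables role can compare it against its desired state. For inspecting that output by hand, jq works well (jq is an assumption here; nothing in this log shows it installed):

    # list every table/chain pair currently loaded in the kernel
    nft -j list ruleset | jq -r '.nftables[] | select(.chain) | "\(.chain.table)/\(.chain.name)"'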
Dec 03 01:12:53 compute-0 sudo[165530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrjlwzgjylyqlkeycpbkqmldwraemdj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724372.9513736-1085-75990329686187/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:12:53 compute-0 sudo[165530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:53 compute-0 podman[165494]: 2025-12-03 01:12:53.591588425 +0000 UTC m=+0.105947582 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Dec 03 01:12:53 compute-0 python3[165540]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:12:53 compute-0 sudo[165530]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:54 compute-0 sudo[165694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrogaxjzqbegcaftudmaxlglleihmddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724374.0186172-1093-151833495654259/AnsiballZ_stat.py'
Dec 03 01:12:54 compute-0 sudo[165694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:54 compute-0 python3.9[165696]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:54 compute-0 sudo[165694]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:54 compute-0 podman[165699]: 2025-12-03 01:12:54.817761232 +0000 UTC m=+0.077413715 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:12:55 compute-0 sudo[165796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yysebdgkexpauqrhgnudkbkrjavcqkfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724374.0186172-1093-151833495654259/AnsiballZ_file.py'
Dec 03 01:12:55 compute-0 sudo[165796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:55 compute-0 python3.9[165798]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:55 compute-0 sudo[165796]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:55 compute-0 sudo[165948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjtenjurbotbcspknvhraupmwwpxwnzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724375.4888241-1105-112286642783259/AnsiballZ_stat.py'
Dec 03 01:12:55 compute-0 sudo[165948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:56 compute-0 python3.9[165950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:56 compute-0 sudo[165948]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:56 compute-0 sudo[166026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkzssbjppozyuwvxjkluqdhdmjiweesg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724375.4888241-1105-112286642783259/AnsiballZ_file.py'
Dec 03 01:12:56 compute-0 sudo[166026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:56 compute-0 python3.9[166028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:56 compute-0 sudo[166026]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:57 compute-0 sudo[166178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsuxuzxahxuirrggfylwkfirshggxehp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724376.916502-1117-16211236756583/AnsiballZ_stat.py'
Dec 03 01:12:57 compute-0 sudo[166178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:57 compute-0 python3.9[166180]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:57 compute-0 sudo[166178]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:58 compute-0 sudo[166256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjohigxxsohbwcmqcduwzefkoeoenzxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724376.916502-1117-16211236756583/AnsiballZ_file.py'
Dec 03 01:12:58 compute-0 sudo[166256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:58 compute-0 python3.9[166258]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:58 compute-0 sudo[166256]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:58 compute-0 sudo[166408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utzcrazmyvnknukmaxfygmbkmkbrvnen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724378.419688-1129-99781659518742/AnsiballZ_stat.py'
Dec 03 01:12:58 compute-0 sudo[166408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:59 compute-0 python3.9[166410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:12:59 compute-0 sudo[166408]: pam_unix(sudo:session): session closed for user root
Dec 03 01:12:59 compute-0 sudo[166486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcmqsdsmgwsfmjjefmyixbjdajiamcfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724378.419688-1129-99781659518742/AnsiballZ_file.py'
Dec 03 01:12:59 compute-0 sudo[166486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:12:59 compute-0 python3.9[166488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:12:59 compute-0 sudo[166486]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:00 compute-0 sudo[166638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agjehyfigxtttpdnhisxmgsgrojguyyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724380.0770705-1141-81232477247573/AnsiballZ_stat.py'
Dec 03 01:13:00 compute-0 sudo[166638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:00 compute-0 python3.9[166640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:00 compute-0 sudo[166638]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:01 compute-0 sudo[166763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrjemdzstxswivgyykiwkilmrgqmwsew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724380.0770705-1141-81232477247573/AnsiballZ_copy.py'
Dec 03 01:13:01 compute-0 sudo[166763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:01 compute-0 python3.9[166765]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724380.0770705-1141-81232477247573/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:01 compute-0 sudo[166763]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:02 compute-0 sudo[166915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgmridkldfeqpsskujuibyfvcmnryoog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724381.7449036-1156-140911356011663/AnsiballZ_file.py'
Dec 03 01:13:02 compute-0 sudo[166915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:02 compute-0 python3.9[166917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:02 compute-0 sudo[166915]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:03 compute-0 sudo[167067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyuefdnvzuyaepfokhdnhhxpjdperdqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724382.6245596-1164-75515272507564/AnsiballZ_command.py'
Dec 03 01:13:03 compute-0 sudo[167067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:03 compute-0 python3.9[167069]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:13:03 compute-0 sudo[167067]: pam_unix(sudo:session): session closed for user root
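This command is the dry-run gate for the whole EDPM firewall update: the five fragment files are concatenated in load order and fed to `nft -c -f -`, which parses and validates the ruleset without touching the kernel. Reproduced standalone:

    # validate chains + flushes + rules + jump updates + jumps as one ruleset;
    # -c (check) guarantees nothing is applied, on success or failure
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -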
Dec 03 01:13:04 compute-0 sudo[167222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zklpwjomtyudhgimflaqoexzixmaaekw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724383.488169-1172-273787239209418/AnsiballZ_blockinfile.py'
Dec 03 01:13:04 compute-0 sudo[167222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:04 compute-0 python3.9[167224]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:04 compute-0 sudo[167222]: pam_unix(sudo:session): session closed for user root
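The blockinfile task above wires the EDPM fragments into the system-wide nftables config, validating the candidate file with `nft -c -f %s` before swapping it in. Reconstructed from the module arguments, the managed block in /etc/sysconfig/nftables.conf ends up as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note the boot-time config includes edpm-jumps.nft but not edpm-flushes.nft or edpm-update-jumps.nft; those two serve live reloads only, as the next commands show.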
Dec 03 01:13:05 compute-0 sudo[167374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-towrmwzugpfrnxzzascfodupgyrfbxlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724384.7551503-1181-37564218080922/AnsiballZ_command.py'
Dec 03 01:13:05 compute-0 sudo[167374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:05 compute-0 python3.9[167376]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:13:05 compute-0 sudo[167374]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:06 compute-0 sudo[167527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnctfdvsxapxuobbmzlkxecjguuimhgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724385.614536-1189-227347280357198/AnsiballZ_stat.py'
Dec 03 01:13:06 compute-0 sudo[167527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:06 compute-0 python3.9[167529]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:13:06 compute-0 sudo[167527]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:06 compute-0 sudo[167681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdcyhvrshzcelrpbcezhysukmyojpwkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724386.453088-1197-192565991945546/AnsiballZ_command.py'
Dec 03 01:13:06 compute-0 sudo[167681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:07 compute-0 python3.9[167683]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:13:07 compute-0 sudo[167681]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:07 compute-0 sudo[167836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtbbicybasehxdwuohusandwggsuukbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724387.3449214-1205-71194033030227/AnsiballZ_file.py'
Dec 03 01:13:07 compute-0 sudo[167836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:07 compute-0 python3.9[167838]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:07 compute-0 sudo[167836]: pam_unix(sudo:session): session closed for user root
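The lines around here show the role's change-flag handshake: edpm-rules.nft.changed was touched when the rules file was rewritten, the chains are (re)loaded unconditionally, and only if the flag exists are the EDPM chains flushed and repopulated before the flag is removed. Condensed into shell:

    nft -f /etc/nftables/edpm-chains.nft              # idempotent: (re)create chains
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        # rules changed since the last run: flush and reload them as one transaction
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed    # acknowledge the change
    fi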
Dec 03 01:13:08 compute-0 sshd-session[144110]: Connection closed by 192.168.122.30 port 44202
Dec 03 01:13:08 compute-0 sshd-session[144107]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:13:08 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Dec 03 01:13:08 compute-0 systemd[1]: session-21.scope: Consumed 2min 31.753s CPU time.
Dec 03 01:13:08 compute-0 systemd-logind[800]: Session 21 logged out. Waiting for processes to exit.
Dec 03 01:13:08 compute-0 systemd-logind[800]: Removed session 21.
Dec 03 01:13:08 compute-0 podman[167863]: 2025-12-03 01:13:08.513677833 +0000 UTC m=+0.077014093 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:13:08 compute-0 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:13:08 compute-0 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:13:08 compute-0 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:13:08 compute-0 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:13:08 compute-0 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
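The openstack_network_exporter errors above mean the exporter could not find the Unix control sockets it uses to query ovs-vswitchd and ovn-northd. On a compute node the ovn-northd socket is likely expected to be absent (northd runs on the control plane), and the datapath errors suggest no userspace (netdev/DPDK) datapath is configured rather than a broken daemon. A few checks, with socket paths following the container's volume mounts and ovs-appctl assumed available on the host:

    # which control sockets actually exist?
    ls -l /var/run/openvswitch/*.ctl 2>/dev/null
    ls -l /var/lib/openvswitch/ovn/*.ctl 2>/dev/null
    # confirm ovs-vswitchd itself is responsive
    ovs-appctl -t ovs-vswitchd version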
Dec 03 01:13:11 compute-0 podman[167894]: 2025-12-03 01:13:11.80031836 +0000 UTC m=+0.061248262 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 01:13:11 compute-0 podman[167895]: 2025-12-03 01:13:11.869100733 +0000 UTC m=+0.117434323 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 03 01:13:14 compute-0 sshd-session[167939]: Accepted publickey for zuul from 192.168.122.30 port 42212 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:13:14 compute-0 systemd-logind[800]: New session 22 of user zuul.
Dec 03 01:13:14 compute-0 systemd[1]: Started Session 22 of User zuul.
Dec 03 01:13:14 compute-0 sshd-session[167939]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:13:15 compute-0 sudo[168092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qifupenqfnhkegxotkrtvzrmbmeclqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724394.465097-24-186325435750974/AnsiballZ_systemd_service.py'
Dec 03 01:13:15 compute-0 sudo[168092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:15 compute-0 python3.9[168094]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:13:15 compute-0 systemd[1]: Reloading.
Dec 03 01:13:15 compute-0 systemd-rc-local-generator[168117]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:13:15 compute-0 systemd-sysv-generator[168120]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:13:15 compute-0 sudo[168092]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:16 compute-0 python3.9[168279]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:13:16 compute-0 network[168296]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:13:16 compute-0 network[168297]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:13:16 compute-0 network[168298]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:13:21 compute-0 sudo[168568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwimjxkyzpsruxssuqfzqsqopiadlbth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724401.3524616-47-110793403201281/AnsiballZ_systemd_service.py'
Dec 03 01:13:21 compute-0 sudo[168568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:22 compute-0 python3.9[168570]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:13:22 compute-0 sudo[168568]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:23 compute-0 sudo[168721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emfedkzuisykmjfdirzgjdrgnvukwspg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724402.5194511-57-195193277097983/AnsiballZ_file.py'
Dec 03 01:13:23 compute-0 sudo[168721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:23 compute-0 python3.9[168723]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:23 compute-0 sudo[168721]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:23 compute-0 podman[168814]: 2025-12-03 01:13:23.853898045 +0000 UTC m=+0.101094226 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public)
Dec 03 01:13:23 compute-0 sudo[168896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevaavofqrkcurgphdxasnbitwyyurdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724403.5550106-65-63654737976063/AnsiballZ_file.py'
Dec 03 01:13:23 compute-0 sudo[168896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:24 compute-0 python3.9[168898]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:24 compute-0 sudo[168896]: pam_unix(sudo:session): session closed for user root
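Both file tasks above run ansible.builtin.file with state=absent against the legacy tripleo_ceilometer_agent_ipmi.service unit, first under /usr/lib/systemd/system and then under /etc/systemd/system. The module is idempotent: it reports a change only when the path actually existed. A rough Python equivalent of that semantic (a sketch, not the module's code):

    import os, shutil

    def file_absent(path):
        """Remove path if it exists; True means 'changed', like the module."""
        if not os.path.lexists(path):      # also catches dangling symlinks
            return False
        if os.path.isdir(path) and not os.path.islink(path):
            shutil.rmtree(path)            # state=absent recurses into dirs
        else:
            os.remove(path)
        return True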
Dec 03 01:13:25 compute-0 sudo[169058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isagchruxmhopdjbzxkgqhxlipvlqpyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724404.5103002-74-140106333593146/AnsiballZ_command.py'
Dec 03 01:13:25 compute-0 sudo[169058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:25 compute-0 podman[169022]: 2025-12-03 01:13:25.192971516 +0000 UTC m=+0.079028389 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
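The node_exporter container above restricts the systemd collector with --collector.systemd.unit-include, an anchored regular expression over unit names. The quick check below shows which units pass that filter (Python's re stands in for Go's regexp here; the sample unit names are illustrative):

    import re

    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"):
        print(unit, bool(include.fullmatch(unit)))   # fullmatch = anchored

Only the first three match; sshd.service is filtered out, which keeps the exporter's systemd metrics limited to the EDPM, Open vSwitch, virt, and rsyslog units.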
Dec 03 01:13:25 compute-0 python3.9[169064]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:13:25 compute-0 sudo[169058]: pam_unix(sudo:session): session closed for user root
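The shell fragment logged at 01:13:25 only acts when certmonger.service is active, and it masks the unit only when /etc/systemd/system holds no real unit file, because masking plants a symlink to /dev/null at exactly that path and the guard avoids clobbering a genuine override. The same control flow in Python (a sketch; error handling kept minimal):

    import os, subprocess

    UNIT = "certmonger.service"

    def retire_certmonger():
        # 'is-active' exits 0 only while the unit is running
        if subprocess.run(["systemctl", "is-active", UNIT]).returncode != 0:
            return
        subprocess.run(["systemctl", "disable", "--now", UNIT], check=True)
        if not os.path.isfile(f"/etc/systemd/system/{UNIT}"):
            # mask = symlink /etc/systemd/system/certmonger.service -> /dev/null
            subprocess.run(["systemctl", "mask", UNIT], check=True)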
Dec 03 01:13:26 compute-0 python3.9[169223]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
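This find task inventories leftover certmonger tracking entries: file_type=any plus hidden=True with recurse=False means "every entry directly under /var/lib/certmonger/requests, dotfiles included". The Python analogue of those parameters (sketch):

    import os

    def list_requests(path="/var/lib/certmonger/requests"):
        try:
            return [entry.path for entry in os.scandir(path)]
        except FileNotFoundError:
            return []    # assume a missing directory counts as no matches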
Dec 03 01:13:27 compute-0 sudo[169373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljuoabggivkynnwztplhxkycszpicinp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724406.7866302-92-158885275447779/AnsiballZ_systemd_service.py'
Dec 03 01:13:27 compute-0 sudo[169373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:27 compute-0 python3.9[169375]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:13:27 compute-0 systemd[1]: Reloading.
Dec 03 01:13:27 compute-0 systemd-rc-local-generator[169404]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:13:27 compute-0 systemd-sysv-generator[169407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:13:27 compute-0 sudo[169373]: pam_unix(sudo:session): session closed for user root
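"Reloading." is the daemon_reload=True from the task above: systemd re-runs its generators, which is where the two generator messages come from. systemd-rc-local-generator pulls rc.local in only when the file is executable, so its check amounts to the following (illustrative Python; the real generator is C):

    import os

    def rc_local_wanted(path="/etc/rc.d/rc.local"):
        # the generator enables rc-local.service only for an executable file
        return os.path.isfile(path) and os.access(path, os.X_OK)

systemd-sysv-generator, meanwhile, synthesizes a compatibility unit for the legacy /etc/rc.d/init.d/network script, hence its request for a native unit file.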
Dec 03 01:13:28 compute-0 sudo[169561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljlrassgzvgiecdechpptupzilmkilrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724408.0488749-100-179652244816010/AnsiballZ_command.py'
Dec 03 01:13:28 compute-0 sudo[169561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:28 compute-0 python3.9[169563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:13:28 compute-0 sudo[169561]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:29 compute-0 sudo[169714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqdxigqrufisqlcglvzlfmfilrjfltjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724409.0933409-109-39343133629762/AnsiballZ_file.py'
Dec 03 01:13:29 compute-0 sudo[169714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:29 compute-0 python3.9[169716]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:13:29 compute-0 sudo[169714]: pam_unix(sudo:session): session closed for user root
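The task above creates the telemetry-power-monitoring config directory owned by zuul, mode 0750, labeled container_file_t so the telemetry containers can read it. Outside Ansible the same result could be approximated as below (a sketch; chcon stands in for the libselinux bindings the module prefers, and -R mirrors the task's recurse=True):

    import os, shutil, subprocess

    path = "/var/lib/openstack/config/telemetry-power-monitoring"
    os.makedirs(path, mode=0o750, exist_ok=True)
    os.chmod(path, 0o750)               # makedirs' mode is filtered by umask
    shutil.chown(path, user="zuul", group="zuul")
    subprocess.run(["chcon", "-R", "-t", "container_file_t", path], check=True)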
Dec 03 01:13:29 compute-0 podman[158098]: time="2025-12-03T01:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:13:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec 03 01:13:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2143 "" "Go-http-client/1.1"
Dec 03 01:13:30 compute-0 python3.9[169871]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:13:31 compute-0 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:13:31 compute-0 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:13:31 compute-0 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:13:31 compute-0 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:13:31 compute-0 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
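The appctl.go errors above mean the exporter cannot find the <daemon>.<pid>.ctl control sockets it would dial for ovsdb-server and ovn-northd; ovn-northd normally runs on controller nodes, so its absence on compute-0 is expected rather than a fault. A probe for the sockets the exporter looks for might be (directories taken from the container's volume list earlier in this log):

    import glob

    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        ctl = glob.glob(f"{rundir}/*.ctl")
        print(rundir, "->", ctl or "no control sockets")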
Dec 03 01:13:31 compute-0 python3.9[170023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:32 compute-0 python3.9[170144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724410.918309-125-116023157648922/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:13:33 compute-0 sudo[170294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-garbkbjuukusfijhqdxbimdqebfnaqlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724412.9387398-143-34517600781459/AnsiballZ_getent.py'
Dec 03 01:13:33 compute-0 sudo[170294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:33 compute-0 python3.9[170296]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 03 01:13:33 compute-0 sudo[170294]: pam_unix(sudo:session): session closed for user root
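The getent task resolves the ceilometer account so later tasks can set ownership on its config files; fail_key=True turns a missing key into a task failure. The direct Python analogue against the passwd database:

    import pwd

    try:
        entry = pwd.getpwnam("ceilometer")
        print(entry.pw_uid, entry.pw_gid, entry.pw_dir)
    except KeyError:
        raise SystemExit("ceilometer user not found")   # fail_key=True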
Dec 03 01:13:35 compute-0 python3.9[170447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:35 compute-0 python3.9[170568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724414.644533-171-178298927822883/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:36 compute-0 python3.9[170718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:37 compute-0 python3.9[170839]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724416.0936341-171-274543445144827/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:38 compute-0 python3.9[170989]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:38 compute-0 podman[171084]: 2025-12-03 01:13:38.778303356 +0000 UTC m=+0.090148130 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:13:38 compute-0 python3.9[171122]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724417.691299-171-280403020519388/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
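Each copy above is paired with a preceding legacy.stat carrying get_checksum=True: the controller compares SHA-1 digests and pushes the file only on mismatch, and with unsafe_writes=False the module stages the content in a temporary file and renames it into place so readers never observe a half-written config. The core of that pattern (sketch):

    import hashlib, os, tempfile

    def sha1(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(content: bytes, dest: str):
        if os.path.exists(dest) and \
           sha1(dest) == hashlib.sha1(content).hexdigest():
            return False                  # checksums match: no change
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        with os.fdopen(fd, "wb") as f:
            f.write(content)
        os.replace(tmp, dest)             # atomic rename on POSIX
        return True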
Dec 03 01:13:39 compute-0 python3.9[171283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:13:40 compute-0 python3.9[171435]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.964 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.966 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.971 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:13:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
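[annotation] The DEBUG burst above is one complete polling cycle: ceilometer.polling.manager emits one "Finished processing pollster [...]" line per meter from execute_polling_task_processing. To sanity-check a cycle from an exported journal, a parser along these lines tallies completions per meter (the file name and regex are illustrative assumptions, not part of this deployment):

    # count_pollsters.py -- illustrative: tally "Finished processing pollster [...]" lines
    import re
    from collections import Counter

    PATTERN = re.compile(r"Finished processing pollster \[([\w.]+)\]")

    counts = Counter()
    with open("journal.txt") as f:   # assumed: journalctl output saved to a file
        for line in f:
            m = PATTERN.search(line)
            if m:
                counts[m.group(1)] += 1

    for meter, n in sorted(counts.items()):
        print(f"{meter}: {n}")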
Dec 03 01:13:41 compute-0 python3.9[171588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:42 compute-0 podman[171683]: 2025-12-03 01:13:42.024459579 +0000 UTC m=+0.092556897 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:13:42 compute-0 podman[171684]: 2025-12-03 01:13:42.062079003 +0000 UTC m=+0.122744333 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:13:42 compute-0 python3.9[171732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724420.9401445-230-224011119583391/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:42 compute-0 python3.9[171902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:43 compute-0 python3.9[171978]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:44 compute-0 python3.9[172128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:45 compute-0 python3.9[172249]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724423.7351851-230-166665735895968/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:45 compute-0 python3.9[172399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:46 compute-0 python3.9[172520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724425.3299277-230-205940406386415/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:47 compute-0 python3.9[172670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:48 compute-0 python3.9[172791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724426.7745686-230-6273744919252/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:48 compute-0 python3.9[172941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:49 compute-0 python3.9[173062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724428.2021344-230-208592151178601/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:50 compute-0 python3.9[173212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:51 compute-0 python3.9[173288]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
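[annotation] Every rendered file above lands through the same idempotent two-step: ansible.legacy.stat takes a sha1 checksum of the destination, then ansible.legacy.copy rewrites it only when the checksum differs (ansible.legacy.file runs instead when nothing changed); mode=420 is the decimal spelling of octal 0644. A rough Python equivalent of that pattern, not the Ansible source, using one destination path from the log and a hypothetical source path:

    # idempotent_copy.py -- sketch of the stat-then-copy pattern seen above
    import hashlib, os, shutil

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def idempotent_copy(src, dest, mode=0o644):        # 0o644 == 420 decimal
        if not os.path.exists(dest) or sha1_of(src) != sha1_of(dest):
            shutil.copy2(src, dest)                    # rewrite only on checksum mismatch
        os.chmod(dest, mode)

    idempotent_copy("rendered/kepler.json",            # hypothetical rendered template
                    "/var/lib/openstack/config/telemetry-power-monitoring/kepler.json")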
Dec 03 01:13:51 compute-0 sudo[173438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toveyslhtupeetfghtquznldfeprhvpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724431.476601-325-197419347537786/AnsiballZ_file.py'
Dec 03 01:13:51 compute-0 sudo[173438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:52 compute-0 python3.9[173440]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:52 compute-0 sudo[173438]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:52 compute-0 sudo[173590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykgmwvatbfiazuzvlvtpvxhajwmupvel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724432.2980692-333-271577706026705/AnsiballZ_file.py'
Dec 03 01:13:52 compute-0 sudo[173590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:52 compute-0 python3.9[173592]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:13:52 compute-0 sudo[173590]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:53 compute-0 sudo[173742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gixmfxhhkrgswkfcrtwwkzxanmimnhrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724433.139811-341-12415371006096/AnsiballZ_file.py'
Dec 03 01:13:53 compute-0 sudo[173742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:53 compute-0 python3.9[173744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:13:53 compute-0 sudo[173742]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:54 compute-0 sudo[173904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaoattqoxwdprlqulhzgydglrpiszimn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/AnsiballZ_stat.py'
Dec 03 01:13:54 compute-0 sudo[173904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:54 compute-0 podman[173868]: 2025-12-03 01:13:54.52905398 +0000 UTC m=+0.128965058 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, managed_by=edpm_ansible)
Dec 03 01:13:54 compute-0 python3.9[173911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:54 compute-0 sudo[173904]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:55 compute-0 sudo[174035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aolqayrosznmcooohvvcxouevlzunsab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/AnsiballZ_copy.py'
Dec 03 01:13:55 compute-0 sudo[174035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:55 compute-0 python3.9[174037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:13:55 compute-0 sudo[174035]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:55 compute-0 podman[174038]: 2025-12-03 01:13:55.452318201 +0000 UTC m=+0.063208104 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:13:55 compute-0 sudo[174135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vicqbmhadqvvrjhqxthebsfecqoblijh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/AnsiballZ_stat.py'
Dec 03 01:13:55 compute-0 sudo[174135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:55 compute-0 python3.9[174137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:55 compute-0 sudo[174135]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:56 compute-0 sudo[174258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foeuiznzhbwoyumolxshlfrtqkwuxhhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/AnsiballZ_copy.py'
Dec 03 01:13:56 compute-0 sudo[174258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:56 compute-0 python3.9[174260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:13:56 compute-0 sudo[174258]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:57 compute-0 sudo[174410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctfkdknabpsgwmzdgbandvrqqbomizlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724436.8167927-349-237136512574990/AnsiballZ_stat.py'
Dec 03 01:13:57 compute-0 sudo[174410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:57 compute-0 python3.9[174412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:13:57 compute-0 sudo[174410]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:57 compute-0 sudo[174533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whdqcdwqhfdvpumvnkmlrbebivkftikt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724436.8167927-349-237136512574990/AnsiballZ_copy.py'
Dec 03 01:13:57 compute-0 sudo[174533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:58 compute-0 python3.9[174535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724436.8167927-349-237136512574990/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:13:58 compute-0 sudo[174533]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:59 compute-0 sudo[174685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uesrblwsmdyrnbgfnglpgorajsjklfki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724438.600641-391-271687049610685/AnsiballZ_container_config_data.py'
Dec 03 01:13:59 compute-0 sudo[174685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:13:59 compute-0 python3.9[174687]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec 03 01:13:59 compute-0 sudo[174685]: pam_unix(sudo:session): session closed for user root
Dec 03 01:13:59 compute-0 podman[158098]: time="2025-12-03T01:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:13:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec 03 01:13:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2153 "" "Go-http-client/1.1"
Dec 03 01:14:00 compute-0 sudo[174837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsxucqmhqcxpysyhpwtopqgzwaqvpsaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724439.6829739-400-54634747939555/AnsiballZ_container_config_hash.py'
Dec 03 01:14:00 compute-0 sudo[174837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:00 compute-0 python3.9[174839]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:14:00 compute-0 sudo[174837]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:01 compute-0 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:14:01 compute-0 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:14:01 compute-0 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:14:01 compute-0 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:14:01 compute-0 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:14:01 compute-0 sudo[174989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdyafaanyyfgxfcfqvxgxqfmldjmjdaw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724440.7851238-410-51258032515126/AnsiballZ_edpm_container_manage.py'
Dec 03 01:14:01 compute-0 sudo[174989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:01 compute-0 python3[174991]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:14:06 compute-0 podman[175004]: 2025-12-03 01:14:06.824136886 +0000 UTC m=+4.990111436 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 03 01:14:07 compute-0 podman[175103]: 2025-12-03 01:14:07.02788863 +0000 UTC m=+0.076489276 container create ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true)
Dec 03 01:14:07 compute-0 podman[175103]: 2025-12-03 01:14:06.994130773 +0000 UTC m=+0.042731479 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 03 01:14:07 compute-0 python3[174991]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
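[annotation] The PODMAN-CONTAINER-DEBUG line above is the config_data dict flattened into CLI flags: environment keys become --env, each volumes entry becomes --volume, healthcheck.test becomes --healthcheck-command, net: host becomes --network host, and so on. A minimal sketch of that mapping (function name and structure are illustrative; edpm_container_manage's real code is not shown in this log, and --label, --conmon-pidfile and the log options are omitted here):

    # config_to_podman_args.py -- sketch: flatten a config_data dict into podman create flags
    def podman_create_args(name, cfg):
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]          # KOLLA_CONFIG_STRATEGY, OS_ENDPOINT_TYPE
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if cfg.get("privileged") in (True, "true"):    # the log carries the string 'true'
            args += ["--privileged=True"]
        if "security_opt" in cfg:
            args += ["--security-opt", cfg["security_opt"]]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if "command" in cfg:
            args.append(cfg["command"])                # kolla_start
        return args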
Dec 03 01:14:07 compute-0 sudo[174989]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:07 compute-0 sudo[175290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agyrefjaiwqleytxhxdsfqoswwlwtjxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724447.417282-418-172041466534399/AnsiballZ_stat.py'
Dec 03 01:14:07 compute-0 sudo[175290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:08 compute-0 python3.9[175292]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:14:08 compute-0 sudo[175290]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:08 compute-0 sudo[175444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvsqfdtbcawbbbaozeqevcdsjoreizde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724448.3996294-427-30390361137547/AnsiballZ_file.py'
Dec 03 01:14:08 compute-0 sudo[175444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:08 compute-0 python3.9[175446]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:09 compute-0 sudo[175444]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:09 compute-0 podman[175568]: 2025-12-03 01:14:09.831228953 +0000 UTC m=+0.076917508 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:14:09 compute-0 sudo[175612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayonnixvhdnpwjlytzgveodgkjfweupq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724449.2859688-427-119060671743697/AnsiballZ_copy.py'
Dec 03 01:14:09 compute-0 sudo[175612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:10 compute-0 python3.9[175621]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724449.2859688-427-119060671743697/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:10 compute-0 sudo[175612]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:10 compute-0 sudo[175695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxiyhgtcurbuntiplahtzcubsragdwlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724449.2859688-427-119060671743697/AnsiballZ_systemd.py'
Dec 03 01:14:10 compute-0 sudo[175695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:10 compute-0 python3.9[175697]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:14:10 compute-0 systemd[1]: Reloading.
Dec 03 01:14:11 compute-0 systemd-sysv-generator[175728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:14:11 compute-0 systemd-rc-local-generator[175723]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:14:11 compute-0 sudo[175695]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:11 compute-0 sudo[175806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqawjrvdrjeylwkcgpnajzjgfdharcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724449.2859688-427-119060671743697/AnsiballZ_systemd.py'
Dec 03 01:14:11 compute-0 sudo[175806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:12 compute-0 python3.9[175808]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:14:12 compute-0 systemd[1]: Reloading.
Dec 03 01:14:12 compute-0 podman[175810]: 2025-12-03 01:14:12.282971945 +0000 UTC m=+0.105493919 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:14:12 compute-0 systemd-rc-local-generator[175882]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:14:12 compute-0 systemd-sysv-generator[175886]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:14:12 compute-0 podman[175811]: 2025-12-03 01:14:12.363283748 +0000 UTC m=+0.181208483 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:14:12 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 03 01:14:12 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec 03 01:14:12 compute-0 podman[175892]: 2025-12-03 01:14:12.745605489 +0000 UTC m=+0.177149659 container init ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + sudo -E kolla_set_configs
Dec 03 01:14:12 compute-0 podman[175892]: 2025-12-03 01:14:12.783307036 +0000 UTC m=+0.214851196 container start ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:14:12 compute-0 podman[175892]: ceilometer_agent_ipmi
Dec 03 01:14:12 compute-0 sudo[175914]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:14:12 compute-0 sudo[175914]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:14:12 compute-0 sudo[175914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:14:12 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 03 01:14:12 compute-0 sudo[175806]: pam_unix(sudo:session): session closed for user root
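[annotation] The sequence just completed is the standard unit rollout: copy the generated edpm_ceilometer_agent_ipmi.service file into /etc/systemd/system, daemon-reload so systemd picks it up, then restart with enabled=True, which triggers the container start visible in the surrounding lines. The shell-level equivalent of those two ansible-systemd calls would be roughly (a sketch, not the module's implementation):

    # restart_unit.py -- approximate systemctl equivalent of the ansible-systemd tasks above
    import subprocess

    def rollout(unit="edpm_ceilometer_agent_ipmi.service"):
        subprocess.run(["systemctl", "daemon-reload"], check=True)   # daemon_reload=True
        subprocess.run(["systemctl", "enable", unit], check=True)    # enabled=True
        subprocess.run(["systemctl", "restart", unit], check=True)   # state=restarted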
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Validating config file
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying service configuration files
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: INFO:__main__:Writing out command to execute
Dec 03 01:14:12 compute-0 sudo[175914]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: ++ cat /run_command
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + ARGS=
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + sudo kolla_copy_cacerts
Dec 03 01:14:12 compute-0 sudo[175938]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:14:12 compute-0 sudo[175938]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:14:12 compute-0 sudo[175938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:14:12 compute-0 podman[175915]: 2025-12-03 01:14:12.906775889 +0000 UTC m=+0.106558829 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:14:12 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-445d4fc2e44b3551.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:14:12 compute-0 sudo[175938]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:12 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-445d4fc2e44b3551.service: Failed with result 'exit-code'.
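
The failed unit ba9453…-445d4fc2e44b3551.service is podman's per-container healthcheck unit, not the container itself: the health_status=starting and health_failing_streak=1 fields in the podman record above show the very first probe (/openstack/healthcheck ipmi) ran while the agent was still starting and exited 1. A single failing streak during startup is normally transient and only matters if the streak keeps growing. To re-check once the agent settles (the exact JSON layout of the health state varies across podman versions):

    podman healthcheck run ceilometer_agent_ipmi && echo healthy
    podman inspect ceilometer_agent_ipmi | jq '.[0].State'
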
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + [[ ! -n '' ]]
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + . kolla_extend_start
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + umask 0022
Dec 03 01:14:12 compute-0 ceilometer_agent_ipmi[175908]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
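
The "+" lines are bash xtrace from kolla_start: the command is read from /run_command, ARGS is empty, the per-image hook kolla_extend_start is sourced, and the shell finally execs ceilometer-polling so it becomes the container's main process. Reconstructed from the trace (a sketch, not the actual script):

    CMD=$(cat /run_command)   # '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
    ARGS=""
    [[ ! -n "$ARGS" ]] && . kolla_extend_start   # per-image hook, per the trace
    echo "Running command: '${CMD}'"
    umask 0022
    exec $CMD $ARGS           # replaces the shell; all logs go to stdout
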
Dec 03 01:14:13 compute-0 sudo[176086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvbvcbgrekbrofzavhqzfvuejyjdgeit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724453.2168758-453-281163051790507/AnsiballZ_container_config_data.py'
Dec 03 01:14:13 compute-0 sudo[176086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.697 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.697 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.697 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
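
Everything between the two rows of asterisks is oslo.config's log_opt_values dump, emitted at startup because log_options=True. Values are merged in the order listed at the top of the dump: built-in defaults, then /etc/ceilometer/ceilometer.conf, then any *.conf under /etc/ceilometer/ceilometer.conf.d in lexical filename order (which is why the host-specific file copied earlier carries an 02- prefix), with later sources overriding earlier ones and command-line arguments winning overall. Options declared secret (coordination.backend_url, notification.messaging_urls, publisher.telemetry_secret, the rgw keys, vmware.host_password) are masked as ****. A quick way to see which file sets a given option, e.g. debug:

    grep -Hn '^debug' /etc/ceilometer/ceilometer.conf \
        /etc/ceilometer/ceilometer.conf.d/*.conf
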
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.732 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.734 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.736 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
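
These three INFO lines only record that polling.pollsters_definitions_dirs was scanned and /etc/ceilometer/pollsters.d contained no dynamic pollster YAML files; with no dynamic definitions deployed that is expected, not an error. Trivial to confirm:

    ls -la /etc/ceilometer/pollsters.d   # expected to be empty here
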
Dec 03 01:14:13 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.842 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpav3hmwj4/privsep.sock']
Dec 03 01:14:13 compute-0 sudo[176093]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpav3hmwj4/privsep.sock
Dec 03 01:14:13 compute-0 python3.9[176088]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec 03 01:14:13 compute-0 sudo[176093]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:14:13 compute-0 sudo[176093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:14:13 compute-0 sudo[176086]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:14 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 03 01:14:14 compute-0 sudo[176093]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.493 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.494 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpav3hmwj4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.379 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.387 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.391 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.391 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
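
This block is oslo.privsep bootstrapping: the unprivileged agent (uid 42405) uses sudo to run ceilometer-rootwrap, which launches privsep-helper; the helper connects back over /tmp/tmpav3hmwj4/privsep.sock and then confines itself to the logged capability set (CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH, CAP_FOWNER, CAP_NET_ADMIN, CAP_SYS_ADMIN effective and permitted, nothing inheritable). The kernel's "deprecated v2 capabilities" warning typically just means the capset() call used the older v2 header format; it is noise here, not a failure. The set can be checked from /proc (pid 19 is the in-container pid; map it to the host pid first):

    grep -E '^Cap(Inh|Prm|Eff)' /proc/19/status
    # The six capabilities above correspond to mask 0x20100f
    # (bits 0,1,2,3 plus CAP_NET_ADMIN=12 and CAP_SYS_ADMIN=21):
    capsh --decode=000000000020100f
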
Dec 03 01:14:14 compute-0 sudo[176247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcpxavuaupalhfubkiilwfapkvqpkmms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724454.158307-462-158656295954144/AnsiballZ_container_config_hash.py'
Dec 03 01:14:14 compute-0 sudo[176247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.625 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.626 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.627 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.630 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.630 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
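
The net result of the skips above is an IPMI agent with nothing to poll: the sensor pollsters (hardware.ipmi.current/fan/temperature/voltage) bail out because ipmitool cannot reach a local BMC on this host, and the Intel Node Manager pollsters (hardware.ipmi.node.*) fail to instantiate with the object.__new__() TypeError, which here is most likely another symptom of the missing local IPMI support rather than an independent fault. The WARNING means the agent keeps running but publishes no samples. A quick host-side check for a usable IPMI path:

    # On a VM both checks normally fail, matching the skips above.
    ls /dev/ipmi0 /dev/ipmi/0 /dev/ipmidev/0 2>/dev/null || echo "no IPMI device node"
    ipmitool mc info || echo "ipmitool cannot reach a local BMC"
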
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.635 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.635 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.635 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.642 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.642 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.643 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.644 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.645 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.658 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.658 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.658 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.660 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.660 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.660 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.661 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.661 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.661 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.663 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.663 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.663 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.665 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.665 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.665 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.666 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.666 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.666 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.669 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.669 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.669 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.672 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.672 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.672 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.674 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.674 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.674 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.676 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.676 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.676 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.685 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.685 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.685 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 03 01:14:14 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.688 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 03 01:14:14 compute-0 python3.9[176251]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:14:14 compute-0 sudo[176247]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:15 compute-0 sudo[176403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgdbpkzihvxnchdfkwxdfyayorparqkv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724455.1479692-472-71440562446241/AnsiballZ_edpm_container_manage.py'
Dec 03 01:14:15 compute-0 sudo[176403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:15 compute-0 python3[176405]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:14:21 compute-0 podman[176419]: 2025-12-03 01:14:21.818314263 +0000 UTC m=+5.915580710 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 03 01:14:22 compute-0 podman[176617]: 2025-12-03 01:14:22.028729154 +0000 UTC m=+0.069981273 container create 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., release=1214.1726694543, version=9.4, config_id=edpm, build-date=2024-09-18T21:23:30, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:14:22 compute-0 podman[176617]: 2025-12-03 01:14:21.988503786 +0000 UTC m=+0.029755965 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 03 01:14:22 compute-0 python3[176405]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec 03 01:14:22 compute-0 sudo[176403]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:22 compute-0 sudo[176805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzeiprzddjlwknqlxqfascxbjhaabpnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724462.4499629-480-250732330071632/AnsiballZ_stat.py'
Dec 03 01:14:22 compute-0 sudo[176805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:23 compute-0 python3.9[176807]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:14:23 compute-0 sudo[176805]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:23 compute-0 sudo[176959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhkxotuquyzctlpaprarfugtubsknkzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724463.566711-489-209172908519700/AnsiballZ_file.py'
Dec 03 01:14:23 compute-0 sudo[176959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:24 compute-0 python3.9[176961]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:24 compute-0 sudo[176959]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:24 compute-0 podman[177064]: 2025-12-03 01:14:24.874840966 +0000 UTC m=+0.132349802 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Dec 03 01:14:24 compute-0 sudo[177131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swjabmqwgopapjpzjmcdpzeiikvalggy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724464.2897983-489-93043377665584/AnsiballZ_copy.py'
Dec 03 01:14:24 compute-0 sudo[177131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:25 compute-0 python3.9[177133]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724464.2897983-489-93043377665584/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:25 compute-0 sudo[177131]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:25 compute-0 sudo[177207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyevcfysfffgrfnanowluugiqnvlgwao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724464.2897983-489-93043377665584/AnsiballZ_systemd.py'
Dec 03 01:14:25 compute-0 sudo[177207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:25 compute-0 python3.9[177209]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:14:25 compute-0 systemd[1]: Reloading.
Dec 03 01:14:25 compute-0 podman[177210]: 2025-12-03 01:14:25.824433766 +0000 UTC m=+0.082838924 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:14:25 compute-0 systemd-rc-local-generator[177259]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:14:25 compute-0 systemd-sysv-generator[177264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:14:26 compute-0 sudo[177207]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:26 compute-0 sudo[177343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfakrwqmmlrerurolywfzjophzosqndk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724464.2897983-489-93043377665584/AnsiballZ_systemd.py'
Dec 03 01:14:26 compute-0 sudo[177343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:27 compute-0 python3.9[177345]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:14:27 compute-0 systemd[1]: Reloading.
Dec 03 01:14:27 compute-0 systemd-rc-local-generator[177377]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:14:27 compute-0 systemd-sysv-generator[177381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:14:28 compute-0 systemd[1]: Starting kepler container...
Dec 03 01:14:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:14:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec 03 01:14:28 compute-0 podman[177393]: 2025-12-03 01:14:28.727451844 +0000 UTC m=+0.659474044 container init 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 01:14:28 compute-0 kepler[177408]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:14:28 compute-0 podman[177393]: 2025-12-03 01:14:28.770571133 +0000 UTC m=+0.702593313 container start 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, config_id=edpm, distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.774057       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.774272       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.774332       1 config.go:295] kernel version: 5.14
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.775209       1 power.go:78] Unable to obtain power, use estimate method
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.775252       1 redfish.go:169] failed to get redfish credential file path
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.775955       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.775975       1 power.go:79] using none to obtain power
Dec 03 01:14:28 compute-0 kepler[177408]: E1203 01:14:28.776002       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 03 01:14:28 compute-0 kepler[177408]: E1203 01:14:28.776040       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 03 01:14:28 compute-0 podman[177393]: kepler
Dec 03 01:14:28 compute-0 kepler[177408]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:14:28 compute-0 kepler[177408]: I1203 01:14:28.779166       1 exporter.go:84] Number of CPUs: 8
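
The two "failed to read int from file" warnings are benign on x86 guests: cpu0 cannot be hot-unplugged, so /sys/devices/system/cpu/cpu0/online simply does not exist, and Kepler goes on to count all 8 CPUs. A sketch of that convention, treating a missing per-CPU online file as "online" (illustrative, not Kepler's code):

    import glob
    import os
    import re

    def online_cpus(sysfs="/sys/devices/system/cpu"):
        count = 0
        for path in glob.glob(os.path.join(sysfs, "cpu[0-9]*")):
            if not re.fullmatch(r"cpu\d+", os.path.basename(path)):
                continue
            try:
                with open(os.path.join(path, "online")) as f:
                    count += int(f.read().strip() == "1")
            except FileNotFoundError:
                # cpu0 has no "online" attribute because it cannot be offlined
                count += 1
        return count

    print(online_cpus())
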
Dec 03 01:14:28 compute-0 systemd[1]: Started kepler container.
Dec 03 01:14:28 compute-0 sudo[177343]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:28 compute-0 podman[177418]: 2025-12-03 01:14:28.884811087 +0000 UTC m=+0.095694985 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, config_id=edpm, distribution-scope=public, release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0)
Dec 03 01:14:28 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-416ed2eac0816b80.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:14:28 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-416ed2eac0816b80.service: Failed with result 'exit-code'.
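
The failed transient healthcheck unit above is the first podman health probe, fired while the exporter was still binding its socket (health_status=starting, failing streak 1 two lines up); the streak clears once /openstack/healthcheck kepler can reach the exporter. To gate on settled health rather than the first probe, polling podman inspect works (this assumes the healthcheck defined in config_data above):

    import subprocess
    import time

    def health(name="kepler"):
        # standard podman/docker inspect template for healthcheck state
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    status = health()
    for _ in range(30):
        if status == "healthy":
            break
        time.sleep(2)
        status = health()
    print(status)
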
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.437410       1 watcher.go:83] Using in cluster k8s config
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.437472       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 03 01:14:29 compute-0 kepler[177408]: E1203 01:14:29.438090       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.445637       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.445718       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.453807       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.453872       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.467055       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.467115       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.467142       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.479864       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.479920       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.479930       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.479938       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.479948       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.479964       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
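
With no RAPL, ACPI, or Redfish power source reachable from the guest ("using none to obtain power" above), Kepler falls back to estimation: a Ratio model for per-process power and linear regressors (SGDRegressorTrainer) for node platform and component power, selected against the reported machine spec. The prediction itself is a dot product over the listed feature names plus an intercept; a toy sketch with invented weights, not Kepler's trained parameters:

    # Hypothetical AbsPower-style linear predictor.
    weights = {"bpf_cpu_time_ms": 0.012}   # watts per unit of feature (made up)
    intercept = 28.0                        # idle platform watts (made up)

    def predict_power(features):
        return intercept + sum(weights.get(k, 0.0) * v
                               for k, v in features.items())

    print(predict_power({"bpf_cpu_time_ms": 950.0}))
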
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.480085       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.480128       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.480203       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.480241       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.480447       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 03 01:14:29 compute-0 kepler[177408]: I1203 01:14:29.481180       1 exporter.go:208] Started Kepler in 707.50669ms
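
With the exporter listening on 0.0.0.0:8888 and the container on host networking, the Prometheus text endpoint is reachable from the host directly. A quick smoke test (the kepler_ prefix filter is illustrative, not an exhaustive metric list):

    import urllib.request

    with urllib.request.urlopen("http://localhost:8888/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("kepler_"):
                print(line)
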
Dec 03 01:14:29 compute-0 sudo[177598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwtankekfbetypfxruokkvfegeltxurb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724469.080861-513-160652851378999/AnsiballZ_systemd.py'
Dec 03 01:14:29 compute-0 sudo[177598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:29 compute-0 podman[158098]: time="2025-12-03T01:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:14:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18539 "" "Go-http-client/1.1"
Dec 03 01:14:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2985 "" "Go-http-client/1.1"
Dec 03 01:14:29 compute-0 python3.9[177602]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:14:29 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 03 01:14:29 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:29.994 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 03 01:14:30 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.097 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 03 01:14:30 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.098 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 03 01:14:30 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.098 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 03 01:14:30 compute-0 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.113 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
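
The four cotyledon lines above are its normal graceful stop: the master catches SIGTERM, forwards it to each service child, waits for them, then logs "Shutdown finish". Schematically (a sketch, not cotyledon's implementation):

    import os
    import signal
    import sys

    children = []  # pids of forked service workers, e.g. AgentManager(0)

    def _sigterm(signum, frame):
        for pid in children:
            os.kill(pid, signal.SIGTERM)   # "Killing services with signal SIGTERM"
        for pid in children:
            os.waitpid(pid, 0)             # "Waiting services to terminate"
        sys.exit(0)                        # "Shutdown finish"

    signal.signal(signal.SIGTERM, _sigterm)
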
Dec 03 01:14:30 compute-0 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:14:30 compute-0 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Consumed 2.201s CPU time.
Dec 03 01:14:30 compute-0 podman[177606]: 2025-12-03 01:14:30.273114348 +0000 UTC m=+0.363860095 container died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 01:14:30 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-445d4fc2e44b3551.timer: Deactivated successfully.
Dec 03 01:14:30 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec 03 01:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-userdata-shm.mount: Deactivated successfully.
Dec 03 01:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889-merged.mount: Deactivated successfully.
Dec 03 01:14:30 compute-0 podman[177606]: 2025-12-03 01:14:30.611876828 +0000 UTC m=+0.702622565 container cleanup ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:14:30 compute-0 podman[177606]: ceilometer_agent_ipmi
Dec 03 01:14:30 compute-0 podman[177632]: ceilometer_agent_ipmi
Dec 03 01:14:30 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 03 01:14:30 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 03 01:14:30 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 03 01:14:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 03 01:14:30 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec 03 01:14:30 compute-0 podman[177645]: 2025-12-03 01:14:30.978119848 +0000 UTC m=+0.221705478 container init ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + sudo -E kolla_set_configs
Dec 03 01:14:31 compute-0 podman[177645]: 2025-12-03 01:14:31.01846145 +0000 UTC m=+0.262047070 container start ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 03 01:14:31 compute-0 podman[177645]: ceilometer_agent_ipmi
Dec 03 01:14:31 compute-0 sudo[177665]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:14:31 compute-0 sudo[177665]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:14:31 compute-0 sudo[177665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:14:31 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 03 01:14:31 compute-0 sudo[177598]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Validating config file
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying service configuration files
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: INFO:__main__:Writing out command to execute
Dec 03 01:14:31 compute-0 sudo[177665]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: ++ cat /run_command
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + ARGS=
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + sudo kolla_copy_cacerts
Dec 03 01:14:31 compute-0 sudo[177682]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:14:31 compute-0 sudo[177682]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:14:31 compute-0 sudo[177682]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:14:31 compute-0 podman[177666]: 2025-12-03 01:14:31.146945963 +0000 UTC m=+0.105954793 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:14:31 compute-0 sudo[177682]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:31 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:14:31 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Failed with result 'exit-code'.
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + [[ ! -n '' ]]
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + . kolla_extend_start
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + umask 0022
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
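
The xtrace above is kolla_start's standard sequence: kolla_set_configs applies the mounted /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy, kolla_copy_cacerts refreshes the container trust store, the command is read back from /run_command, and the shell exec's ceilometer-polling. The copy strategy amounts to roughly this; field names follow kolla's config.json layout, and ownership/permission handling is elided:

    import json
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    # COPY_ALWAYS: every destination is re-copied from its source on
    # each start ("Deleting ..." then "Copying ..." in the lines above).
    for entry in cfg.get("config_files", []):
        shutil.copyfile(entry["source"], entry["dest"])
        # kolla then applies entry["owner"] / entry["perm"]; omitted here.

    print(cfg["command"])  # what ends up in /run_command
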
Dec 03 01:14:31 compute-0 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:14:31 compute-0 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:14:31 compute-0 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:14:31 compute-0 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:14:31 compute-0 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:14:31 compute-0 sudo[177840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdytukjwiqjtkwwwthbmqugrmnfiydgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724471.3613174-521-65552141621237/AnsiballZ_systemd.py'
Dec 03 01:14:31 compute-0 sudo[177840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:31 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.009 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.012 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.014 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.045 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpgdvpqcy1/privsep.sock']
Dec 03 01:14:32 compute-0 sudo[177847]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpgdvpqcy1/privsep.sock
Dec 03 01:14:32 compute-0 sudo[177847]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:14:32 compute-0 sudo[177847]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:14:32 compute-0 python3.9[177842]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:14:32 compute-0 systemd[1]: Stopping kepler container...
Dec 03 01:14:32 compute-0 kepler[177408]: I1203 01:14:32.490864       1 exporter.go:218] Received shutdown signal
Dec 03 01:14:32 compute-0 kepler[177408]: I1203 01:14:32.491674       1 exporter.go:226] Exiting...
Dec 03 01:14:32 compute-0 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec 03 01:14:32 compute-0 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Consumed 1.010s CPU time.
Dec 03 01:14:32 compute-0 podman[177854]: 2025-12-03 01:14:32.69627211 +0000 UTC m=+0.292990347 container died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 01:14:32 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-416ed2eac0816b80.timer: Deactivated successfully.
Dec 03 01:14:32 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec 03 01:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-userdata-shm.mount: Deactivated successfully.
Dec 03 01:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-56bb532fcb66b2740ea57176a30adf601274f50f260afcb2d3f32777dc3ac537-merged.mount: Deactivated successfully.
Dec 03 01:14:32 compute-0 podman[177854]: 2025-12-03 01:14:32.75117191 +0000 UTC m=+0.347890147 container cleanup 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 01:14:32 compute-0 podman[177854]: kepler
Dec 03 01:14:32 compute-0 sudo[177847]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.755 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.756 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgdvpqcy1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.625 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.633 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.638 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.638 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 03 01:14:32 compute-0 podman[177885]: kepler
Dec 03 01:14:32 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec 03 01:14:32 compute-0 systemd[1]: Stopped kepler container.
Dec 03 01:14:32 compute-0 systemd[1]: Starting kepler container...
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.855 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.856 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.858 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.858 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.860 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.860 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.860 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.861 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.861 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.868 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.868 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.868 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.871 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.871 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.871 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.899 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.899 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.899 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.908 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.908 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.908 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.912 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.912 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.913 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.913 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
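[editor's note] The block above is oslo.config's log_opt_values() dump, emitted at DEBUG when the agent starts; options registered with secret=True (transport_url, publisher.telemetry_secret, the rgw keys, vmware.host_password) are printed as ****. A minimal sketch of that masking behavior, assuming oslo.config is installed; the option names below are illustrative, not ceilometer's real set:

    # Minimal sketch of oslo.config's secret masking. Option names are
    # illustrative, not ceilometer's actual registration.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    OPTS = [
        cfg.StrOpt('host_ip', default='127.0.0.1'),
        cfg.StrOpt('host_password', secret=True),  # rendered as **** below
    ]
    CONF = cfg.ConfigOpts()
    CONF.register_opts(OPTS, group='vmware')
    CONF(args=[])
    # Emits one "vmware.<opt> = <value>" DEBUG line per option, masking the
    # secret one, then the closing row of asterisks seen in the log above.
    CONF.log_opt_values(LOG, logging.DEBUG)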
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.913 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 03 01:14:32 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.918 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
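[editor's note] The "Config file" line shows the parsed polling definition for the IPMI agent: one 'pollsters' source polling hardware.* meters every 120 seconds (polling.cfg_file = polling.yaml above). A sketch of YAML that parses to exactly that dict, assuming PyYAML:

    # Sketch: a polling.yaml that load_config would parse into the dict
    # logged above. Assumes PyYAML.
    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - 'hardware.*'
    """
    assert yaml.safe_load(POLLING_YAML) == {
        'sources': [{'name': 'pollsters', 'interval': 120,
                     'meters': ['hardware.*']}]}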
Dec 03 01:14:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:14:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec 03 01:14:33 compute-0 podman[177898]: 2025-12-03 01:14:33.023427995 +0000 UTC m=+0.161130350 container init 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 03 01:14:33 compute-0 kepler[177915]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:14:33 compute-0 podman[177898]: 2025-12-03 01:14:33.053710334 +0000 UTC m=+0.191412679 container start 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 01:14:33 compute-0 podman[177898]: kepler
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.062674       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.062881       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.062906       1 config.go:295] kernel version: 5.14
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.063625       1 power.go:78] Unable to obtain power, use estimate method
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.063669       1 redfish.go:169] failed to get redfish credential file path
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.064412       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.064451       1 power.go:79] using none to obtain power
Dec 03 01:14:33 compute-0 kepler[177915]: E1203 01:14:33.064477       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 03 01:14:33 compute-0 kepler[177915]: E1203 01:14:33.064508       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 03 01:14:33 compute-0 systemd[1]: Started kepler container.
Dec 03 01:14:33 compute-0 kepler[177915]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.067864       1 exporter.go:84] Number of CPUs: 8
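[editor's note] The two "failed to read int from file" warnings are benign on most systems: cpu0 usually cannot be hot-plugged, so /sys/devices/system/cpu/cpu0/online does not exist, and the exporter still arrives at the right count (8 CPUs above). The GPU errors are likewise expected here, since ENABLE_GPU=true but this VM exposes no GPU device. A sketch of the safer CPU count, reading the aggregate online mask instead of per-cpu files (Linux sysfs assumed):

    # Sketch: derive the online CPU count from the aggregate sysfs mask
    # rather than per-cpu 'online' files (cpu0 typically has none).
    def parse_cpu_mask(mask: str) -> int:
        count = 0
        for part in mask.strip().split(','):
            lo, _, hi = part.partition('-')
            count += (int(hi) - int(lo) + 1) if hi else 1
        return count

    with open('/sys/devices/system/cpu/online') as f:
        print(parse_cpu_mask(f.read()))  # e.g. "0-7" -> 8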
Dec 03 01:14:33 compute-0 sudo[177840]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:33 compute-0 podman[177925]: 2025-12-03 01:14:33.149191671 +0000 UTC m=+0.083252585 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Dec 03 01:14:33 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:14:33 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Failed with result 'exit-code'.
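[editor's note] The failed 96795…-691e7a48be3cc627.service unit is podman's transient per-run healthcheck unit, not the kepler container itself: the first probe ran while the exporter was still starting (health_status=starting, health_failing_streak=1 above), so that one-shot unit exited non-zero while the container kept running. A sketch for reading the recorded health state afterwards, assuming podman on PATH:

    # Sketch: read a container's recorded health state via podman's
    # Go-template inspect output. Container name taken from the log above.
    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}', 'kepler'],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health['Status'], health['FailingStreak'])  # e.g. "starting", 1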
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.652809       1 watcher.go:83] Using in cluster k8s config
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.652843       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 03 01:14:33 compute-0 kepler[177915]: E1203 01:14:33.652885       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.660882       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.660944       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 03 01:14:33 compute-0 sudo[178097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jflosmfmazwbcenajjtogzkayrpejncu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724473.297257-529-122709481790461/AnsiballZ_find.py'
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.670120       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.670181       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
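[editor's note] The "Ratio Power Model" lines describe Kepler's fallback when no hardware power meter is available ("using none to obtain power" above): node power is split across processes in proportion to a utilization feature, here bpf_cpu_time_ms (repeated once per estimated component) plus gpu_compute_util. A toy sketch of that proportional split, illustrative only and not Kepler's actual implementation:

    # Toy sketch of a ratio power model: split node power across processes
    # in proportion to their CPU-time feature. Not Kepler code.
    def ratio_power(node_power_w: float, cpu_time_ms: dict) -> dict:
        total = sum(cpu_time_ms.values()) or 1.0
        return {pid: node_power_w * t / total
                for pid, t in cpu_time_ms.items()}

    print(ratio_power(40.0, {101: 300.0, 202: 100.0}))
    # -> {101: 30.0, 202: 10.0}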
Dec 03 01:14:33 compute-0 sudo[178097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.684613       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.684668       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.684690       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698297       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698351       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698360       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698369       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698379       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698395       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698590       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698648       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698690       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698724       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.698943       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 03 01:14:33 compute-0 kepler[177915]: I1203 01:14:33.699643       1 exporter.go:208] Started Kepler in 637.374592ms
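[editor's note] With the exporter listening on 0.0.0.0:8888, the registered Process/Container/VM/Node metrics above are served as Prometheus text. A minimal scrape with only the standard library; /metrics is the conventional Prometheus path and an assumption here:

    # Sketch: scrape the exporter started above and print kepler_* samples.
    # The /metrics path is the Prometheus convention (assumed).
    from urllib.request import urlopen

    with urlopen('http://localhost:8888/metrics', timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith('kepler_'):
                print(line)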
Dec 03 01:14:33 compute-0 python3.9[178107]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:14:33 compute-0 sudo[178097]: pam_unix(sudo:session): session closed for user root
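[editor's note] From here the journal interleaves Zuul's Ansible run; each sudo/AnsiballZ pair is one task. The find task above lists only the immediate subdirectories of /var/lib/openstack/healthchecks/ (file_type=directory, recurse=False, hidden=False). A stdlib equivalent of that specific invocation:

    # Sketch: what the ansible.builtin.find call above computes, in stdlib
    # terms: immediate, non-hidden subdirectories only.
    import os

    root = '/var/lib/openstack/healthchecks/'
    dirs = [e.path for e in os.scandir(root)
            if e.is_dir() and not e.name.startswith('.')]
    print(dirs)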
Dec 03 01:14:35 compute-0 sudo[178259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqjmrpogkzgdmwsbxtenafuxakupotfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724474.4231439-539-237084828145336/AnsiballZ_podman_container_info.py'
Dec 03 01:14:35 compute-0 sudo[178259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:35 compute-0 python3.9[178261]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 03 01:14:35 compute-0 sudo[178259]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:36 compute-0 sudo[178425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsimavlmndftjlgwtauuqsfmrsbfdzpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724475.9688795-547-228660227129492/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:36 compute-0 sudo[178425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:36 compute-0 python3.9[178427]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:37 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:14:37 compute-0 podman[178428]: 2025-12-03 01:14:37.185972792 +0000 UTC m=+0.164867375 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 01:14:37 compute-0 podman[178428]: 2025-12-03 01:14:37.22372537 +0000 UTC m=+0.202619893 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 03 01:14:37 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:14:37 compute-0 sudo[178425]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:38 compute-0 sudo[178607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idpuwtzoncvqierlzxufpjeqejcmdxep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724477.6313856-555-274870197409362/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:38 compute-0 sudo[178607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:38 compute-0 python3.9[178609]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:38 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:14:38 compute-0 podman[178610]: 2025-12-03 01:14:38.64904596 +0000 UTC m=+0.151104468 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 01:14:38 compute-0 podman[178610]: 2025-12-03 01:14:38.683003243 +0000 UTC m=+0.185061701 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 01:14:38 compute-0 sudo[178607]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:38 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:14:39 compute-0 sudo[178788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrfpngeeywyswpdnqwqhmfjwhreaiezn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724479.0445457-563-97516559115183/AnsiballZ_file.py'
Dec 03 01:14:39 compute-0 sudo[178788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:39 compute-0 python3.9[178790]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:39 compute-0 sudo[178788]: pam_unix(sudo:session): session closed for user root
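[editor's note] The tasks above show the per-container healthcheck-directory pattern: exec `id -u` and `id -g` inside the container (ovn_controller runs as root, hence owner=0/group=0 in the file task), then set the host-side mount /var/lib/openstack/healthchecks/<name> to that uid/gid and mode 0700 so the container's healthcheck script can use it. A sketch of the same sequence, assuming podman on PATH:

    # Sketch of the pattern above: discover the container user's uid/gid,
    # then chown/chmod the host-side healthcheck dir to match.
    import os
    import subprocess

    def container_id(name: str, flag: str) -> int:
        out = subprocess.run(['podman', 'exec', name, 'id', flag],
                             capture_output=True, text=True,
                             check=True).stdout
        return int(out)

    name = 'ovn_controller'
    uid, gid = container_id(name, '-u'), container_id(name, '-g')
    path = f'/var/lib/openstack/healthchecks/{name}'
    os.chown(path, uid, gid)
    os.chmod(path, 0o700)  # mode=0700 as in the file task above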
Dec 03 01:14:40 compute-0 sudo[178955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytgbawpanpyvnpgfikspthvwuzmjryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724480.3197873-572-88725129667647/AnsiballZ_podman_container_info.py'
Dec 03 01:14:40 compute-0 sudo[178955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:40 compute-0 podman[178914]: 2025-12-03 01:14:40.871463063 +0000 UTC m=+0.126386836 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:14:41 compute-0 python3.9[178966]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 03 01:14:41 compute-0 sudo[178955]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:42 compute-0 sudo[179128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffgoxukjivutazcydsjtedljllooxidw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724481.5381978-580-197716948536857/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:42 compute-0 sudo[179128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:42 compute-0 python3.9[179130]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:42 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:14:42 compute-0 podman[179131]: 2025-12-03 01:14:42.46035136 +0000 UTC m=+0.130921633 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:14:42 compute-0 podman[179131]: 2025-12-03 01:14:42.495969539 +0000 UTC m=+0.166539752 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 01:14:42 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:14:42 compute-0 sudo[179128]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:42 compute-0 podman[179163]: 2025-12-03 01:14:42.70640932 +0000 UTC m=+0.110938452 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:14:42 compute-0 podman[179164]: 2025-12-03 01:14:42.781514766 +0000 UTC m=+0.182307113 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 03 01:14:43 compute-0 sudo[179356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfkjyyfrpuflquczklcxydlnspyrewp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724482.851324-588-182684497239318/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:43 compute-0 sudo[179356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:43 compute-0 python3.9[179358]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:43 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:14:43 compute-0 podman[179359]: 2025-12-03 01:14:43.848815466 +0000 UTC m=+0.159766871 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:14:43 compute-0 podman[179359]: 2025-12-03 01:14:43.883867689 +0000 UTC m=+0.194819104 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 03 01:14:43 compute-0 sudo[179356]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:43 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:14:44 compute-0 sudo[179538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-einmoyjezfshgnyxakywfmhktvpsmqto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724484.2483804-596-8878470515321/AnsiballZ_file.py'
Dec 03 01:14:44 compute-0 sudo[179538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:45 compute-0 python3.9[179540]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:45 compute-0 sudo[179538]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:46 compute-0 sudo[179692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdrrupcpipkbfkbswplewwbwufiyxnfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724485.5104153-605-112659237467362/AnsiballZ_podman_container_info.py'
Dec 03 01:14:46 compute-0 sudo[179692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:46 compute-0 python3.9[179694]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 03 01:14:46 compute-0 sudo[179692]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:47 compute-0 sudo[179857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhfukmemxsddhamagancywwkkezcrba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724486.8137183-613-33483525876200/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:47 compute-0 sudo[179857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:47 compute-0 python3.9[179859]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:47 compute-0 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec 03 01:14:47 compute-0 podman[179860]: 2025-12-03 01:14:47.82066255 +0000 UTC m=+0.157683583 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:14:47 compute-0 podman[179860]: 2025-12-03 01:14:47.852833527 +0000 UTC m=+0.189854590 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:14:47 compute-0 sudo[179857]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:47 compute-0 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec 03 01:14:48 compute-0 sudo[180039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llbkgzscavkmucvkyiubklovqpijlvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724488.216497-621-162744810323450/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:48 compute-0 sudo[180039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:49 compute-0 python3.9[180041]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:49 compute-0 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec 03 01:14:49 compute-0 podman[180042]: 2025-12-03 01:14:49.194227049 +0000 UTC m=+0.141617405 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:14:49 compute-0 podman[180042]: 2025-12-03 01:14:49.226690794 +0000 UTC m=+0.174081180 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:14:49 compute-0 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec 03 01:14:49 compute-0 sudo[180039]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:50 compute-0 sudo[180223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejrnbiyqzyrvhtpujqizdcgbjqikpele ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724489.5739062-629-39374109427721/AnsiballZ_file.py'
Dec 03 01:14:50 compute-0 sudo[180223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:50 compute-0 python3.9[180225]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:50 compute-0 sudo[180223]: pam_unix(sudo:session): session closed for user root
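Note: the node_exporter tasks above show the full four-step cycle that edpm_ansible repeats for every telemetry container in this section (ceilometer_agent_compute before it; podman_exporter, openstack_network_exporter, ceilometer_agent_ipmi and kepler below): podman_container_info to confirm the container exists, two podman_container_exec calls running `id -u` and `id -g` to resolve the container's runtime UID/GID, then ansible.builtin.file to recursively hand the bind-mounted healthcheck directory to that user with mode 0700 (0:0 here since node_exporter runs as root; 42405 is the ceilometer user). A rough equivalent of one cycle, assuming root on the host; fix_healthcheck_dir is a hypothetical helper, not a deployment component:

    import subprocess

    def fix_healthcheck_dir(container: str, path: str) -> None:
        # Resolve the UID/GID of the container's default user, as the
        # podman_container_exec tasks above do with `id -u` / `id -g`.
        def _id(flag: str) -> str:
            return subprocess.run(["podman", "exec", container, "id", flag],
                                  capture_output=True, text=True,
                                  check=True).stdout.strip()
        uid, gid = _id("-u"), _id("-g")
        # Mirror the ansible.builtin.file task: container's owner/group,
        # mode 0700, applied recursively.
        subprocess.run(["chown", "-R", f"{uid}:{gid}", path], check=True)
        subprocess.run(["chmod", "-R", "700", path], check=True)

    fix_healthcheck_dir("node_exporter",
                        "/var/lib/openstack/healthchecks/node_exporter")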
Dec 03 01:14:51 compute-0 sudo[180375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgxcsckovafzptfxqhsauzscjpwpbalf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724490.7615583-638-215453752865500/AnsiballZ_podman_container_info.py'
Dec 03 01:14:51 compute-0 sudo[180375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:51 compute-0 python3.9[180377]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 03 01:14:51 compute-0 sudo[180375]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:52 compute-0 sudo[180540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esnzzndmmklginbbpszgzpdgtbsaxbvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724492.1022882-646-221033957495763/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:52 compute-0 sudo[180540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:52 compute-0 python3.9[180542]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:53 compute-0 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec 03 01:14:53 compute-0 podman[180543]: 2025-12-03 01:14:53.123460251 +0000 UTC m=+0.149985789 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:14:53 compute-0 podman[180543]: 2025-12-03 01:14:53.160886351 +0000 UTC m=+0.187411829 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:14:53 compute-0 sudo[180540]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:53 compute-0 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec 03 01:14:54 compute-0 sudo[180721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnxttssdmmjbcxruwulurrvhzdauucn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724493.528803-654-145504539596285/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:54 compute-0 sudo[180721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:54 compute-0 python3.9[180723]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:54 compute-0 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec 03 01:14:54 compute-0 podman[180724]: 2025-12-03 01:14:54.457586801 +0000 UTC m=+0.128824893 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:14:54 compute-0 podman[180724]: 2025-12-03 01:14:54.491524129 +0000 UTC m=+0.162762201 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:14:54 compute-0 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec 03 01:14:54 compute-0 sudo[180721]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:55 compute-0 sudo[180914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiyvhppetkydsxvejqhuiubcqyyhosth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724494.8266513-662-141905832288952/AnsiballZ_file.py'
Dec 03 01:14:55 compute-0 sudo[180914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:55 compute-0 podman[180877]: 2025-12-03 01:14:55.406277056 +0000 UTC m=+0.142150980 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Dec 03 01:14:55 compute-0 python3.9[180922]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:14:55 compute-0 sudo[180914]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:56 compute-0 sudo[181091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egiwqvmagvzdisndpzsfwawaksejnsvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724495.9355297-671-173228714302689/AnsiballZ_podman_container_info.py'
Dec 03 01:14:56 compute-0 podman[181051]: 2025-12-03 01:14:56.516050114 +0000 UTC m=+0.110956372 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:14:56 compute-0 sudo[181091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:56 compute-0 python3.9[181100]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 03 01:14:56 compute-0 sudo[181091]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:57 compute-0 sudo[181263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yflogtotsswbdzxrsbayxxsnhcngapxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724497.20583-679-40190531242784/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:57 compute-0 sudo[181263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:58 compute-0 python3.9[181265]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:58 compute-0 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec 03 01:14:58 compute-0 podman[181266]: 2025-12-03 01:14:58.191458573 +0000 UTC m=+0.164084739 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, version=9.6)
Dec 03 01:14:58 compute-0 podman[181266]: 2025-12-03 01:14:58.226229605 +0000 UTC m=+0.198855721 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec 03 01:14:58 compute-0 sudo[181263]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:58 compute-0 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec 03 01:14:59 compute-0 sudo[181447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpnrharfihxtzwpnofnaasbzkyvwduhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724498.5955043-687-142968144951926/AnsiballZ_podman_container_exec.py'
Dec 03 01:14:59 compute-0 sudo[181447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:14:59 compute-0 python3.9[181449]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:14:59 compute-0 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec 03 01:14:59 compute-0 podman[181450]: 2025-12-03 01:14:59.687808126 +0000 UTC m=+0.146693972 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public)
Dec 03 01:14:59 compute-0 podman[181450]: 2025-12-03 01:14:59.723877217 +0000 UTC m=+0.182763043 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 01:14:59 compute-0 podman[158098]: time="2025-12-03T01:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:14:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18535 "" "Go-http-client/1.1"
Dec 03 01:14:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2992 "" "Go-http-client/1.1"
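Note: the two GET lines are the podman system service (pid 158098) logging requests on its API socket; the client is prometheus-podman-exporter, which reaches the daemon through the CONTAINER_HOST=unix:///run/podman/podman.sock setting and socket bind-mount visible in its config_data above. A minimal sketch of hitting the same libpod endpoint, assuming local root access to the socket; UnixHTTPConnection is an illustrative helper, not part of the deployment:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over a UNIX socket, enough to talk to the libpod API.
        def __init__(self, sock_path: str):
            super().__init__("localhost")
            self._sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])  # field per the libpod list schema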
Dec 03 01:14:59 compute-0 sudo[181447]: pam_unix(sudo:session): session closed for user root
Dec 03 01:14:59 compute-0 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec 03 01:15:00 compute-0 sudo[181630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfmiaygldcqretbjmbuxclgnfimmlyxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724500.0869381-695-233470928027841/AnsiballZ_file.py'
Dec 03 01:15:00 compute-0 sudo[181630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:00 compute-0 python3.9[181632]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:00 compute-0 sudo[181630]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:01 compute-0 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:15:01 compute-0 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:15:01 compute-0 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:15:01 compute-0 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:15:01 compute-0 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
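Note: these errors come from the openstack-network-exporter probing OVS/OVN daemons through their appctl control sockets (the *.ctl files under the /run/openvswitch and /run/ovn paths bind-mounted into the container). On a compute node only ovn-controller and ovs-vswitchd run, so the ovn-northd and ovsdb-server lookups fail, and the dpif-netdev/pmd-* calls fail because no userspace (DPDK) datapath exists; in this topology they are expected and harmless. A quick check of which control sockets actually exist, assuming the host paths from the container's volume list above:

    from pathlib import Path

    # Each running daemon owns a <name>.<pid>.ctl unix socket in its run dir.
    for d in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        print(d, [p.name for p in Path(d).glob("*.ctl")])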
Dec 03 01:15:01 compute-0 anacron[59208]: Job `cron.daily' started
Dec 03 01:15:01 compute-0 anacron[59208]: Job `cron.daily' terminated
Dec 03 01:15:01 compute-0 podman[181756]: 2025-12-03 01:15:01.771204406 +0000 UTC m=+0.108190311 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:15:01 compute-0 sudo[181798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkctaustmfntdqcxmnyopiagxuweazkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724501.2689345-704-200066961581271/AnsiballZ_podman_container_info.py'
Dec 03 01:15:01 compute-0 sudo[181798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:01 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:15:01 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Failed with result 'exit-code'.
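Note: the failing unit ba9453...-64bdef25bfa2e2e5.service is the transient systemd unit podman creates to drive the ceilometer_agent_ipmi healthcheck; its exit status 1 is what produced health_status=starting with health_failing_streak=2 a few lines up, and the container stays in 'starting' until the check passes or the streak exhausts the configured retries. The check can also be run on demand. A sketch, assuming podman is on PATH:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # ('/openstack/healthcheck ipmi' here) once and mirrors its exit code.
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ceilometer_agent_ipmi"]).returncode
    print("healthy" if rc == 0 else f"check failed (rc={rc})")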
Dec 03 01:15:01 compute-0 python3.9[181805]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 03 01:15:02 compute-0 sudo[181798]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:03 compute-0 sudo[181968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvcgpzszybgdkzvkdosefqmcewxvyesh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724502.4833736-712-240167198537734/AnsiballZ_podman_container_exec.py'
Dec 03 01:15:03 compute-0 sudo[181968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:03 compute-0 python3.9[181970]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:15:03 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec 03 01:15:03 compute-0 podman[181971]: 2025-12-03 01:15:03.576052363 +0000 UTC m=+0.155822578 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 01:15:03 compute-0 podman[181971]: 2025-12-03 01:15:03.61163717 +0000 UTC m=+0.191407415 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:15:03 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:15:03 compute-0 sudo[181968]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:03 compute-0 podman[181986]: 2025-12-03 01:15:03.753230213 +0000 UTC m=+0.168337223 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, architecture=x86_64, release-0.7.12=, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 01:15:04 compute-0 sudo[182166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucbqspilynyzhfynitbyqkduwcfpmxvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724503.9590397-720-168753226708406/AnsiballZ_podman_container_exec.py'
Dec 03 01:15:04 compute-0 sudo[182166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:04 compute-0 python3.9[182168]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:15:04 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec 03 01:15:04 compute-0 podman[182169]: 2025-12-03 01:15:04.895026803 +0000 UTC m=+0.149271658 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 03 01:15:04 compute-0 podman[182169]: 2025-12-03 01:15:04.928977221 +0000 UTC m=+0.183222086 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 03 01:15:04 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:15:04 compute-0 sudo[182166]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:05 compute-0 sudo[182348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egpequzevppwvibizvwmqnnzeeygnzpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724505.2765975-728-223584267375641/AnsiballZ_file.py'
Dec 03 01:15:05 compute-0 sudo[182348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:05 compute-0 python3.9[182350]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:06 compute-0 sudo[182348]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:06 compute-0 sudo[182500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsbzxsgqjwbfnekgplxjfccbxxrqzqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724506.4417288-737-168219421468983/AnsiballZ_podman_container_info.py'
Dec 03 01:15:06 compute-0 sudo[182500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:07 compute-0 python3.9[182502]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 03 01:15:07 compute-0 sudo[182500]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:08 compute-0 sudo[182664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryvzxtpnrbimymdxccocdtsiivtffryi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724507.657004-745-180075502383320/AnsiballZ_podman_container_exec.py'
Dec 03 01:15:08 compute-0 sudo[182664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:08 compute-0 python3.9[182666]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:15:08 compute-0 systemd[1]: Started libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope.
Dec 03 01:15:08 compute-0 podman[182667]: 2025-12-03 01:15:08.564491928 +0000 UTC m=+0.137356220 container exec 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec 03 01:15:08 compute-0 podman[182667]: 2025-12-03 01:15:08.59922331 +0000 UTC m=+0.172087592 container exec_died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9)
Dec 03 01:15:08 compute-0 sudo[182664]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:08 compute-0 systemd[1]: libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec 03 01:15:09 compute-0 sudo[182847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtguwsqxjucjctoezvoddjcdrwtwpbwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724508.991506-753-278941115795468/AnsiballZ_podman_container_exec.py'
Dec 03 01:15:09 compute-0 sudo[182847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:09 compute-0 python3.9[182849]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:15:09 compute-0 systemd[1]: Started libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope.
Dec 03 01:15:09 compute-0 podman[182850]: 2025-12-03 01:15:09.976005571 +0000 UTC m=+0.133629322 container exec 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543)
Dec 03 01:15:10 compute-0 podman[182850]: 2025-12-03 01:15:10.008784676 +0000 UTC m=+0.166408427 container exec_died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc.)
Dec 03 01:15:10 compute-0 sudo[182847]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:10 compute-0 systemd[1]: libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec 03 01:15:10 compute-0 sudo[183027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mauzblafkkfvvnriekipwlvjrzzxpjjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724510.3518846-761-109302721187659/AnsiballZ_file.py'
Dec 03 01:15:10 compute-0 sudo[183027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:11 compute-0 python3.9[183029]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:11 compute-0 sudo[183027]: pam_unix(sudo:session): session closed for user root
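-- Editor's note: the three tasks above (id -u and id -g executed inside the container, then a recursive ansible.builtin.file on the healthcheck mount) are one idiom: discover which UID/GID the container's main process runs as, then chown the host-side /var/lib/openstack/healthchecks/<name> directory to match so the bind-mounted healthcheck stays readable. For ceilometer_agent_ipmi this resolved to 42405, for kepler to 0. A minimal sketch of the same flow, assuming podman is on PATH (the helper names are illustrative, not edpm_ansible's):

    import os
    import subprocess
    from pathlib import Path

    def container_id(name: str, flag: str) -> int:
        # `podman exec <name> id -u` / `id -g`, as in the two
        # podman_container_exec tasks above.
        out = subprocess.run(["podman", "exec", name, "id", flag],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    def fix_healthcheck_owner(name: str) -> None:
        uid = container_id(name, "-u")
        gid = container_id(name, "-g")
        root = Path("/var/lib/openstack/healthchecks") / name
        root.mkdir(mode=0o700, parents=True, exist_ok=True)
        # Equivalent of the ansible.builtin.file task with recurse=True.
        for path in (root, *root.rglob("*")):
            os.chown(path, uid, gid)

    fix_healthcheck_owner("kepler")   # logged result above: uid=0, gid=0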
Dec 03 01:15:11 compute-0 podman[183141]: 2025-12-03 01:15:11.876171155 +0000 UTC m=+0.125281949 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:15:11 compute-0 sudo[183202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izomzxkfbjmbxebtdwqmufmncjlmcnli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724511.4371884-770-162184631084192/AnsiballZ_file.py'
Dec 03 01:15:11 compute-0 sudo[183202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:12 compute-0 python3.9[183204]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:12 compute-0 sudo[183202]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:13 compute-0 sudo[183386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtxabbhvozchbrkcdlksgkienoanoljg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724512.4711742-778-96190474094134/AnsiballZ_stat.py'
Dec 03 01:15:13 compute-0 sudo[183386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:13 compute-0 podman[183328]: 2025-12-03 01:15:13.109201161 +0000 UTC m=+0.163531623 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 01:15:13 compute-0 podman[183329]: 2025-12-03 01:15:13.135167687 +0000 UTC m=+0.186976226 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec 03 01:15:13 compute-0 python3.9[183393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:13 compute-0 sudo[183386]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:14 compute-0 sudo[183519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfrhavaontaziwevylwgixefpnxqpjgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724512.4711742-778-96190474094134/AnsiballZ_copy.py'
Dec 03 01:15:14 compute-0 sudo[183519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:14 compute-0 python3.9[183521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724512.4711742-778-96190474094134/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:14 compute-0 sudo[183519]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:15 compute-0 sudo[183671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bammvotrmninmhylgvrrhdrdxtbnvygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724514.7679195-794-50854056187810/AnsiballZ_file.py'
Dec 03 01:15:15 compute-0 sudo[183671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:15 compute-0 python3.9[183673]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:15 compute-0 sudo[183671]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:16 compute-0 sudo[183823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccbceukishmnqljenkvtmgbqicxhwuhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724515.9056234-802-73088468390870/AnsiballZ_stat.py'
Dec 03 01:15:16 compute-0 sudo[183823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:16 compute-0 python3.9[183825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:16 compute-0 sudo[183823]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:17 compute-0 sudo[183901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatszfgpaoafarhzjhgevitcubhyllwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724515.9056234-802-73088468390870/AnsiballZ_file.py'
Dec 03 01:15:17 compute-0 sudo[183901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:17 compute-0 python3.9[183903]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:17 compute-0 sudo[183901]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:18 compute-0 sudo[184053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obysmwdvzghxkmkqaaqdpciuaqwzhitc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724517.9542718-814-83302229580855/AnsiballZ_stat.py'
Dec 03 01:15:18 compute-0 sudo[184053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:18 compute-0 python3.9[184055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:18 compute-0 sudo[184053]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:19 compute-0 sudo[184131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtpiwahdoifpabcaixyaqhkvidxsxui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724517.9542718-814-83302229580855/AnsiballZ_file.py'
Dec 03 01:15:19 compute-0 sudo[184131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:19 compute-0 python3.9[184133]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.umbw37x2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:19 compute-0 sudo[184131]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:20 compute-0 sudo[184284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcomxdcaprymsprdgkmpvtxvptinunpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724519.7310948-826-178776259353725/AnsiballZ_stat.py'
Dec 03 01:15:20 compute-0 sudo[184284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:20 compute-0 python3.9[184286]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:20 compute-0 sudo[184284]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:20 compute-0 sudo[184362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxppvgnuzxhtxtxfmdruykfkwqovzrqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724519.7310948-826-178776259353725/AnsiballZ_file.py'
Dec 03 01:15:20 compute-0 sudo[184362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:21 compute-0 python3.9[184364]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:21 compute-0 sudo[184362]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:22 compute-0 sudo[184514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mctmtwzeboqqisktlwmvwrcpdthjejgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724521.6103535-839-278575330495693/AnsiballZ_command.py'
Dec 03 01:15:22 compute-0 sudo[184514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:22 compute-0 python3.9[184516]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:15:22 compute-0 sudo[184514]: pam_unix(sudo:session): session closed for user root
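-- Editor's note: the `nft -j list ruleset` invocation above captures the live ruleset as JSON before the role regenerates it from /var/lib/edpm-config/firewall. nft's -j output is a single {"nftables": [...]} array whose items are keyed "metainfo", "table", "chain", or "rule". A short sketch of consuming it:

    import json
    import subprocess

    # Same command the ansible.legacy.command task runs above.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    ruleset = json.loads(out)["nftables"]
    # Pick out the chain objects and print where each lives.
    for item in ruleset:
        if "chain" in item:
            chain = item["chain"]
            print(chain["family"], chain["table"], chain["name"])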
Dec 03 01:15:23 compute-0 sudo[184667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulqgastbkimseeezsqpycuqymqerrdvh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724522.8012955-847-75573617521941/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:15:23 compute-0 sudo[184667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:23 compute-0 python3[184669]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:15:23 compute-0 sudo[184667]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:24 compute-0 sudo[184819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajnlkvzeqjtihdpcqfvffbseeppyxfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724524.0538125-855-130031846408898/AnsiballZ_stat.py'
Dec 03 01:15:24 compute-0 sudo[184819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:24 compute-0 python3.9[184821]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:24 compute-0 sudo[184819]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:25 compute-0 sudo[184897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mchsixczltstjmtxbohqszbjvnwyywqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724524.0538125-855-130031846408898/AnsiballZ_file.py'
Dec 03 01:15:25 compute-0 sudo[184897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:25 compute-0 python3.9[184899]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:25 compute-0 sudo[184897]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:25 compute-0 podman[184900]: 2025-12-03 01:15:25.916267727 +0000 UTC m=+0.159288670 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:15:27 compute-0 sudo[185068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwgtdaobnxzxjduulpjijvtdbxruxcvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724526.0035818-867-131580773403196/AnsiballZ_stat.py'
Dec 03 01:15:27 compute-0 sudo[185068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:27 compute-0 podman[185070]: 2025-12-03 01:15:27.254012773 +0000 UTC m=+0.108073848 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:15:27 compute-0 python3.9[185071]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:27 compute-0 sudo[185068]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:27 compute-0 sudo[185169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmzkjzpzvhkhuldbwfyhhwpwryxjkwdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724526.0035818-867-131580773403196/AnsiballZ_file.py'
Dec 03 01:15:27 compute-0 sudo[185169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:28 compute-0 python3.9[185171]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:28 compute-0 sudo[185169]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:29 compute-0 sudo[185321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnusvddnnmdgspnsmpkhcvzcfvwgyyti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724528.4238963-879-142011751392302/AnsiballZ_stat.py'
Dec 03 01:15:29 compute-0 sudo[185321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:29 compute-0 python3.9[185323]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:29 compute-0 sudo[185321]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:29 compute-0 podman[158098]: time="2025-12-03T01:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:15:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18534 "" "Go-http-client/1.1"
Dec 03 01:15:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2995 "" "Go-http-client/1.1"
Dec 03 01:15:30 compute-0 sudo[185399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oliqduhpwlntalqjehfcabfzdwmoksqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724528.4238963-879-142011751392302/AnsiballZ_file.py'
Dec 03 01:15:30 compute-0 sudo[185399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:30 compute-0 python3.9[185401]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:30 compute-0 sudo[185399]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:31 compute-0 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:15:31 compute-0 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:15:31 compute-0 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:15:31 compute-0 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:15:31 compute-0 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:15:31 compute-0 sudo[185551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umgrboinggiuxtraffobknzvcxjkhwbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724530.850293-891-198269098108044/AnsiballZ_stat.py'
Dec 03 01:15:31 compute-0 sudo[185551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:31 compute-0 python3.9[185553]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:31 compute-0 sudo[185551]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:32 compute-0 sudo[185645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdquetonvjnvkwsmxdlbdzbvwxexyif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724530.850293-891-198269098108044/AnsiballZ_file.py'
Dec 03 01:15:32 compute-0 sudo[185645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:32 compute-0 podman[185603]: 2025-12-03 01:15:32.243381815 +0000 UTC m=+0.144293363 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 03 01:15:32 compute-0 python3.9[185650]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:32 compute-0 sudo[185645]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:33 compute-0 sudo[185800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ontogoljozguespwmzkanerikkxzgtdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724532.7748115-903-161198752836126/AnsiballZ_stat.py'
Dec 03 01:15:33 compute-0 sudo[185800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:33 compute-0 python3.9[185802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:15:33 compute-0 sudo[185800]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:34 compute-0 sudo[185943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndqxrkwvjodghtqdacmbviqmumgtqqls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724532.7748115-903-161198752836126/AnsiballZ_copy.py'
Dec 03 01:15:34 compute-0 sudo[185943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:34 compute-0 podman[185899]: 2025-12-03 01:15:34.48219488 +0000 UTC m=+0.131122139 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, com.redhat.component=ubi9-container, version=9.4, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Dec 03 01:15:34 compute-0 python3.9[185947]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724532.7748115-903-161198752836126/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:34 compute-0 sudo[185943]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:35 compute-0 sudo[186098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycpeckyptkjrtangkatvlkfrejjffdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724534.9842956-918-34384017707220/AnsiballZ_file.py'
Dec 03 01:15:35 compute-0 sudo[186098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:35 compute-0 python3.9[186100]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:35 compute-0 sudo[186098]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:36 compute-0 sudo[186250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsgqxzjlmhnmrbgmbrrcejfzskjgnxot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724536.0240066-926-155321821196691/AnsiballZ_command.py'
Dec 03 01:15:36 compute-0 sudo[186250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:36 compute-0 python3.9[186252]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:15:36 compute-0 sudo[186250]: pam_unix(sudo:session): session closed for user root
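-- Editor's note: the command task above validates the five generated fragments as one unit before anything is loaded: it concatenates them in dependency order (chains, flushes, rules, update-jumps, jumps) and pipes the result to `nft -c -f -`, where -c parses and checks the ruleset without committing it to the kernel. The same dry-run as a short Python sketch:

    import subprocess
    from pathlib import Path

    FILES = ["/etc/nftables/edpm-chains.nft",
             "/etc/nftables/edpm-flushes.nft",
             "/etc/nftables/edpm-rules.nft",
             "/etc/nftables/edpm-update-jumps.nft",
             "/etc/nftables/edpm-jumps.nft"]

    # cat <fragments> | nft -c -f -  (check only, nothing installed)
    blob = b"".join(Path(f).read_bytes() for f in FILES)
    subprocess.run(["nft", "-c", "-f", "-"], input=blob, check=True)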
Dec 03 01:15:37 compute-0 sudo[186405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otczqnbqzzfphlefwyzlrxbmymuummwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724537.257892-934-5437231329311/AnsiballZ_blockinfile.py'
Dec 03 01:15:37 compute-0 sudo[186405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:38 compute-0 python3.9[186407]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:38 compute-0 sudo[186405]: pam_unix(sudo:session): session closed for user root
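-- Editor's note: the blockinfile task above pins four include lines between ANSIBLE MANAGED BLOCK markers in /etc/sysconfig/nftables.conf, validating the candidate file with `nft -c -f %s` before it is moved into place. Given the logged marker, marker_begin, and marker_end parameters, the managed block in the file should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK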
Dec 03 01:15:39 compute-0 sudo[186557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmlknxuoudkivkmhnqjwxayndvlkjwgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724538.5463407-943-142800757107803/AnsiballZ_command.py'
Dec 03 01:15:39 compute-0 sudo[186557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:39 compute-0 python3.9[186559]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:15:39 compute-0 sudo[186557]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:40 compute-0 sudo[186710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynkpljexpqgdwqulkbmtuafqmvbvuagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724539.6809027-951-129817832802015/AnsiballZ_stat.py'
Dec 03 01:15:40 compute-0 sudo[186710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:40 compute-0 python3.9[186712]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:15:40 compute-0 sudo[186710]: pam_unix(sudo:session): session closed for user root
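-- Editor's note: the touch of edpm-rules.nft.changed at 01:15:35 and the stat above bracket the apply step: the marker file records that the rules fragment was rewritten, and its existence gates whether the flush-and-reload path has to run after the chains are loaded at 01:15:39. The idiom in miniature (a sketch; the role's exact reload ordering is assumed, not taken from the log):

    import subprocess
    from pathlib import Path

    MARKER = Path("/etc/nftables/edpm-rules.nft.changed")

    # Touched right after edpm-rules.nft is copied into place.
    MARKER.touch(mode=0o600, exist_ok=True)

    # Later, the stat task checks the marker to decide on a reload.
    if MARKER.exists():
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"],
                       check=True)
        MARKER.unlink()  # consume the marker so the reload is one-shot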
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.965 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.966 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
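The run of "Registering pollster [<stevedore.extension.Extension ...>]" lines above reflects how each pollster is loaded as a stevedore plugin before being handed to the executor. A sketch of enumerating such plugins; the 'ceilometer.poll.compute' entry-point namespace is an assumption, and the snippet needs ceilometer installed to print anything:

    from stevedore import extension

    # Every pollster is a setuptools entry point; stevedore wraps each
    # one in an Extension object like those shown in the log.
    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",  # assumed namespace
        invoke_on_load=False,
    )
    for name in sorted(mgr.names()):
        print("registered pollster:", name)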
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
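Every pollster in this cycle is skipped because the local_instances discovery returned nothing (the discovery cache above reads {'local_instances': []}; no guests run on this host yet). A compact sketch of that skip-on-empty-discovery pattern, with hypothetical helper names:

    def discover_local_instances():
        # Stand-in for AgentManager.discover(); empty on an idle compute.
        return []

    discovery_cache = {}

    def run_pollster(name):
        # Discovery runs once per cycle; later pollsters reuse the cache.
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover_local_instances()
        if not discovery_cache["local_instances"]:
            print("Skip pollster %s, no resources found this cycle" % name)
            return
        # ...one sample per discovered resource would be built here...

    run_pollster("disk.device.capacity")
    run_pollster("cpu")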
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:15:41 compute-0 sudo[186865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqgmvpcseftvivouyempbmauvnwwnuoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724540.7837176-959-268548950480171/AnsiballZ_command.py'
Dec 03 01:15:41 compute-0 sudo[186865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:41 compute-0 python3.9[186867]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:15:41 compute-0 sudo[186865]: pam_unix(sudo:session): session closed for user root
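The pipeline above concatenates the flush, rules, and jump-update files and feeds them to a single nft -f - call, so the old rules are flushed and the new ones installed in one transaction (set -o pipefail aborts the pipe if cat fails). The same idea in Python, with the file list taken from the log:

    import subprocess
    from pathlib import Path

    PARTS = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    # nft commits stdin as one transaction: the flush and the new rules
    # either land together or not at all.
    ruleset = "".join(Path(p).read_text() for p in PARTS)
    subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)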
Dec 03 01:15:42 compute-0 sudo[187032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zokfejeccptvmlyepvmmpylpsgewjqgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724541.9004188-967-269183366998539/AnsiballZ_file.py'
Dec 03 01:15:42 compute-0 sudo[187032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:42 compute-0 podman[186994]: 2025-12-03 01:15:42.496651203 +0000 UTC m=+0.127338009 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:15:42 compute-0 python3.9[187044]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:15:42 compute-0 sudo[187032]: pam_unix(sudo:session): session closed for user root
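Together with the earlier stat on /etc/nftables/edpm-rules.nft.changed, this file task (state=absent) closes out a change-marker pattern: a writer drops the .changed file when it updates the ruleset, the reload runs only if the marker exists, and the marker is then consumed. A compact sketch of that guard:

    import os

    MARKER = "/etc/nftables/edpm-rules.nft.changed"

    def reload_rules():
        print("reloading nftables ruleset")  # stand-in for the nft -f - pipeline

    # Equivalent of: stat marker -> reload -> file state=absent.
    if os.path.exists(MARKER):
        reload_rules()
        os.remove(MARKER)  # consume the marker so the next run is a no-op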
Dec 03 01:15:43 compute-0 sshd-session[167942]: Connection closed by 192.168.122.30 port 42212
Dec 03 01:15:43 compute-0 sshd-session[167939]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:15:43 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Dec 03 01:15:43 compute-0 systemd[1]: session-22.scope: Consumed 2min 12.793s CPU time.
Dec 03 01:15:43 compute-0 systemd-logind[800]: Session 22 logged out. Waiting for processes to exit.
Dec 03 01:15:43 compute-0 systemd-logind[800]: Removed session 22.
Dec 03 01:15:43 compute-0 podman[187069]: 2025-12-03 01:15:43.393295784 +0000 UTC m=+0.129994186 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:15:43 compute-0 podman[187070]: 2025-12-03 01:15:43.47247279 +0000 UTC m=+0.207305678 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
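The recurring health_status=healthy records above come from podman's healthcheck timer running each container's configured test command (the 'healthcheck' key inside config_data). The same probe can be fired by hand; a sketch using the container names from the log:

    import subprocess

    for name in ("ceilometer_agent_compute", "ovn_controller"):
        # `podman healthcheck run` executes the container's configured
        # test once; exit status 0 means healthy.
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        print(name, "healthy" if result.returncode == 0 else "unhealthy")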
Dec 03 01:15:49 compute-0 sshd-session[187116]: Accepted publickey for zuul from 192.168.122.30 port 40994 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:15:49 compute-0 systemd-logind[800]: New session 23 of user zuul.
Dec 03 01:15:49 compute-0 systemd[1]: Started Session 23 of User zuul.
Dec 03 01:15:49 compute-0 sshd-session[187116]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:15:50 compute-0 python3.9[187270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:15:50 compute-0 sshd-session[187114]: Invalid user foundry from 14.103.201.7 port 49860
Dec 03 01:15:50 compute-0 sshd-session[187114]: Received disconnect from 14.103.201.7 port 49860:11: Bye Bye [preauth]
Dec 03 01:15:50 compute-0 sshd-session[187114]: Disconnected from invalid user foundry 14.103.201.7 port 49860 [preauth]
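The three sshd-session lines above are unrelated background noise: a brute-force probe for a nonexistent "foundry" account that disconnected before authenticating. Probes like this can be tallied straight from the journal; a sketch assuming journalctl is available and the messages keep the "Invalid user" wording:

    import re
    import subprocess
    from collections import Counter

    # Pull sshd-session messages from the journal and count probe sources.
    out = subprocess.run(
        ["journalctl", "-t", "sshd-session", "--no-pager", "-o", "cat"],
        capture_output=True, text=True).stdout

    hits = Counter(m.group(1) for m in
                   re.finditer(r"Invalid user \S+ from (\S+)", out))
    for addr, count in hits.most_common():
        print(addr, count)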
Dec 03 01:15:52 compute-0 sudo[187424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdqxyhxyarudlpezuxnwfkilhkuepula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724551.2412434-34-114054592604414/AnsiballZ_systemd.py'
Dec 03 01:15:52 compute-0 sudo[187424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:52 compute-0 python3.9[187426]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec 03 01:15:52 compute-0 sudo[187424]: pam_unix(sudo:session): session closed for user root
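With state, enabled, and masked all None, the ansible.builtin.systemd call above only queries the rsyslog unit rather than changing it. A read-only equivalent via systemctl show, which prints KEY=VALUE pairs:

    import subprocess

    def unit_state(unit):
        # -p limits output to the named properties; nothing is modified.
        out = subprocess.run(
            ["systemctl", "show", unit,
             "-p", "ActiveState", "-p", "UnitFileState"],
            capture_output=True, text=True, check=True).stdout
        return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

    print(unit_state("rsyslog"))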
Dec 03 01:15:53 compute-0 sudo[187577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzkcgsiyrmfhfdmtqqcjzftfxyrwwhke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724552.7424507-42-21679459030459/AnsiballZ_setup.py'
Dec 03 01:15:53 compute-0 sudo[187577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:53 compute-0 python3.9[187579]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:15:54 compute-0 sudo[187577]: pam_unix(sudo:session): session closed for user root
Dec 03 01:15:54 compute-0 sudo[187661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbciatltkktrdlknjwkzxlrtfzglsjrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724552.7424507-42-21679459030459/AnsiballZ_dnf.py'
Dec 03 01:15:54 compute-0 sudo[187661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:15:54 compute-0 python3.9[187663]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
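The dnf task installs rsyslog-openssl (rsyslog's TLS network stream driver) with state=present, i.e. a no-op when the package is already in place. A sketch of the same idempotent install via the CLI:

    import subprocess

    PKG = "rsyslog-openssl"

    # `rpm -q` exits non-zero when the package is not installed.
    if subprocess.run(["rpm", "-q", PKG], capture_output=True).returncode != 0:
        subprocess.run(["dnf", "-y", "install", PKG], check=True)
    else:
        print(PKG, "already present, nothing to do")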
Dec 03 01:15:56 compute-0 podman[187665]: 2025-12-03 01:15:56.88025414 +0000 UTC m=+0.135143021 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Dec 03 01:15:57 compute-0 podman[187687]: 2025-12-03 01:15:57.889261497 +0000 UTC m=+0.146741506 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:15:59 compute-0 podman[158098]: time="2025-12-03T01:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:15:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 03 01:15:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Dec 03 01:16:01 compute-0 sudo[187661]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:01 compute-0 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:16:01 compute-0 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:16:01 compute-0 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:16:01 compute-0 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:16:01 compute-0 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
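These ERROR lines are expected on a compute node: the exporter probes OVS/OVN daemons through their ovs-appctl control sockets, but ovn-northd and a standalone ovsdb-server do not run here, and the dpif-netdev/* queries only apply to a DPDK (userspace) datapath. A sketch of the guard that would silence them, assuming the conventional /var/run/ovn socket location:

    import glob
    import subprocess

    def appctl(socket_glob, command):
        # ovs-appctl needs the daemon's control socket; checking for it
        # first avoids the "no control socket files found" errors above.
        sockets = glob.glob(socket_glob)
        if not sockets:
            print("skip %s: no control socket matching %s" % (command, socket_glob))
            return
        subprocess.run(["ovs-appctl", "-t", sockets[0], command], check=False)

    appctl("/var/run/ovn/ovn-northd.*.ctl", "version")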
Dec 03 01:16:02 compute-0 sudo[187866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caiqxbozeurmdqaowkxfkcxqbkqgrsgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724561.5597394-54-210203165288696/AnsiballZ_stat.py'
Dec 03 01:16:02 compute-0 sudo[187866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:02 compute-0 python3.9[187868]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:16:02 compute-0 sudo[187866]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:02 compute-0 podman[187913]: 2025-12-03 01:16:02.895266868 +0000 UTC m=+0.140119968 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 03 01:16:03 compute-0 sudo[188010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujxwcdilhxxkwqzcdojubeblqkjmeqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724561.5597394-54-210203165288696/AnsiballZ_copy.py'
Dec 03 01:16:03 compute-0 sudo[188010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:03 compute-0 python3.9[188012]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724561.5597394-54-210203165288696/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:03 compute-0 sudo[188010]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:04 compute-0 sudo[188162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfwuiaypwdmksbfxcptwyjqlnysivjli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724563.9017274-69-124502416482956/AnsiballZ_file.py'
Dec 03 01:16:04 compute-0 sudo[188162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:04 compute-0 podman[188164]: 2025-12-03 01:16:04.822845121 +0000 UTC m=+0.147826614 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 01:16:04 compute-0 python3.9[188165]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:04 compute-0 sudo[188162]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:05 compute-0 sudo[188331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhlhefytzgnmhdmgghpdxfnfpbopbmty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724565.2082174-77-254176098394733/AnsiballZ_stat.py'
Dec 03 01:16:05 compute-0 sudo[188331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:05 compute-0 python3.9[188333]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:16:05 compute-0 sudo[188331]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:06 compute-0 sudo[188454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txofjbrwonpiczwhfedwkodyhrddogzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724565.2082174-77-254176098394733/AnsiballZ_copy.py'
Dec 03 01:16:06 compute-0 sudo[188454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:06 compute-0 python3.9[188456]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724565.2082174-77-254176098394733/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:06 compute-0 sudo[188454]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:07 compute-0 sudo[188606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbqalnybbqryhjnamueecbydeyabwovp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724567.158358-92-208761024613499/AnsiballZ_systemd.py'
Dec 03 01:16:07 compute-0 sudo[188606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:08 compute-0 python3.9[188608]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:16:08 compute-0 systemd[1]: Stopping System Logging Service...
Dec 03 01:16:08 compute-0 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec 03 01:16:08 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec 03 01:16:08 compute-0 systemd[1]: Stopped System Logging Service.
Dec 03 01:16:08 compute-0 systemd[1]: rsyslog.service: Consumed 2.337s CPU time, 5.4M memory peak, read 0B from disk, written 4.0M to disk.
Dec 03 01:16:08 compute-0 systemd[1]: Starting System Logging Service...
Dec 03 01:16:08 compute-0 rsyslogd[188612]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="188612" x-info="https://www.rsyslog.com"] start
Dec 03 01:16:08 compute-0 systemd[1]: Started System Logging Service.
Dec 03 01:16:08 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:16:08 compute-0 rsyslogd[188612]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec 03 01:16:08 compute-0 rsyslogd[188612]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec 03 01:16:08 compute-0 sudo[188606]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:08 compute-0 rsyslogd[188612]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec 03 01:16:08 compute-0 rsyslogd[188612]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
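[annotation] rsyslogd comes back up warning that no client certificate or key is set, then opens TLS to 172.17.0.80, so the freshly deployed 10-telemetry.conf evidently forwards over the ossl stream driver with server-only authentication. A hypothetical reconstruction of that file: only the target address and the CA bundle path (/etc/pki/rsyslog/ca-openshift.crt, copied a few tasks earlier) appear in the log; the port and auth mode are assumptions.

    # Hypothetical /etc/rsyslog.d/10-telemetry.conf -- port 10514 and
    # StreamDriverAuthMode are guesses; target and CA file are from the log.
    cat <<'EOF'
    global(DefaultNetstreamDriver="ossl"
           DefaultNetstreamDriverCAFile="/etc/pki/rsyslog/ca-openshift.crt")
    action(type="omfwd" target="172.17.0.80" port="10514" protocol="tcp"
           StreamDriverMode="1" StreamDriverAuthMode="anon")
    EOF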
Dec 03 01:16:09 compute-0 sshd-session[187119]: Connection closed by 192.168.122.30 port 40994
Dec 03 01:16:09 compute-0 sshd-session[187116]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:16:09 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 03 01:16:09 compute-0 systemd[1]: session-23.scope: Consumed 16.635s CPU time.
Dec 03 01:16:09 compute-0 systemd-logind[800]: Session 23 logged out. Waiting for processes to exit.
Dec 03 01:16:09 compute-0 systemd-logind[800]: Removed session 23.
Dec 03 01:16:12 compute-0 podman[188641]: 2025-12-03 01:16:12.881224498 +0000 UTC m=+0.129618070 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:16:13 compute-0 podman[188664]: 2025-12-03 01:16:13.859018854 +0000 UTC m=+0.109244613 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:16:13 compute-0 podman[188665]: 2025-12-03 01:16:13.981894879 +0000 UTC m=+0.226730433 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 03 01:16:16 compute-0 sshd-session[188705]: Invalid user openbravo from 45.78.219.140 port 46250
Dec 03 01:16:17 compute-0 sshd-session[188705]: Received disconnect from 45.78.219.140 port 46250:11: Bye Bye [preauth]
Dec 03 01:16:17 compute-0 sshd-session[188705]: Disconnected from invalid user openbravo 45.78.219.140 port 46250 [preauth]
Dec 03 01:16:17 compute-0 sshd-session[188707]: Accepted publickey for zuul from 38.102.83.18 port 57172 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 01:16:17 compute-0 systemd-logind[800]: New session 24 of user zuul.
Dec 03 01:16:17 compute-0 systemd[1]: Started Session 24 of User zuul.
Dec 03 01:16:17 compute-0 sshd-session[188707]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:16:18 compute-0 sudo[188785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxmeigcxnvqhrebayqoiwjtpwknfkzei ; /usr/bin/python3'
Dec 03 01:16:18 compute-0 sudo[188785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:18 compute-0 useradd[188789]: new group: name=ceph-admin, GID=42478
Dec 03 01:16:18 compute-0 useradd[188789]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 03 01:16:18 compute-0 sshd-session[188754]: Received disconnect from 146.190.144.138 port 32770:11: Bye Bye [preauth]
Dec 03 01:16:18 compute-0 sshd-session[188754]: Disconnected from authenticating user root 146.190.144.138 port 32770 [preauth]
Dec 03 01:16:18 compute-0 sudo[188785]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:19 compute-0 sudo[188871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxrhmpomilkhudeeovomjnbpnxxfgamp ; /usr/bin/python3'
Dec 03 01:16:19 compute-0 sudo[188871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:19 compute-0 sudo[188871]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:20 compute-0 sudo[188945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygdszocnblofduzksctlghcmmpvelpqc ; /usr/bin/python3'
Dec 03 01:16:20 compute-0 sudo[188945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:20 compute-0 sudo[188945]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:20 compute-0 sudo[188995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lriegkykirakxlmtlhcuodwkonixoiiz ; /usr/bin/python3'
Dec 03 01:16:20 compute-0 sudo[188995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:21 compute-0 sudo[188995]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:21 compute-0 sudo[189021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmbwoicpoebkxxlvnylmlwslbmvuhunr ; /usr/bin/python3'
Dec 03 01:16:21 compute-0 sudo[189021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:21 compute-0 sudo[189021]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:21 compute-0 sudo[189047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpuorlfniaondvmjigcyyaojwwsajurf ; /usr/bin/python3'
Dec 03 01:16:21 compute-0 sudo[189047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:21 compute-0 sudo[189047]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:22 compute-0 sudo[189073]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qizqkbgfklyiljnvidfknfjiyxgalrgh ; /usr/bin/python3'
Dec 03 01:16:22 compute-0 sudo[189073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:22 compute-0 sudo[189073]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:22 compute-0 sudo[189151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzcutotddevndainjxkcozdvrdbvhluk ; /usr/bin/python3'
Dec 03 01:16:22 compute-0 sudo[189151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:23 compute-0 sudo[189151]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:23 compute-0 sudo[189224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhkhsewcsvucqninrwmfcpgbiziazqnu ; /usr/bin/python3'
Dec 03 01:16:23 compute-0 sudo[189224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:23 compute-0 sudo[189224]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:24 compute-0 sudo[189326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiglbblensctnedmoklevzbybbvffdiu ; /usr/bin/python3'
Dec 03 01:16:24 compute-0 sudo[189326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:24 compute-0 sudo[189326]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:24 compute-0 sudo[189399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrggqfocchiaeqdfvcmpjtexsyswtsta ; /usr/bin/python3'
Dec 03 01:16:24 compute-0 sudo[189399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:25 compute-0 sudo[189399]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:25 compute-0 sudo[189449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afpctggbjmwwefseizwgfayjsuvafhir ; /usr/bin/python3'
Dec 03 01:16:25 compute-0 sudo[189449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:25 compute-0 python3[189451]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:16:27 compute-0 sudo[189449]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:27 compute-0 podman[189530]: 2025-12-03 01:16:27.890292386 +0000 UTC m=+0.137123507 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:16:28 compute-0 sudo[189579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgvamdwqzemuihpyxmclserqlzwadmb ; /usr/bin/python3'
Dec 03 01:16:28 compute-0 sudo[189579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:28 compute-0 podman[189556]: 2025-12-03 01:16:28.080789358 +0000 UTC m=+0.127661135 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:16:28 compute-0 python3[189586]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
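[annotation] This ansible-ansible.legacy.dnf task is the module form of a plain package transaction; the CLI equivalent, with the package list taken from the log, is simply:

    # CLI equivalent of the dnf task above.
    dnf install -y util-linux lvm2 jq podman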
Dec 03 01:16:29 compute-0 sudo[189579]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:29 compute-0 sudo[189622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsbtswmddgfpeftasfshmmhoioyaryov ; /usr/bin/python3'
Dec 03 01:16:29 compute-0 sudo[189622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:29 compute-0 python3[189624]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:16:29 compute-0 sudo[189622]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:29 compute-0 podman[158098]: time="2025-12-03T01:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 03 01:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2992 "" "Go-http-client/1.1"
Dec 03 01:16:30 compute-0 sudo[189648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkhtacmaelwhmxwjoslsdtkovqbmyids ; /usr/bin/python3'
Dec 03 01:16:30 compute-0 sudo[189648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:30 compute-0 python3[189650]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                           losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:16:30 compute-0 kernel: loop: module loaded
Dec 03 01:16:30 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 03 01:16:30 compute-0 sudo[189648]: pam_unix(sudo:session): session closed for user root
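[annotation] The dd invocation above writes zero bytes (count=0) and only seeks to the 20G mark, producing a sparse 20 GiB backing file; 20 GiB is 41,943,040 512-byte sectors, which matches the kernel's capacity line for loop3. The pattern, with a check that the file really is sparse:

    # Sparse backing file + loop attach, as in the task above; the du lines
    # show ~20G apparent size but almost nothing allocated on disk.
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    du -h --apparent-size /var/lib/ceph-osd-0.img
    du -h /var/lib/ceph-osd-0.img
    lsblk /dev/loop3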
Dec 03 01:16:31 compute-0 sudo[189683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyfitbmwcngefeowglvjklfdsxooyvbf ; /usr/bin/python3'
Dec 03 01:16:31 compute-0 sudo[189683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:31 compute-0 python3[189685]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                           vgcreate ceph_vg0 /dev/loop3
                                           lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:16:31 compute-0 lvm[189688]: PV /dev/loop3 not used.
Dec 03 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:16:31 compute-0 lvm[189697]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 03 01:16:31 compute-0 sudo[189683]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:31 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 03 01:16:31 compute-0 lvm[189699]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 03 01:16:31 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
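[annotation] Each loop device gets the same minimal LVM stack: one PV, one VG, and a single LV claiming all free extents; the lvm-activate-ceph_vg0.service seen above is systemd's event-driven autoactivation firing once the VG is complete. The sequence with a verification step:

    # Per-OSD LVM layout from the task above (names as in the log).
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs -o lv_name,lv_size,vg_name ceph_vg0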
Dec 03 01:16:32 compute-0 sudo[189775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzceafdyhfdjyhrqiajrzzucahwdare ; /usr/bin/python3'
Dec 03 01:16:32 compute-0 sudo[189775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:32 compute-0 python3[189777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:16:32 compute-0 sudo[189775]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:32 compute-0 sudo[189848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ravwblkxcucoqjxjcoahgdsgyvktoobe ; /usr/bin/python3'
Dec 03 01:16:32 compute-0 sudo[189848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:32 compute-0 python3[189850]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724591.761974-36743-119521888773527/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:32 compute-0 sudo[189848]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:33 compute-0 sudo[189898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omfadkmfatexzioyhdjknzbcofmgnwyv ; /usr/bin/python3'
Dec 03 01:16:33 compute-0 sudo[189898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:33 compute-0 podman[189900]: 2025-12-03 01:16:33.689933448 +0000 UTC m=+0.142512992 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 01:16:33 compute-0 python3[189901]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:16:33 compute-0 systemd[1]: Reloading.
Dec 03 01:16:34 compute-0 systemd-rc-local-generator[189945]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:16:34 compute-0 systemd-sysv-generator[189950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:16:34 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 03 01:16:34 compute-0 bash[189959]: /dev/loop3: [64513]:4329306 (/var/lib/ceph-osd-0.img)
Dec 03 01:16:34 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 03 01:16:34 compute-0 sudo[189898]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:34 compute-0 lvm[189961]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 03 01:16:34 compute-0 lvm[189961]: VG ceph_vg0 finished
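[annotation] The unit started here comes from the ceph-osd-losetup.service.j2 template deployed above; its contents never appear in the log, but the bash output (a losetup status line mapping /dev/loop3 to its backing file) suggests a oneshot unit that re-attaches the loop device at boot. A hypothetical sketch, with everything except the paths and the unit description assumed:

    # Hypothetical ceph-osd-losetup-0.service; the real template is not in the
    # log, so ExecStart and the unit wiring below are assumptions.
    cat <<'EOF'
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || losetup /dev/loop3'

    [Install]
    WantedBy=multi-user.target
    EOF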
Dec 03 01:16:34 compute-0 sudo[189985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmbjnweachtxpaapmuasfhbbokzsoryu ; /usr/bin/python3'
Dec 03 01:16:34 compute-0 sudo[189985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:34 compute-0 python3[189987]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 03 01:16:35 compute-0 podman[189989]: 2025-12-03 01:16:35.88343676 +0000 UTC m=+0.127667665 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 03 01:16:36 compute-0 sudo[189985]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:36 compute-0 sudo[190032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsptsnyixyfzcikoomrprevorpdpduww ; /usr/bin/python3'
Dec 03 01:16:36 compute-0 sudo[190032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:36 compute-0 python3[190034]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:16:36 compute-0 sudo[190032]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:36 compute-0 sudo[190058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmacjjblsoxvhvsgabaafyzjpzlsyenv ; /usr/bin/python3'
Dec 03 01:16:36 compute-0 sudo[190058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:37 compute-0 python3[190060]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                           losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:16:37 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec 03 01:16:37 compute-0 sudo[190058]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:37 compute-0 sudo[190089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfyboycszskwynyspgmnilmatxchvatn ; /usr/bin/python3'
Dec 03 01:16:37 compute-0 sudo[190089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:37 compute-0 python3[190091]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                           vgcreate ceph_vg1 /dev/loop4
                                           lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:16:37 compute-0 lvm[190096]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 03 01:16:37 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 03 01:16:37 compute-0 lvm[190103]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 03 01:16:37 compute-0 lvm[190108]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 03 01:16:37 compute-0 lvm[190108]: VG ceph_vg1 finished
Dec 03 01:16:37 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 03 01:16:37 compute-0 sudo[190089]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:38 compute-0 sudo[190184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cezlilcskyfxcxnxwxqmzdpwvrqgjkrv ; /usr/bin/python3'
Dec 03 01:16:38 compute-0 sudo[190184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:38 compute-0 python3[190186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:16:38 compute-0 sudo[190184]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:39 compute-0 sudo[190257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zombijhkqvtjlruniwqvidpdzpsbcxcs ; /usr/bin/python3'
Dec 03 01:16:39 compute-0 sudo[190257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:39 compute-0 python3[190259]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724598.1719015-36770-116689913042173/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:39 compute-0 sudo[190257]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:39 compute-0 sudo[190307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkiapmulzcjtazjkvmtukewbarrzjddt ; /usr/bin/python3'
Dec 03 01:16:39 compute-0 sudo[190307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:39 compute-0 python3[190309]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:16:40 compute-0 systemd[1]: Reloading.
Dec 03 01:16:40 compute-0 systemd-sysv-generator[190340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:16:40 compute-0 systemd-rc-local-generator[190330]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:16:40 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 03 01:16:40 compute-0 bash[190348]: /dev/loop4: [64513]:4330089 (/var/lib/ceph-osd-1.img)
Dec 03 01:16:40 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 03 01:16:40 compute-0 sudo[190307]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:40 compute-0 lvm[190349]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 03 01:16:40 compute-0 lvm[190349]: VG ceph_vg1 finished
Dec 03 01:16:40 compute-0 sudo[190373]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quzrhghajsjepaqlnodftpecxpojcifq ; /usr/bin/python3'
Dec 03 01:16:40 compute-0 sudo[190373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:41 compute-0 python3[190375]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 03 01:16:42 compute-0 sudo[190373]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:42 compute-0 sudo[190400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxfqijdbwicbyfjajfdkwnzpnplwdqhg ; /usr/bin/python3'
Dec 03 01:16:42 compute-0 sudo[190400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:43 compute-0 python3[190403]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:16:43 compute-0 podman[190402]: 2025-12-03 01:16:43.14649894 +0000 UTC m=+0.139452360 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:16:43 compute-0 sudo[190400]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:43 compute-0 sudo[190449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effatxftdsglajgnoybloxreknxpiowa ; /usr/bin/python3'
Dec 03 01:16:43 compute-0 sudo[190449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:43 compute-0 python3[190451]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                           losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:16:43 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec 03 01:16:43 compute-0 sudo[190449]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:43 compute-0 sudo[190481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyuptsusyyhgnvtuiaasdjzhexqztjcm ; /usr/bin/python3'
Dec 03 01:16:43 compute-0 sudo[190481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:43 compute-0 python3[190483]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                           vgcreate ceph_vg2 /dev/loop5
                                           lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:16:44 compute-0 lvm[190486]: PV /dev/loop5 not used.
Dec 03 01:16:44 compute-0 lvm[190499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 03 01:16:44 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 03 01:16:44 compute-0 lvm[190532]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 03 01:16:44 compute-0 lvm[190532]: VG ceph_vg2 finished
Dec 03 01:16:44 compute-0 lvm[190524]:   1 logical volume(s) in volume group "ceph_vg2" now active
Dec 03 01:16:44 compute-0 podman[190488]: 2025-12-03 01:16:44.337266451 +0000 UTC m=+0.157820453 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 01:16:44 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 03 01:16:44 compute-0 podman[190489]: 2025-12-03 01:16:44.370067725 +0000 UTC m=+0.188611809 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 01:16:44 compute-0 sudo[190481]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:44 compute-0 sudo[190617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykxytpogxuvfpqtnwdbibcdpayvhniro ; /usr/bin/python3'
Dec 03 01:16:44 compute-0 sudo[190617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:45 compute-0 python3[190619]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:16:45 compute-0 sudo[190617]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:45 compute-0 sudo[190690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzckwnebpcqsylpybnujkctkeggbvii ; /usr/bin/python3'
Dec 03 01:16:45 compute-0 sudo[190690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:45 compute-0 python3[190692]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724604.602049-36797-98417805017753/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:45 compute-0 sudo[190690]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:46 compute-0 sudo[190740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dobkanyhnnktniamwmgupxhpxenuhoao ; /usr/bin/python3'
Dec 03 01:16:46 compute-0 sudo[190740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:46 compute-0 python3[190742]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:16:46 compute-0 systemd[1]: Reloading.
Dec 03 01:16:46 compute-0 systemd-rc-local-generator[190770]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:16:46 compute-0 systemd-sysv-generator[190773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:16:46 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 03 01:16:46 compute-0 bash[190781]: /dev/loop5: [64513]:4362427 (/var/lib/ceph-osd-2.img)
Dec 03 01:16:46 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 03 01:16:46 compute-0 sudo[190740]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:47 compute-0 lvm[190783]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 03 01:16:47 compute-0 lvm[190783]: VG ceph_vg2 finished
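
Starting ceph-osd-losetup-2.service attaches the backing file /var/lib/ceph-osd-2.img to a loop device; the bash[190781] line is losetup output mapping /dev/loop5 to the image's device:inode, after which LVM's event-driven autoactivation sees PV /dev/loop5 and declares VG ceph_vg2 complete. A sketch of the attach step, assuming util-linux losetup (the unit's actual ExecStart is not shown in the log):

    import subprocess

    IMG = "/var/lib/ceph-osd-2.img"

    # Reuse an existing association if the image is already attached;
    # `losetup -j` lists loop devices backed by IMG.
    existing = subprocess.check_output(["losetup", "-j", IMG], text=True).strip()
    if existing:
        print(existing)   # e.g. "/dev/loop5: [64513]:4362427 (/var/lib/ceph-osd-2.img)"
    else:
        dev = subprocess.check_output(
            ["losetup", "--find", "--show", IMG], text=True).strip()
        print(f"attached {IMG} at {dev}")
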
Dec 03 01:16:49 compute-0 python3[190807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:16:51 compute-0 sudo[190907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccfasmxsfjazbuwkyloefvbgqlnpadn ; /usr/bin/python3'
Dec 03 01:16:51 compute-0 sudo[190907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:52 compute-0 python3[190909]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 03 01:16:53 compute-0 groupadd[190915]: group added to /etc/group: name=cephadm, GID=990
Dec 03 01:16:53 compute-0 groupadd[190915]: group added to /etc/gshadow: name=cephadm
Dec 03 01:16:53 compute-0 groupadd[190915]: new group: name=cephadm, GID=990
Dec 03 01:16:53 compute-0 useradd[190922]: new user: name=cephadm, UID=990, GID=990, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 03 01:16:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 01:16:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 01:16:54 compute-0 sudo[190907]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:54 compute-0 sudo[191033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmnimvudskolossermykksildtglbjn ; /usr/bin/python3'
Dec 03 01:16:54 compute-0 sudo[191033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 01:16:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 01:16:54 compute-0 systemd[1]: run-r068cba871ac04d0fbb54351cf90560af.service: Deactivated successfully.
Dec 03 01:16:54 compute-0 python3[191036]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:16:54 compute-0 sudo[191033]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:55 compute-0 sudo[191063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obfwmrvwcihkwjnhyiqmpzukuacyyfbz ; /usr/bin/python3'
Dec 03 01:16:55 compute-0 sudo[191063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:55 compute-0 python3[191065]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
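
/usr/sbin/cephadm ls --no-detail inventories the ceph daemons cephadm already manages on this host, printed as JSON; on a fresh node it returns an empty list, which is what lets the playbook conclude that bootstrap is still needed. A sketch of consuming that output (the per-daemon schema varies across cephadm releases, so the keys below are assumptions):

    import json, subprocess

    raw = subprocess.check_output(["/usr/sbin/cephadm", "ls", "--no-detail"], text=True)
    daemons = json.loads(raw)          # [] on a host with no deployed daemons
    for d in daemons:
        # "name" and "fsid" are commonly present fields; treat as illustrative.
        print(d.get("name"), d.get("fsid"))
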
Dec 03 01:16:55 compute-0 sudo[191063]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:56 compute-0 sudo[191127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwcjekaldypkftmlnozbwvyzmhlbnjgp ; /usr/bin/python3'
Dec 03 01:16:56 compute-0 sudo[191127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:56 compute-0 python3[191129]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:56 compute-0 sudo[191127]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:56 compute-0 sudo[191153]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcsxixhzpaaowchqwyoujvkypvpuvzjk ; /usr/bin/python3'
Dec 03 01:16:56 compute-0 sudo[191153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:56 compute-0 python3[191155]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:16:56 compute-0 sudo[191153]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:57 compute-0 sudo[191231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfougzkhheiigugdatrqnzmqeawoesbs ; /usr/bin/python3'
Dec 03 01:16:57 compute-0 sudo[191231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:57 compute-0 python3[191233]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:16:57 compute-0 sudo[191231]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:58 compute-0 sudo[191304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijrtnjgbnignnseuiqlrnysddrmdbxoe ; /usr/bin/python3'
Dec 03 01:16:58 compute-0 sudo[191304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:58 compute-0 podman[191306]: 2025-12-03 01:16:58.455222166 +0000 UTC m=+0.122975927 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:16:58 compute-0 podman[191307]: 2025-12-03 01:16:58.470113029 +0000 UTC m=+0.134987840 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:16:58 compute-0 python3[191308]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724617.4048393-36944-140993472414015/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
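
The file copied to /home/ceph-admin/specs/ceph_spec.yaml is a cephadm service specification (placement of mon/mgr/osd services) that a later step can feed to the orchestrator; only its checksum is logged, not its contents. A hypothetical minimal spec, generated from Python to stay in one example language (PyYAML assumed; every value below is invented for illustration):

    import yaml   # PyYAML, assumed available

    spec = [
        {"service_type": "mon", "placement": {"hosts": ["compute-0"]}},
        {"service_type": "mgr", "placement": {"hosts": ["compute-0"]}},
        {"service_type": "osd", "service_id": "default_drive_group",
         "placement": {"hosts": ["compute-0"]},
         # hypothetical LV path; the real spec's devices are not logged
         "spec": {"data_devices": {"paths": ["/dev/ceph_vg2/ceph_lv2"]}}},
    ]
    print(yaml.safe_dump_all(spec))
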
Dec 03 01:16:58 compute-0 sudo[191304]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:59 compute-0 sudo[191451]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlqznajfvqiglvyjrinfdshvbdvcnvac ; /usr/bin/python3'
Dec 03 01:16:59 compute-0 sudo[191451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:16:59 compute-0 python3[191453]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:16:59 compute-0 sudo[191451]: pam_unix(sudo:session): session closed for user root
Dec 03 01:16:59 compute-0 podman[158098]: time="2025-12-03T01:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 03 01:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2988 "" "Go-http-client/1.1"
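
The podman[158098] entries are the libpod REST API service logging requests made over its unix socket (a Go client, per the Go-http-client User-Agent; this is how podman_exporter collects container stats). The same endpoints can be queried directly; a minimal sketch, assuming the /run/podman/podman.sock path from the exporter config elsewhere in this log:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self._socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())), "containers")
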
Dec 03 01:17:00 compute-0 sudo[191524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzhzqzuyfkxlfuvggttmlrwvavazejgf ; /usr/bin/python3'
Dec 03 01:17:00 compute-0 sudo[191524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:17:00 compute-0 python3[191526]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724619.1588588-36962-67661698673033/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:17:00 compute-0 sudo[191524]: pam_unix(sudo:session): session closed for user root
Dec 03 01:17:00 compute-0 sudo[191574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shhosmsfutdcoobzcnbganytocjbdsmy ; /usr/bin/python3'
Dec 03 01:17:00 compute-0 sudo[191574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:17:00 compute-0 python3[191576]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:17:00 compute-0 sudo[191574]: pam_unix(sudo:session): session closed for user root
Dec 03 01:17:01 compute-0 sudo[191602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzobcjjjrborwzprzmantwkkncrdisbu ; /usr/bin/python3'
Dec 03 01:17:01 compute-0 sudo[191602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:17:01 compute-0 python3[191604]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:17:01 compute-0 sudo[191602]: pam_unix(sudo:session): session closed for user root
Dec 03 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:17:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:17:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:17:01 compute-0 sudo[191630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqmlbthapazrausvjyimqmjfmdsxbrq ; /usr/bin/python3'
Dec 03 01:17:01 compute-0 sudo[191630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:17:01 compute-0 python3[191632]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:17:01 compute-0 sudo[191630]: pam_unix(sudo:session): session closed for user root
Dec 03 01:17:02 compute-0 sudo[191658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwpmnzzeslgdzaglqenxkpgyikawhzw ; /usr/bin/python3'
Dec 03 01:17:02 compute-0 sudo[191658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:17:02 compute-0 python3[191660]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
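
This is the actual cluster bootstrap: cephadm creates the initial MON (on --mon-ip 192.168.122.100) and MGR with a pre-chosen --fsid, reuses the ceph-admin SSH keypair for orchestration, seeds configuration from assimilate_ceph.conf, and skips host preparation, firewalld, the dashboard, and the monitoring stack; the stray `\--` escapes are harmless because the task runs with _uses_shell=True. The same invocation as an argument list (flags copied from the log line above; a sketch of the call, not the playbook task itself):

    import subprocess

    subprocess.run([
        "/usr/sbin/cephadm", "bootstrap",
        "--skip-firewalld", "--skip-prepare-host",
        "--ssh-private-key", "/home/ceph-admin/.ssh/id_rsa",
        "--ssh-public-key", "/home/ceph-admin/.ssh/id_rsa.pub",
        "--ssh-user", "ceph-admin",
        "--allow-fqdn-hostname",
        "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
        "--output-config", "/etc/ceph/ceph.conf",
        "--fsid", "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
        "--config", "/home/ceph-admin/assimilate_ceph.conf",
        "--single-host-defaults",
        "--skip-monitoring-stack", "--skip-dashboard",
        "--mon-ip", "192.168.122.100",
    ], check=True)
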
Dec 03 01:17:02 compute-0 sshd-session[191675]: Accepted publickey for ceph-admin from 192.168.122.100 port 56586 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:17:02 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 03 01:17:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 03 01:17:02 compute-0 systemd-logind[800]: New session 25 of user ceph-admin.
Dec 03 01:17:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 03 01:17:02 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 03 01:17:02 compute-0 systemd[191679]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:17:02 compute-0 systemd[191679]: Queued start job for default target Main User Target.
Dec 03 01:17:02 compute-0 systemd[191679]: Created slice User Application Slice.
Dec 03 01:17:02 compute-0 systemd[191679]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 03 01:17:02 compute-0 systemd[191679]: Started Daily Cleanup of User's Temporary Directories.
Dec 03 01:17:02 compute-0 systemd[191679]: Reached target Paths.
Dec 03 01:17:02 compute-0 systemd[191679]: Reached target Timers.
Dec 03 01:17:02 compute-0 systemd[191679]: Starting D-Bus User Message Bus Socket...
Dec 03 01:17:02 compute-0 systemd[191679]: Starting Create User's Volatile Files and Directories...
Dec 03 01:17:02 compute-0 systemd[191679]: Listening on D-Bus User Message Bus Socket.
Dec 03 01:17:02 compute-0 systemd[191679]: Finished Create User's Volatile Files and Directories.
Dec 03 01:17:02 compute-0 systemd[191679]: Reached target Sockets.
Dec 03 01:17:02 compute-0 systemd[191679]: Reached target Basic System.
Dec 03 01:17:02 compute-0 systemd[191679]: Reached target Main User Target.
Dec 03 01:17:02 compute-0 systemd[191679]: Startup finished in 204ms.
Dec 03 01:17:02 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 03 01:17:02 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 03 01:17:03 compute-0 sshd-session[191675]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:17:03 compute-0 sudo[191695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 03 01:17:03 compute-0 sudo[191695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:17:03 compute-0 sudo[191695]: pam_unix(sudo:session): session closed for user root
Dec 03 01:17:03 compute-0 sshd-session[191694]: Received disconnect from 192.168.122.100 port 56586:11: disconnected by user
Dec 03 01:17:03 compute-0 sshd-session[191694]: Disconnected from user ceph-admin 192.168.122.100 port 56586
Dec 03 01:17:03 compute-0 sshd-session[191675]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 03 01:17:03 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 03 01:17:03 compute-0 systemd-logind[800]: Session 25 logged out. Waiting for processes to exit.
Dec 03 01:17:03 compute-0 systemd-logind[800]: Removed session 25.
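
Session 25 above is cephadm's connectivity self-check: right after bootstrap begins it SSHes back into the host as ceph-admin with the supplied key and runs `sudo /bin/echo` to prove passwordless root escalation before it distributes keys and daemons. Roughly equivalent to the following (a sketch; cephadm performs this check internally rather than via the ssh CLI):

    import subprocess

    # Key-based login as ceph-admin to the mon IP, then a no-op sudo
    # to verify NOPASSWD root, matching the sudo[191695] journal entry.
    subprocess.run(
        ["ssh", "-i", "/home/ceph-admin/.ssh/id_rsa",
         "ceph-admin@192.168.122.100", "sudo", "/bin/echo"],
        check=True,
    )
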
Dec 03 01:17:03 compute-0 podman[191746]: 2025-12-03 01:17:03.898471445 +0000 UTC m=+0.146083961 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 03 01:17:08 compute-0 podman[191788]: 2025-12-03 01:17:08.714457764 +0000 UTC m=+2.717164076 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=edpm)
Dec 03 01:17:13 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 03 01:17:13 compute-0 systemd[191679]: Activating special unit Exit the Session...
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped target Main User Target.
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped target Basic System.
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped target Paths.
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped target Sockets.
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped target Timers.
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 03 01:17:13 compute-0 systemd[191679]: Closed D-Bus User Message Bus Socket.
Dec 03 01:17:13 compute-0 systemd[191679]: Stopped Create User's Volatile Files and Directories.
Dec 03 01:17:13 compute-0 systemd[191679]: Removed slice User Application Slice.
Dec 03 01:17:13 compute-0 systemd[191679]: Reached target Shutdown.
Dec 03 01:17:13 compute-0 systemd[191679]: Finished Exit the Session.
Dec 03 01:17:13 compute-0 systemd[191679]: Reached target Exit the Session.
Dec 03 01:17:13 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 03 01:17:13 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 03 01:17:13 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 03 01:17:13 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 03 01:17:13 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 03 01:17:13 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 03 01:17:13 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 03 01:17:13 compute-0 podman[191811]: 2025-12-03 01:17:13.577069913 +0000 UTC m=+0.089465717 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:17:27 compute-0 sshd-session[191869]: error: kex_exchange_identification: read: Connection timed out
Dec 03 01:17:27 compute-0 sshd-session[191869]: banner exchange: Connection from 14.155.244.150 port 26984: Connection timed out
Dec 03 01:17:29 compute-0 podman[158098]: time="2025-12-03T01:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 03 01:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2998 "" "Go-http-client/1.1"
Dec 03 01:17:29 compute-0 podman[191849]: 2025-12-03 01:17:29.968820356 +0000 UTC m=+15.287711772 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec 03 01:17:29 compute-0 podman[191872]: 2025-12-03 01:17:29.97673482 +0000 UTC m=+1.223458401 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:17:29 compute-0 podman[191873]: 2025-12-03 01:17:29.99435353 +0000 UTC m=+1.235308010 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Dec 03 01:17:30 compute-0 podman[191850]: 2025-12-03 01:17:30.007488161 +0000 UTC m=+15.323153639 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:17:30 compute-0 podman[191732]: 2025-12-03 01:17:30.028103875 +0000 UTC m=+26.708038617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.142773798 +0000 UTC m=+0.081638897 container create 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.101784726 +0000 UTC m=+0.040649875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:30 compute-0 systemd[1]: Started libpod-conmon-3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4.scope.
Dec 03 01:17:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.294053765 +0000 UTC m=+0.232918904 container init 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.312375063 +0000 UTC m=+0.251240152 container start 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.318510313 +0000 UTC m=+0.257375472 container attach 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:17:30 compute-0 hungry_jemison[191949]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec 03 01:17:30 compute-0 systemd[1]: libpod-3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4.scope: Deactivated successfully.
Dec 03 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.630956879 +0000 UTC m=+0.569821968 container died 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6294dbc3c56cdd30cce0c9a478501fef6a6b15a576bc9eb1e5210954992a091f-merged.mount: Deactivated successfully.
Dec 03 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.70712142 +0000 UTC m=+0.645986519 container remove 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:17:30 compute-0 systemd[1]: libpod-conmon-3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4.scope: Deactivated successfully.
Dec 03 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.840392787 +0000 UTC m=+0.088892163 container create e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.809354999 +0000 UTC m=+0.057854405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:30 compute-0 systemd[1]: Started libpod-conmon-e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c.scope.
Dec 03 01:17:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.970858556 +0000 UTC m=+0.219357912 container init e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.987160264 +0000 UTC m=+0.235659630 container start e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.995798455 +0000 UTC m=+0.244297871 container attach e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:17:30 compute-0 kind_blackwell[191983]: 167 167
Dec 03 01:17:30 compute-0 systemd[1]: libpod-e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c.scope: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[191967]: 2025-12-03 01:17:30.999928526 +0000 UTC m=+0.248427902 container died e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7386033a1848d5401956961f907a5533b8065b9f61d6cc5512571f487be8ab-merged.mount: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[191967]: 2025-12-03 01:17:31.08026886 +0000 UTC m=+0.328768236 container remove e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:17:31 compute-0 systemd[1]: libpod-conmon-e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c.scope: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.216214862 +0000 UTC m=+0.087610322 container create 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:17:31 compute-0 systemd[1]: Started libpod-conmon-95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf.scope.
Dec 03 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.17767131 +0000 UTC m=+0.049066820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.335479557 +0000 UTC m=+0.206875007 container init 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.348774112 +0000 UTC m=+0.220169552 container start 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.353456646 +0000 UTC m=+0.224852086 container attach 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:31 compute-0 hopeful_swanson[192014]: AQCrjy9p4Tj3FhAA04NDm/ejqJUoebszWMI02w==
Dec 03 01:17:31 compute-0 systemd[1]: libpod-95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf.scope: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.392351367 +0000 UTC m=+0.263746807 container died 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:17:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-297534cd7e92263766f9f007292d37b76121512c61540fbefad2e0708dd6fc2c-merged.mount: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.456595377 +0000 UTC m=+0.327990827 container remove 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:17:31 compute-0 systemd[1]: libpod-conmon-95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf.scope: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.55820091 +0000 UTC m=+0.064990909 container create 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.527861969 +0000 UTC m=+0.034652048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:31 compute-0 systemd[1]: Started libpod-conmon-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope.
Dec 03 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.690300619 +0000 UTC m=+0.197090718 container init 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.704851744 +0000 UTC m=+0.211641763 container start 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.712319087 +0000 UTC m=+0.219109086 container attach 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:17:31 compute-0 elastic_lehmann[192048]: AQCrjy9pDu9ELBAASAza0/QFMisisbtSvlcmew==
Dec 03 01:17:31 compute-0 systemd[1]: libpod-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope: Deactivated successfully.
Dec 03 01:17:31 compute-0 conmon[192048]: conmon 64618abe2e218e7e158a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope/container/memory.events
Dec 03 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.751460603 +0000 UTC m=+0.258250602 container died 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6488d519b588c2d2d5c872d821bc9d43c841eb421b6ae0066171a97c32272b50-merged.mount: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.829343577 +0000 UTC m=+0.336133576 container remove 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:31 compute-0 systemd[1]: libpod-conmon-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope: Deactivated successfully.
Dec 03 01:17:31 compute-0 podman[192066]: 2025-12-03 01:17:31.947778301 +0000 UTC m=+0.081608075 container create 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:31.911806932 +0000 UTC m=+0.045636756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:32 compute-0 systemd[1]: Started libpod-conmon-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope.
Dec 03 01:17:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.081037958 +0000 UTC m=+0.214867762 container init 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.095364148 +0000 UTC m=+0.229193912 container start 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.103327033 +0000 UTC m=+0.237156807 container attach 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec 03 01:17:32 compute-0 intelligent_colden[192082]: AQCsjy9pqOAbCBAA0cRskAnphB81b+KjIn6MAw==
Dec 03 01:17:32 compute-0 systemd[1]: libpod-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope: Deactivated successfully.
Dec 03 01:17:32 compute-0 conmon[192082]: conmon 7aed69c8dcf77fbf99d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope/container/memory.events
Dec 03 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.143983166 +0000 UTC m=+0.277812910 container died 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.217302447 +0000 UTC m=+0.351132191 container remove 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b57f19b14275ce4b41cf11d1548d063131b74c4d2d2b9fdf94338086279442b8-merged.mount: Deactivated successfully.
Dec 03 01:17:32 compute-0 systemd[1]: libpod-conmon-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope: Deactivated successfully.
Dec 03 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.324751813 +0000 UTC m=+0.077803392 container create b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.283951016 +0000 UTC m=+0.037002605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:32 compute-0 systemd[1]: Started libpod-conmon-b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d.scope.
Dec 03 01:17:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958752b5be4e6252bdea2658ce49e12ba65b7eb8d984288dcdc4dcc674cb625b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.468397994 +0000 UTC m=+0.221449663 container init b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.48296473 +0000 UTC m=+0.236016299 container start b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec 03 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.489948921 +0000 UTC m=+0.243000540 container attach b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:17:32 compute-0 recursing_rhodes[192116]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 03 01:17:32 compute-0 recursing_rhodes[192116]: setting min_mon_release = pacific
Dec 03 01:17:32 compute-0 recursing_rhodes[192116]: /usr/bin/monmaptool: set fsid to 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:32 compute-0 recursing_rhodes[192116]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 03 01:17:32 compute-0 systemd[1]: libpod-b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d.scope: Deactivated successfully.
Dec 03 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.545654902 +0000 UTC m=+0.298706481 container died b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-958752b5be4e6252bdea2658ce49e12ba65b7eb8d984288dcdc4dcc674cb625b-merged.mount: Deactivated successfully.
Dec 03 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.626723283 +0000 UTC m=+0.379774852 container remove b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:17:32 compute-0 systemd[1]: libpod-conmon-b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d.scope: Deactivated successfully.
Dec 03 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.750662762 +0000 UTC m=+0.082746623 container create 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.715942944 +0000 UTC m=+0.048026855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:32 compute-0 systemd[1]: Started libpod-conmon-58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023.scope.
Dec 03 01:17:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.895225175 +0000 UTC m=+0.227309026 container init 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.920271818 +0000 UTC m=+0.252355669 container start 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.926735115 +0000 UTC m=+0.258819006 container attach 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:33 compute-0 systemd[1]: libpod-58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023.scope: Deactivated successfully.
Dec 03 01:17:33 compute-0 podman[192134]: 2025-12-03 01:17:33.057509992 +0000 UTC m=+0.389593833 container died 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8-merged.mount: Deactivated successfully.
Dec 03 01:17:33 compute-0 podman[192134]: 2025-12-03 01:17:33.145746668 +0000 UTC m=+0.477830529 container remove 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:17:33 compute-0 systemd[1]: libpod-conmon-58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023.scope: Deactivated successfully.
Dec 03 01:17:33 compute-0 systemd[1]: Reloading.
Dec 03 01:17:33 compute-0 systemd-rc-local-generator[192217]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:17:33 compute-0 systemd-sysv-generator[192222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:17:33 compute-0 systemd[1]: Reloading.
Dec 03 01:17:33 compute-0 systemd-sysv-generator[192261]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:17:33 compute-0 systemd-rc-local-generator[192257]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:17:33 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 03 01:17:34 compute-0 systemd[1]: Reloading.
Dec 03 01:17:34 compute-0 systemd-sysv-generator[192312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:17:34 compute-0 systemd-rc-local-generator[192309]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:17:34 compute-0 podman[192267]: 2025-12-03 01:17:34.191077655 +0000 UTC m=+0.144354038 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:17:34 compute-0 systemd[1]: Reached target Ceph cluster 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:17:34 compute-0 systemd[1]: Reloading.
Dec 03 01:17:34 compute-0 systemd-rc-local-generator[192341]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:17:34 compute-0 systemd-sysv-generator[192348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:17:34 compute-0 systemd[1]: Reloading.
Dec 03 01:17:35 compute-0 systemd-rc-local-generator[192388]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:17:35 compute-0 systemd-sysv-generator[192391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:17:35 compute-0 sshd-session[192226]: Invalid user zhangsan from 103.146.202.174 port 55388
Dec 03 01:17:35 compute-0 systemd[1]: Created slice Slice /system/ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:17:35 compute-0 systemd[1]: Reached target System Time Set.
Dec 03 01:17:35 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 03 01:17:35 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:17:35 compute-0 sshd-session[192226]: Received disconnect from 103.146.202.174 port 55388:11: Bye Bye [preauth]
Dec 03 01:17:35 compute-0 sshd-session[192226]: Disconnected from invalid user zhangsan 103.146.202.174 port 55388 [preauth]
Dec 03 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.754652179 +0000 UTC m=+0.078305845 container create f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.721014747 +0000 UTC m=+0.044668463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.922195633 +0000 UTC m=+0.245849339 container init f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.936436361 +0000 UTC m=+0.260090017 container start f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:17:35 compute-0 bash[192441]: f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088
Dec 03 01:17:35 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:17:36 compute-0 ceph-mon[192460]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: pidfile_write: ignore empty --pid-file
Dec 03 01:17:36 compute-0 ceph-mon[192460]: load: jerasure load: lrc 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Git sha 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DB SUMMARY
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DB Session ID:  UO1TRDRI7DJ41Z0ZY1VU
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                     Options.env: 0x559f6004bc40
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                Options.info_log: 0x559f60e14e80
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                 Options.wal_dir: 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.write_buffer_manager: 0x559f60e24b40
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.row_cache: None
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                              Options.wal_filter: None
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.wal_compression: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_background_jobs: 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_total_wal_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:       Options.compaction_readahead_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Compression algorithms supported:
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kZSTD supported: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:           Options.merge_operator: 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.compaction_filter: None
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f60e14a80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x559f60e0d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.write_buffer_size: 33554432
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:  Options.max_write_buffer_number: 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.compression: NoCompression
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.num_levels: 7
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
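[Editor's note] The block above is the monitor's RocksDB option dump, printed on every store open; the values are RocksDB defaults plus Ceph's mon_rocksdb_options override string. A quick way to see that override (a sketch; assumes an admin keyring is available, e.g. via cephadm shell):

    # Override string applied to the mon's embedded RocksDB
    ceph config get mon mon_rocksdb_options

    # Or ask the running daemon over its admin socket
    ceph daemon mon.compute-0 config get mon_rocksdb_options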
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded, manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0, prev_log_number is 0, max_column_family is 0, min_log_number_to_keep is 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 934233b3-95a6-4219-87ec-c9177c468bdc
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724656019708, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724656025137, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "UO1TRDRI7DJ41Z0ZY1VU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724656025343, "job": 1, "event": "recovery_finished"}
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559f60e36e00
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DB pointer 0x559f60f40000
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559f60e0d1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
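[Editor's note] The DUMPING STATS block is RocksDB's periodic statistics snapshot; at first open everything is effectively zero, and the File Read Latency histogram is empty, so nothing follows its header. For offline inspection of the same store, the packaged key-value tools read the store.db directory directly; a sketch, using the path as the daemon sees it (on a containerized host these are usually run inside cephadm shell, and the monitor must be stopped first because the tools take the store lock):

    # Stop the mon before touching its store
    systemctl stop ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0.service

    # Raw key listing from the mon's RocksDB store
    ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-compute-0/store.db list

    # Mon-specific view of the same data
    ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 dump-keys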
Dec 03 01:17:36 compute-0 ceph-mon[192460]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@-1(???) e0 preinit fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 03 01:17:36 compute-0 ceph-mon[192460]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 03 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
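[Editor's note] create_pending seeds the initial osdmap with the default utilization thresholds (nearfull 0.85, backfillfull 0.90, full 0.95). Once the cluster answers commands, the same values live in the osdmap and can be read or tuned:

    # Ratios currently recorded in the osdmap
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'

    # Adjust them; values are fractions of raw capacity
    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95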
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 03 01:17:36 compute-0 ceph-mon[192460]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 03 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
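[Editor's note] win_standalone_election fires a second time because mkfs rewrites the monmap (e0 to e1); with a single monitor every election is won unopposed and quorum is just rank 0. The live equivalents of these log lines:

    ceph mon stat                            # epoch, mon addresses, quorum ranks
    ceph quorum_status --format json-pretty  # full election/quorum detail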
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-12-03T01:17:32.985354Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
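[Editor's note] The mgrc update_daemon_metadata line is the monitor registering its own daemon metadata (release, container image, CPU, devices, kernel, and so on). The same record is queryable once the cluster is reachable:

    ceph mon metadata compute-0   # one monitor
    ceph mon metadata             # all monitors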
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 03 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.092952636 +0000 UTC m=+0.090281828 container create ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).mds e1 new map
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).mds e1 print_map
                                            e1
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: -1
                                             
                                            No filesystems configured
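[Editor's note] print_map shows an empty FSMap: multiple-filesystem support is flagged on, but no CephFS filesystem exists yet. Client-side checks that mirror this output:

    ceph fs ls     # prints "No filesystems enabled" against an empty map
    ceph fs dump   # the full FSMap, matching epoch e1 above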
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 03 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : fsmap 
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mkfs 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 03 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 03 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 03 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.054154087 +0000 UTC m=+0.051483329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:36 compute-0 systemd[1]: Started libpod-conmon-ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760.scope.
Dec 03 01:17:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.280255873 +0000 UTC m=+0.277585105 container init ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.310344519 +0000 UTC m=+0.307673711 container start ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.317442112 +0000 UTC m=+0.314771304 container attach ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 03 01:17:36 compute-0 ceph-mon[192460]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834612785' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:   cluster:
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     id:     3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     health: HEALTH_OK
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:  
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:   services:
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     mon: 1 daemons, quorum compute-0 (age 0.689301s)
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     mgr: no daemons active
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     osd: 0 osds: 0 up, 0 in
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:  
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:   data:
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     pools:   0 pools, 0 pgs
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     objects: 0 objects, 0 B
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     usage:   0 B used, 0 B / 0 B avail
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:     pgs:     
Dec 03 01:17:36 compute-0 priceless_franklin[192513]:  
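[Editor's note] The priceless_franklin lines are the stdout of a one-shot status container attached to the journal: the bootstrap monitor is healthy, but there is no mgr and no OSDs, pools, or PGs yet. The same check by hand, assuming the cephadm-style layout this host uses:

    # From the host, with the admin keyring under /etc/ceph
    ceph -s

    # Or inside a throwaway container using the cluster's own image
    cephadm shell -- ceph -s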
Dec 03 01:17:36 compute-0 systemd[1]: libpod-ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760.scope: Deactivated successfully.
Dec 03 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.798601191 +0000 UTC m=+0.795930383 container died ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0-merged.mount: Deactivated successfully.
Dec 03 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.893911171 +0000 UTC m=+0.891240363 container remove ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:17:36 compute-0 systemd[1]: libpod-conmon-ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760.scope: Deactivated successfully.
Dec 03 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.041315543 +0000 UTC m=+0.100875716 container create 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:36.99536698 +0000 UTC m=+0.054927173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:37 compute-0 systemd[1]: Started libpod-conmon-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope.
Dec 03 01:17:37 compute-0 ceph-mon[192460]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 03 01:17:37 compute-0 ceph-mon[192460]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 03 01:17:37 compute-0 ceph-mon[192460]: fsmap 
Dec 03 01:17:37 compute-0 ceph-mon[192460]: osdmap e1: 0 total, 0 up, 0 in
Dec 03 01:17:37 compute-0 ceph-mon[192460]: mgrmap e1: no daemons active
Dec 03 01:17:37 compute-0 ceph-mon[192460]: from='client.? 192.168.122.100:0/1834612785' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 01:17:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.196076806 +0000 UTC m=+0.255636989 container init 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.22489184 +0000 UTC m=+0.284452023 container start 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.232482435 +0000 UTC m=+0.292042668 container attach 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 01:17:37 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 03 01:17:37 compute-0 ceph-mon[192460]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 03 01:17:37 compute-0 ceph-mon[192460]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 03 01:17:37 compute-0 xenodochial_lewin[192565]: 
Dec 03 01:17:37 compute-0 xenodochial_lewin[192565]: [global]
Dec 03 01:17:37 compute-0 xenodochial_lewin[192565]:         fsid = 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:37 compute-0 xenodochial_lewin[192565]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 03 01:17:37 compute-0 xenodochial_lewin[192565]:         osd_crush_chooseleaf_type = 0
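[Editor's note] config assimilate-conf ingests an ini-style ceph.conf into the monitors' central config database; what it prints back (the [global] section above) is the residue that stays in the local file, such as fsid and mon_host, which a client needs before it can contact any monitor. Typical invocation, with illustrative paths:

    # Push local options into the mon config store;
    # whatever cannot be assimilated is written to the -o file
    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.remainder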
Dec 03 01:17:37 compute-0 systemd[1]: libpod-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope: Deactivated successfully.
Dec 03 01:17:37 compute-0 conmon[192565]: conmon 59145c025e99201709c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope/container/memory.events
Dec 03 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.711052451 +0000 UTC m=+0.770612634 container died 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80-merged.mount: Deactivated successfully.
Dec 03 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.810111562 +0000 UTC m=+0.869671715 container remove 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:17:37 compute-0 systemd[1]: libpod-conmon-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope: Deactivated successfully.
Dec 03 01:17:37 compute-0 podman[192604]: 2025-12-03 01:17:37.897726963 +0000 UTC m=+0.064036696 container create 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:17:37 compute-0 podman[192604]: 2025-12-03 01:17:37.872573669 +0000 UTC m=+0.038883392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:37 compute-0 systemd[1]: Started libpod-conmon-0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6.scope.
Dec 03 01:17:38 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.077308942 +0000 UTC m=+0.243618725 container init 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.093055557 +0000 UTC m=+0.259365290 container start 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.099783002 +0000 UTC m=+0.266092735 container attach 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:17:38 compute-0 ceph-mon[192460]: from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 03 01:17:38 compute-0 ceph-mon[192460]: from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 03 01:17:38 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:17:38 compute-0 ceph-mon[192460]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11155986' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
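[Editor's note] generate-minimal-conf is the complementary step: it emits the smallest ceph.conf (essentially fsid plus mon_host) that a client needs to find the cluster, leaving everything else to the central config store:

    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.minimal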
Dec 03 01:17:38 compute-0 systemd[1]: libpod-0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6.scope: Deactivated successfully.
Dec 03 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.55265152 +0000 UTC m=+0.718961253 container died 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9-merged.mount: Deactivated successfully.
Dec 03 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.640952728 +0000 UTC m=+0.807262421 container remove 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:17:38 compute-0 systemd[1]: libpod-conmon-0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6.scope: Deactivated successfully.
Dec 03 01:17:38 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:17:39 compute-0 ceph-mon[192460]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 03 01:17:39 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 03 01:17:39 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 shutdown
Dec 03 01:17:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0[192456]: 2025-12-03T01:17:39.023+0000 7f8b7d467640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 03 01:17:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0[192456]: 2025-12-03T01:17:39.023+0000 7f8b7d467640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 03 01:17:39 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 03 01:17:39 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
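[Editor's note] This is a clean SIGTERM shutdown: podman-init (PID 1 in the container) forwards the signal, the mon logs it from leader state, and RocksDB cancels background work before closing the store. The unit follows the ceph-<fsid>@<daemon>.service naming, so the same restart can be driven by hand:

    systemctl restart ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0.service

    # Follow it in the journal
    journalctl -u ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0.service -f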
Dec 03 01:17:39 compute-0 podman[192687]: 2025-12-03 01:17:39.175919463 +0000 UTC m=+0.229095051 container died f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:17:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9-merged.mount: Deactivated successfully.
Dec 03 01:17:39 compute-0 podman[192687]: 2025-12-03 01:17:39.249668625 +0000 UTC m=+0.302844213 container remove f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:17:39 compute-0 bash[192687]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0
Dec 03 01:17:39 compute-0 podman[192712]: 2025-12-03 01:17:39.372333172 +0000 UTC m=+0.104576046 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git)
Dec 03 01:17:39 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0.service: Deactivated successfully.
Dec 03 01:17:39 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:17:39 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0.service: Consumed 2.121s CPU time.
Dec 03 01:17:39 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:17:39 compute-0 podman[192802]: 2025-12-03 01:17:39.978848774 +0000 UTC m=+0.095908425 container create d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:17:40 compute-0 podman[192802]: 2025-12-03 01:17:39.943797498 +0000 UTC m=+0.060857219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:40 compute-0 podman[192802]: 2025-12-03 01:17:40.098218382 +0000 UTC m=+0.215278103 container init d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec 03 01:17:40 compute-0 podman[192802]: 2025-12-03 01:17:40.123224143 +0000 UTC m=+0.240283804 container start d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:17:40 compute-0 bash[192802]: d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b
Dec 03 01:17:40 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
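[Editor's note] After the managed restart the daemon runs under a stable container name (ceph-<fsid>-mon-<host>) instead of the random names podman gave the one-shot helpers. To confirm the new instance (the admin-socket query must run where the socket is reachable, e.g. inside the mon container):

    podman ps --filter name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0

    ceph daemon mon.compute-0 mon_status   # state should be "leader" once quorum re-forms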
Dec 03 01:17:40 compute-0 ceph-mon[192821]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: pidfile_write: ignore empty --pid-file
Dec 03 01:17:40 compute-0 ceph-mon[192821]: load: jerasure load: lrc 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Git sha 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DB SUMMARY
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DB Session ID:  8J96JYHVNMM2V9HBWT3Y
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                     Options.env: 0x559a0ab11c40
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                Options.info_log: 0x559a0b5bf040
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                 Options.wal_dir: 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.write_buffer_manager: 0x559a0b5ceb40
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.row_cache: None
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                              Options.wal_filter: None
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.wal_compression: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_background_jobs: 2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_total_wal_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:       Options.compaction_readahead_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Compression algorithms supported:
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kZSTD supported: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DMutex implementation: pthread_mutex_t
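Everything from "DB SUMMARY" down to this line is RocksDB echoing its effective database-level options as ceph-mon opens its key/value store (store.db holds the monitor's cluster maps). When comparing settings across restarts or Ceph versions, folding the dump into a dict makes diffs trivial; a minimal sketch over journal text, assuming the "Options.<key>: <value>" shape used above:

    import re

    def parse_rocksdb_options(lines):
        """Collect 'Options.foo: bar' pairs from a rocksdb startup dump."""
        opts = {}
        for line in lines:
            m = re.search(r"Options\.([\w.\[\]]+)\s*:\s*(.*)$", line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    dump = [
        "rocksdb:                          Options.max_open_files: -1",
        "rocksdb:        Options.write_buffer_size: 33554432",
    ]
    print(parse_rocksdb_options(dump))
    # {'max_open_files': '-1', 'write_buffer_size': '33554432'}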
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:           Options.merge_operator: 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.compaction_filter: None
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a0b5bec40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x559a0b5b71f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.write_buffer_size: 33554432
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:  Options.max_write_buffer_number: 2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.compression: NoCompression
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.num_levels: 7
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
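The column-family dump above pins down the numbers worth remembering for a mon store: a 512 MiB BinnedLRUCache block cache (capacity 536870912, matching the "512.00 MB" reported in the stats dump below), 32 MiB memtables (write_buffer_size 33554432, max_write_buffer_number 2), and level-style compaction with no compression. A quick unit-conversion sketch using the values quoted above:

    # Byte values copied from the options dump above.
    block_cache_capacity = 536870912
    write_buffer_size = 33554432

    MIB = 1024 ** 2
    print(block_cache_capacity / MIB)  # 512.0 -> the "512.00 MB" in the stats dump
    print(write_buffer_size / MIB)     # 32.0  -> one of up to two 32 MiB memtables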
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded, manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5, prev_log_number is 0, max_column_family is 0, min_log_number_to_keep is 5
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 934233b3-95a6-4219-87ec-c9177c468bdc
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724660185193, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724660188430, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724660, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724660188604, "job": 1, "event": "recovery_finished"}
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559a0b5e0e00
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DB pointer 0x559a0b66a000
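The open sequence is now complete: the EVENT_LOG_v1 lines show WAL 000009.log (54564 bytes) being replayed into the new SST 000013 (54153 bytes, 100 entries, 3 deletions), a fresh manifest (number 15) being cut, and the replayed WAL deleted. The payload after the EVENT_LOG_v1 marker is plain JSON, so it is easy to mine; a minimal extraction sketch, assuming the line shape shown above:

    import json
    import re

    def event_log_payloads(lines):
        """Yield the JSON payloads of rocksdb EVENT_LOG_v1 journal lines."""
        for line in lines:
            m = re.search(r"EVENT_LOG_v1 (\{.*\})\s*$", line)
            if m:
                yield json.loads(m.group(1))

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764724660188604, '
              '"job": 1, "event": "recovery_finished"}')
    for ev in event_log_payloads([sample]):
        print(ev["event"], ev["job"])  # recovery_finished 1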
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 2.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 2.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
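This startup stats dump is mostly zeros because uptime is effectively zero; the one populated row records the single flush performed during WAL recovery (2 L0 files totalling 54.78 KB). The block-cache entry line is dense but regular, so it parses cleanly; a minimal sketch against the text above (the implausible occupancy value, 2^64-1, looks like an underflowed counter in the snapshot rather than real usage):

    import re

    stats = ("Block cache entry stats(count,size,portion): "
             "DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) "
             "IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)")

    # Each entry has the shape Name(count,size,portion-of-capacity).
    for name, count, size, portion in re.findall(
            r"(\w+)\((\d+),([\d.]+ [KMG]?B),([^)]+)\)", stats):
        print(f"{name}: {count} blocks, {size}, {portion} of cache")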
Dec 03 01:17:40 compute-0 ceph-mon[192821]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???) e1 preinit fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).mds e1 new map
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).mds e1 print_map
                                            e1
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: -1
                                             
                                            No filesystems configured
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 03 01:17:40 compute-0 ceph-mon[192821]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(electing) e1 collect_metadata vda: no unique device id for vda: fallback method found neither model nor serial
Dec 03 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 collect_metadata vda: no unique device id for vda: fallback method found neither model nor serial
Dec 03 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 03 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
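With only one monitor in the map, bootstrap is immediate: the mon moves from rank -1 to rank 0, wins a standalone election, and becomes leader of a quorum of one; the empty fsmap, zero-OSD osdmap, and daemon-less mgrmap are all expected at this point, before any other Ceph daemons are deployed. The monmap line encodes each mon's v2/v1 messenger endpoints; a minimal parsing sketch over that line:

    import re

    monmap = ("monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,"
              "v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}")

    # Pull out each mon's name and its v1/v2 endpoints (addr:port/nonce).
    for name, addrs in re.findall(r"([\w-]+)=\[([^\]]+)\]", monmap):
        for proto, host, port in re.findall(r"(v[12]):([\d.]+):(\d+)/\d+", addrs):
            print(name, proto, host, port)
    # compute-0 v2 192.168.122.100 3300
    # compute-0 v1 192.168.122.100 6789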
Dec 03 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.26712173 +0000 UTC m=+0.086030504 container create 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 03 01:17:40 compute-0 ceph-mon[192821]: fsmap 
Dec 03 01:17:40 compute-0 ceph-mon[192821]: osdmap e1: 0 total, 0 up, 0 in
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mgrmap e1: no daemons active
Dec 03 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.236427039 +0000 UTC m=+0.055335823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:40 compute-0 systemd[1]: Started libpod-conmon-002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e.scope.
Dec 03 01:17:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.437653917 +0000 UTC m=+0.256562741 container init 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.465678512 +0000 UTC m=+0.284587296 container start 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.471909045 +0000 UTC m=+0.290817839 container attach 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Dec 03 01:17:40 compute-0 systemd[1]: libpod-002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e.scope: Deactivated successfully.
Dec 03 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.939227816 +0000 UTC m=+0.758136590 container died 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
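The container that just died (auto-named practical_ritchie, so it was started without an explicit --name) is a short-lived bootstrap helper: it attached, issued a ceph CLI call against the new mon (the handle_command line above shows a config set of public_network), and exited, all well within a second. Podman's journal timestamps make such lifetimes easy to compute; a minimal sketch using the create/died times quoted above:

    from datetime import datetime

    # Fractional seconds trimmed to 6 digits because %f accepts at most microseconds.
    created = "2025-12-03 01:17:40.26712173"[:26]
    died = "2025-12-03 01:17:40.939227816"[:26]

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    lifetime = datetime.strptime(died, fmt) - datetime.strptime(created, fmt)
    print(lifetime.total_seconds())  # ~0.672 s from container create to died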
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.966 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.967 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd-merged.mount: Deactivated successfully.
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:17:41 compute-0 podman[192822]: 2025-12-03 01:17:41.023760051 +0000 UTC m=+0.842668785 container remove 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:17:41 compute-0 systemd[1]: libpod-conmon-002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e.scope: Deactivated successfully.
Dec 03 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.140944185 +0000 UTC m=+0.086896224 container create b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.106474973 +0000 UTC m=+0.052427102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:41 compute-0 systemd[1]: Started libpod-conmon-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope.
Dec 03 01:17:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.28187221 +0000 UTC m=+0.227824349 container init b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.297973103 +0000 UTC m=+0.243925172 container start b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.307433434 +0000 UTC m=+0.253385743 container attach b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:17:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Dec 03 01:17:41 compute-0 systemd[1]: libpod-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope: Deactivated successfully.
Dec 03 01:17:41 compute-0 conmon[192931]: conmon b8a2df7493435c0b5424 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope/container/memory.events
Dec 03 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.815972743 +0000 UTC m=+0.761924802 container died b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af-merged.mount: Deactivated successfully.
Dec 03 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.893894647 +0000 UTC m=+0.839846706 container remove b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:17:41 compute-0 systemd[1]: libpod-conmon-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope: Deactivated successfully.
Dec 03 01:17:41 compute-0 systemd[1]: Reloading.
Dec 03 01:17:42 compute-0 systemd-sysv-generator[193001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:17:42 compute-0 systemd-rc-local-generator[192997]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:17:42 compute-0 sshd-session[192966]: Invalid user autrede from 34.66.72.251 port 49830
Dec 03 01:17:42 compute-0 systemd[1]: Reloading.
Dec 03 01:17:42 compute-0 sshd-session[192966]: Received disconnect from 34.66.72.251 port 49830:11: Bye Bye [preauth]
Dec 03 01:17:42 compute-0 sshd-session[192966]: Disconnected from invalid user autrede 34.66.72.251 port 49830 [preauth]
Dec 03 01:17:42 compute-0 systemd-rc-local-generator[193036]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:17:42 compute-0 systemd-sysv-generator[193041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:17:42 compute-0 systemd[1]: Starting Ceph mgr.compute-0.rysove for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.420767342 +0000 UTC m=+0.091307032 container create b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.385697125 +0000 UTC m=+0.056236865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/var/lib/ceph/mgr/ceph-compute-0.rysove supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.519657079 +0000 UTC m=+0.190196819 container init b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.546373952 +0000 UTC m=+0.216913632 container start b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:17:43 compute-0 bash[193090]: b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb
Dec 03 01:17:43 compute-0 systemd[1]: Started Ceph mgr.compute-0.rysove for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:17:43 compute-0 ceph-mgr[193109]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:17:43 compute-0 ceph-mgr[193109]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 03 01:17:43 compute-0 ceph-mgr[193109]: pidfile_write: ignore empty --pid-file
Dec 03 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.681909795 +0000 UTC m=+0.071863768 container create 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:17:43 compute-0 systemd[1]: Started libpod-conmon-7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e.scope.
Dec 03 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.660968263 +0000 UTC m=+0.050922266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:43 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'alerts'
Dec 03 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.810309733 +0000 UTC m=+0.200263806 container init 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.820108932 +0000 UTC m=+0.210062945 container start 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.825976526 +0000 UTC m=+0.215930539 container attach 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:17:43 compute-0 podman[193148]: 2025-12-03 01:17:43.89038705 +0000 UTC m=+0.155605344 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 03 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'balancer'
Dec 03 01:17:44 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:44.085+0000 7fca98514140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 03 01:17:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:17:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659027998' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:44 compute-0 zealous_cerf[193151]: 
Dec 03 01:17:44 compute-0 zealous_cerf[193151]: {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "health": {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "status": "HEALTH_OK",
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "checks": {},
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "mutes": []
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     },
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "election_epoch": 5,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "quorum": [
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         0
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     ],
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "quorum_names": [
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "compute-0"
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     ],
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "quorum_age": 4,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "monmap": {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "epoch": 1,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "min_mon_release_name": "reef",
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_mons": 1
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     },
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "osdmap": {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "epoch": 1,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_osds": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_up_osds": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "osd_up_since": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_in_osds": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "osd_in_since": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_remapped_pgs": 0
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     },
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "pgmap": {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "pgs_by_state": [],
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_pgs": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_pools": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_objects": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "data_bytes": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "bytes_used": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "bytes_avail": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "bytes_total": 0
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     },
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "fsmap": {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "epoch": 1,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "by_rank": [],
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "up:standby": 0
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     },
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "mgrmap": {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "available": false,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "num_standbys": 0,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "modules": [
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:             "iostat",
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:             "nfs",
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:             "restful"
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         ],
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "services": {}
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     },
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "servicemap": {
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "epoch": 1,
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:         "services": {}
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     },
Dec 03 01:17:44 compute-0 zealous_cerf[193151]:     "progress_events": {}
Dec 03 01:17:44 compute-0 zealous_cerf[193151]: }
Dec 03 01:17:44 compute-0 systemd[1]: libpod-7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e.scope: Deactivated successfully.
Dec 03 01:17:44 compute-0 podman[193110]: 2025-12-03 01:17:44.297317645 +0000 UTC m=+0.687271668 container died 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 01:17:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1659027998' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 03 01:17:44 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:44.334+0000 7fca98514140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 03 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'cephadm'
Dec 03 01:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918-merged.mount: Deactivated successfully.
Dec 03 01:17:44 compute-0 podman[193110]: 2025-12-03 01:17:44.386780451 +0000 UTC m=+0.776734434 container remove 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:17:44 compute-0 systemd[1]: libpod-conmon-7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e.scope: Deactivated successfully.
Dec 03 01:17:46 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'crash'
Dec 03 01:17:46 compute-0 ceph-mgr[193109]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 03 01:17:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:46.518+0000 7fca98514140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 03 01:17:46 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'dashboard'
Dec 03 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.546217606 +0000 UTC m=+0.106380800 container create 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.510845762 +0000 UTC m=+0.071008996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:46 compute-0 systemd[1]: Started libpod-conmon-52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516.scope.
Dec 03 01:17:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.716128919 +0000 UTC m=+0.276292163 container init 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.746722356 +0000 UTC m=+0.306885550 container start 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.775062329 +0000 UTC m=+0.335225583 container attach 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:17:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:17:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672473183' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:47 compute-0 zen_ganguly[193236]: 
Dec 03 01:17:47 compute-0 zen_ganguly[193236]: {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "health": {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "status": "HEALTH_OK",
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "checks": {},
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "mutes": []
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     },
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "election_epoch": 5,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "quorum": [
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         0
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     ],
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "quorum_names": [
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "compute-0"
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     ],
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "quorum_age": 6,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "monmap": {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "epoch": 1,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "min_mon_release_name": "reef",
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_mons": 1
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     },
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "osdmap": {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "epoch": 1,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_osds": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_up_osds": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "osd_up_since": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_in_osds": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "osd_in_since": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_remapped_pgs": 0
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     },
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "pgmap": {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "pgs_by_state": [],
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_pgs": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_pools": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_objects": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "data_bytes": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "bytes_used": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "bytes_avail": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "bytes_total": 0
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     },
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "fsmap": {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "epoch": 1,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "by_rank": [],
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "up:standby": 0
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     },
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "mgrmap": {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "available": false,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "num_standbys": 0,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "modules": [
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:             "iostat",
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:             "nfs",
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:             "restful"
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         ],
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "services": {}
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     },
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "servicemap": {
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "epoch": 1,
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:         "services": {}
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     },
Dec 03 01:17:47 compute-0 zen_ganguly[193236]:     "progress_events": {}
Dec 03 01:17:47 compute-0 zen_ganguly[193236]: }
Dec 03 01:17:47 compute-0 systemd[1]: libpod-52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516.scope: Deactivated successfully.
Dec 03 01:17:47 compute-0 podman[193219]: 2025-12-03 01:17:47.219400188 +0000 UTC m=+0.779563382 container died 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/672473183' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51-merged.mount: Deactivated successfully.
Dec 03 01:17:47 compute-0 podman[193219]: 2025-12-03 01:17:47.30499923 +0000 UTC m=+0.865162394 container remove 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:17:47 compute-0 systemd[1]: libpod-conmon-52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516.scope: Deactivated successfully.
Dec 03 01:17:47 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'devicehealth'
Dec 03 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 03 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'diskprediction_local'
Dec 03 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:48.189+0000 7fca98514140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 03 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 03 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 03 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]:   from numpy import show_config as show_numpy_config
Dec 03 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 03 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'influx'
Dec 03 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:48.684+0000 7fca98514140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 03 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 03 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'insights'
Dec 03 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:48.906+0000 7fca98514140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 03 01:17:49 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'iostat'
Dec 03 01:17:49 compute-0 ceph-mgr[193109]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 03 01:17:49 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'k8sevents'
Dec 03 01:17:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:49.350+0000 7fca98514140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 03 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.48421718 +0000 UTC m=+0.134438347 container create ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.41221621 +0000 UTC m=+0.062437427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:49 compute-0 systemd[1]: Started libpod-conmon-ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84.scope.
Dec 03 01:17:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.623677578 +0000 UTC m=+0.273898745 container init ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.638970822 +0000 UTC m=+0.289191969 container start ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.643349399 +0000 UTC m=+0.293570536 container attach ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:17:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661237115' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:50 compute-0 confident_morse[193293]: 
Dec 03 01:17:50 compute-0 confident_morse[193293]: {
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "health": {
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "status": "HEALTH_OK",
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "checks": {},
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "mutes": []
Dec 03 01:17:50 compute-0 confident_morse[193293]:     },
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "election_epoch": 5,
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "quorum": [
Dec 03 01:17:50 compute-0 confident_morse[193293]:         0
Dec 03 01:17:50 compute-0 confident_morse[193293]:     ],
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "quorum_names": [
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "compute-0"
Dec 03 01:17:50 compute-0 confident_morse[193293]:     ],
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "quorum_age": 9,
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "monmap": {
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "epoch": 1,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "min_mon_release_name": "reef",
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_mons": 1
Dec 03 01:17:50 compute-0 confident_morse[193293]:     },
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "osdmap": {
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "epoch": 1,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_osds": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_up_osds": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "osd_up_since": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_in_osds": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "osd_in_since": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_remapped_pgs": 0
Dec 03 01:17:50 compute-0 confident_morse[193293]:     },
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "pgmap": {
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "pgs_by_state": [],
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_pgs": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_pools": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_objects": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "data_bytes": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "bytes_used": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "bytes_avail": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "bytes_total": 0
Dec 03 01:17:50 compute-0 confident_morse[193293]:     },
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "fsmap": {
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "epoch": 1,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "by_rank": [],
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "up:standby": 0
Dec 03 01:17:50 compute-0 confident_morse[193293]:     },
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "mgrmap": {
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "available": false,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "num_standbys": 0,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "modules": [
Dec 03 01:17:50 compute-0 confident_morse[193293]:             "iostat",
Dec 03 01:17:50 compute-0 confident_morse[193293]:             "nfs",
Dec 03 01:17:50 compute-0 confident_morse[193293]:             "restful"
Dec 03 01:17:50 compute-0 confident_morse[193293]:         ],
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "services": {}
Dec 03 01:17:50 compute-0 confident_morse[193293]:     },
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "servicemap": {
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "epoch": 1,
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:17:50 compute-0 confident_morse[193293]:         "services": {}
Dec 03 01:17:50 compute-0 confident_morse[193293]:     },
Dec 03 01:17:50 compute-0 confident_morse[193293]:     "progress_events": {}
Dec 03 01:17:50 compute-0 confident_morse[193293]: }
Dec 03 01:17:50 compute-0 systemd[1]: libpod-ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84.scope: Deactivated successfully.
Dec 03 01:17:50 compute-0 podman[193276]: 2025-12-03 01:17:50.105748159 +0000 UTC m=+0.755969336 container died ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:17:50 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2661237115' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a-merged.mount: Deactivated successfully.
Dec 03 01:17:50 compute-0 podman[193276]: 2025-12-03 01:17:50.183661603 +0000 UTC m=+0.833882770 container remove ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 01:17:50 compute-0 systemd[1]: libpod-conmon-ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84.scope: Deactivated successfully.
Dec 03 01:17:51 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'localpool'
Dec 03 01:17:51 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mds_autoscaler'
Dec 03 01:17:51 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mirroring'
Dec 03 01:17:52 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'nfs'
Dec 03 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.319268886 +0000 UTC m=+0.091985749 container create f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.286886335 +0000 UTC m=+0.059603248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:52 compute-0 systemd[1]: Started libpod-conmon-f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be.scope.
Dec 03 01:17:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.490419549 +0000 UTC m=+0.263136462 container init f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.50397489 +0000 UTC m=+0.276691723 container start f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.51093844 +0000 UTC m=+0.283655293 container attach f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:17:52 compute-0 sshd-session[193329]: Invalid user localhost from 80.253.31.232 port 35218
Dec 03 01:17:52 compute-0 sshd-session[193329]: Received disconnect from 80.253.31.232 port 35218:11: Bye Bye [preauth]
Dec 03 01:17:52 compute-0 sshd-session[193329]: Disconnected from invalid user localhost 80.253.31.232 port 35218 [preauth]
Dec 03 01:17:52 compute-0 ceph-mgr[193109]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 03 01:17:52 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'orchestrator'
Dec 03 01:17:52 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:52.914+0000 7fca98514140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 03 01:17:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:17:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636514488' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:52 compute-0 musing_pascal[193347]: 
Dec 03 01:17:52 compute-0 musing_pascal[193347]: {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "health": {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "status": "HEALTH_OK",
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "checks": {},
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "mutes": []
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     },
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "election_epoch": 5,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "quorum": [
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         0
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     ],
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "quorum_names": [
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "compute-0"
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     ],
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "quorum_age": 12,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "monmap": {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "epoch": 1,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "min_mon_release_name": "reef",
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_mons": 1
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     },
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "osdmap": {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "epoch": 1,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_osds": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_up_osds": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "osd_up_since": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_in_osds": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "osd_in_since": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_remapped_pgs": 0
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     },
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "pgmap": {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "pgs_by_state": [],
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_pgs": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_pools": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_objects": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "data_bytes": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "bytes_used": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "bytes_avail": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "bytes_total": 0
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     },
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "fsmap": {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "epoch": 1,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "by_rank": [],
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "up:standby": 0
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     },
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "mgrmap": {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "available": false,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "num_standbys": 0,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "modules": [
Dec 03 01:17:52 compute-0 musing_pascal[193347]:             "iostat",
Dec 03 01:17:52 compute-0 musing_pascal[193347]:             "nfs",
Dec 03 01:17:52 compute-0 musing_pascal[193347]:             "restful"
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         ],
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "services": {}
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     },
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "servicemap": {
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "epoch": 1,
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:17:52 compute-0 musing_pascal[193347]:         "services": {}
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     },
Dec 03 01:17:52 compute-0 musing_pascal[193347]:     "progress_events": {}
Dec 03 01:17:52 compute-0 musing_pascal[193347]: }
Dec 03 01:17:53 compute-0 systemd[1]: libpod-f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be.scope: Deactivated successfully.
Dec 03 01:17:53 compute-0 podman[193331]: 2025-12-03 01:17:53.004718448 +0000 UTC m=+0.777435311 container died f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:17:53 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3636514488' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4-merged.mount: Deactivated successfully.
Dec 03 01:17:53 compute-0 podman[193331]: 2025-12-03 01:17:53.103111073 +0000 UTC m=+0.875827906 container remove f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:17:53 compute-0 systemd[1]: libpod-conmon-f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be.scope: Deactivated successfully.
Dec 03 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 03 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_perf_query'
Dec 03 01:17:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:53.600+0000 7fca98514140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 03 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 03 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_support'
Dec 03 01:17:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:53.883+0000 7fca98514140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 03 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 03 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'pg_autoscaler'
Dec 03 01:17:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:54.128+0000 7fca98514140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 03 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 03 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'progress'
Dec 03 01:17:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:54.403+0000 7fca98514140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 03 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 03 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'prometheus'
Dec 03 01:17:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:54.640+0000 7fca98514140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 03 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.236896691 +0000 UTC m=+0.088630537 container create adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.203647898 +0000 UTC m=+0.055381764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:55 compute-0 systemd[1]: Started libpod-conmon-adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08.scope.
Dec 03 01:17:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.411939409 +0000 UTC m=+0.263673335 container init adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.429801185 +0000 UTC m=+0.281535041 container start adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.436983101 +0000 UTC m=+0.288717007 container attach adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 03 01:17:55 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:55.627+0000 7fca98514140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 03 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rbd_support'
Dec 03 01:17:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:17:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824690462' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:55 compute-0 peaceful_elion[193400]: 
Dec 03 01:17:55 compute-0 peaceful_elion[193400]: {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "health": {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "status": "HEALTH_OK",
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "checks": {},
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "mutes": []
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     },
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "election_epoch": 5,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "quorum": [
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         0
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     ],
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "quorum_names": [
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "compute-0"
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     ],
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "quorum_age": 15,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "monmap": {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "epoch": 1,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "min_mon_release_name": "reef",
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_mons": 1
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     },
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "osdmap": {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "epoch": 1,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_osds": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_up_osds": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "osd_up_since": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_in_osds": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "osd_in_since": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_remapped_pgs": 0
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     },
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "pgmap": {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "pgs_by_state": [],
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_pgs": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_pools": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_objects": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "data_bytes": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "bytes_used": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "bytes_avail": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "bytes_total": 0
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     },
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "fsmap": {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "epoch": 1,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "by_rank": [],
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "up:standby": 0
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     },
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "mgrmap": {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "available": false,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "num_standbys": 0,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "modules": [
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:             "iostat",
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:             "nfs",
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:             "restful"
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         ],
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "services": {}
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     },
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "servicemap": {
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "epoch": 1,
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:         "services": {}
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     },
Dec 03 01:17:55 compute-0 peaceful_elion[193400]:     "progress_events": {}
Dec 03 01:17:55 compute-0 peaceful_elion[193400]: }
Dec 03 01:17:55 compute-0 systemd[1]: libpod-adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08.scope: Deactivated successfully.
Dec 03 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.918650492 +0000 UTC m=+0.770384348 container died adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 03 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'restful'
Dec 03 01:17:55 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:55.919+0000 7fca98514140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 03 01:17:55 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1824690462' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:17:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad-merged.mount: Deactivated successfully.
Dec 03 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.988800697 +0000 UTC m=+0.840534523 container remove adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:56 compute-0 systemd[1]: libpod-conmon-adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08.scope: Deactivated successfully.
Dec 03 01:17:56 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rgw'
Dec 03 01:17:57 compute-0 ceph-mgr[193109]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 03 01:17:57 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rook'
Dec 03 01:17:57 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:57.280+0000 7fca98514140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 03 01:17:57 compute-0 sshd-session[193226]: Connection closed by authenticating user root 193.32.162.157 port 37412 [preauth]
Dec 03 01:17:58 compute-0 podman[193437]: 2025-12-03 01:17:58.087367774 +0000 UTC m=+0.056315317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 03 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'selftest'
Dec 03 01:17:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:59.288+0000 7fca98514140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 03 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.378123267 +0000 UTC m=+1.347070760 container create 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:17:59 compute-0 systemd[1]: Started libpod-conmon-8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1.scope.
Dec 03 01:17:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 03 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'snap_schedule'
Dec 03 01:17:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:59.531+0000 7fca98514140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 03 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.532992462 +0000 UTC m=+1.501940005 container init 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.546855719 +0000 UTC m=+1.515803212 container start 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.555049864 +0000 UTC m=+1.523997407 container attach 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 01:17:59 compute-0 podman[158098]: time="2025-12-03T01:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23488 "" "Go-http-client/1.1"
Dec 03 01:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Dec 03 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 03 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'stats'
Dec 03 01:17:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:59.788+0000 7fca98514140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 03 01:18:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:18:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137026255' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'status'
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]: 
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]: {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "health": {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "status": "HEALTH_OK",
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "checks": {},
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "mutes": []
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     },
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "election_epoch": 5,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "quorum": [
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         0
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     ],
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "quorum_names": [
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "compute-0"
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     ],
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "quorum_age": 19,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "monmap": {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "epoch": 1,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "min_mon_release_name": "reef",
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_mons": 1
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     },
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "osdmap": {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "epoch": 1,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_osds": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_up_osds": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "osd_up_since": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_in_osds": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "osd_in_since": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_remapped_pgs": 0
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     },
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "pgmap": {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "pgs_by_state": [],
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_pgs": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_pools": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_objects": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "data_bytes": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "bytes_used": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "bytes_avail": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "bytes_total": 0
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     },
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "fsmap": {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "epoch": 1,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "by_rank": [],
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "up:standby": 0
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     },
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "mgrmap": {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "available": false,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "num_standbys": 0,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "modules": [
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:             "iostat",
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:             "nfs",
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:             "restful"
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         ],
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "services": {}
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     },
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "servicemap": {
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "epoch": 1,
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:         "services": {}
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     },
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]:     "progress_events": {}
Dec 03 01:18:00 compute-0 dazzling_hermann[193454]: }
Dec 03 01:18:00 compute-0 systemd[1]: libpod-8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1.scope: Deactivated successfully.
Dec 03 01:18:00 compute-0 podman[193437]: 2025-12-03 01:18:00.057008078 +0000 UTC m=+2.025955581 container died 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:18:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/137026255' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae-merged.mount: Deactivated successfully.
Dec 03 01:18:00 compute-0 podman[193437]: 2025-12-03 01:18:00.170093227 +0000 UTC m=+2.139040690 container remove 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:18:00 compute-0 systemd[1]: libpod-conmon-8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1.scope: Deactivated successfully.
Dec 03 01:18:00 compute-0 podman[193481]: 2025-12-03 01:18:00.227595823 +0000 UTC m=+0.119595546 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:18:00 compute-0 podman[193489]: 2025-12-03 01:18:00.235888601 +0000 UTC m=+0.112999597 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:18:00 compute-0 podman[193487]: 2025-12-03 01:18:00.239485224 +0000 UTC m=+0.140711021 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, distribution-scope=public)
Dec 03 01:18:00 compute-0 podman[193494]: 2025-12-03 01:18:00.26765699 +0000 UTC m=+0.141396600 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 03 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telegraf'
Dec 03 01:18:00 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:00.316+0000 7fca98514140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 03 01:18:00 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:00.546+0000 7fca98514140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 03 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 03 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telemetry'
Dec 03 01:18:00 compute-0 sshd-session[193577]: Invalid user frontend from 173.249.50.59 port 33078
Dec 03 01:18:01 compute-0 sshd-session[193577]: Received disconnect from 173.249.50.59 port 33078:11: Bye Bye [preauth]
Dec 03 01:18:01 compute-0 sshd-session[193577]: Disconnected from invalid user frontend 173.249.50.59 port 33078 [preauth]
Dec 03 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 03 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'test_orchestrator'
Dec 03 01:18:01 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:01.115+0000 7fca98514140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 03 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:18:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:18:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 03 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'volumes'
Dec 03 01:18:01 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:01.745+0000 7fca98514140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 03 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.333126737 +0000 UTC m=+0.118833144 container create 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.266220731 +0000 UTC m=+0.051927158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:02 compute-0 systemd[1]: Started libpod-conmon-59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562.scope.
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'zabbix'
Dec 03 01:18:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:02.425+0000 7fca98514140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 03 01:18:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.490886504 +0000 UTC m=+0.276592921 container init 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.521727008 +0000 UTC m=+0.307433385 container start 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.529274474 +0000 UTC m=+0.314980881 container attach 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 03 01:18:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:02.666+0000 7fca98514140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: ms_deliver_dispatch: unhandled message 0x562b3e82d1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rysove
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr handle_mgr_map Activating!
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr handle_mgr_map I am now activating
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.rysove(active, starting, since 0.0209008s)
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e1 all = 1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: balancer
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: crash
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] Starting
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Manager daemon compute-0.rysove is now available
Dec 03 01:18:02 compute-0 ceph-mon[192821]: Activating manager daemon compute-0.rysove
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mgrmap e2: compute-0.rysove(active, starting, since 0.0209008s)
Dec 03 01:18:02 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:18:02
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] No pools available
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: devicehealth
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: iostat
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Starting
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: nfs
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: orchestrator
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: pg_autoscaler
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: progress
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] Loading...
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] No stored events to load
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded [] historic events
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded OSDMap, ready.
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] recovery thread starting
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] starting setup
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: rbd_support
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: restful
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: status
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [restful INFO root] server_addr: :: server_port: 8003
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [restful WARNING root] server not running: no certificate configured
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: telemetry
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] PerfHandler: starting
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TaskHandler: starting
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] setup complete
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec 03 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: volumes
Dec 03 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001752721' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]: 
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]: {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "health": {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "status": "HEALTH_OK",
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "checks": {},
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "mutes": []
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     },
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "election_epoch": 5,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "quorum": [
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         0
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     ],
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "quorum_names": [
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "compute-0"
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     ],
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "quorum_age": 22,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "monmap": {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "epoch": 1,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "min_mon_release_name": "reef",
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_mons": 1
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     },
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "osdmap": {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "epoch": 1,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_osds": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_up_osds": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "osd_up_since": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_in_osds": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "osd_in_since": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_remapped_pgs": 0
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     },
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "pgmap": {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "pgs_by_state": [],
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_pgs": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_pools": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_objects": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "data_bytes": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "bytes_used": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "bytes_avail": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "bytes_total": 0
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     },
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "fsmap": {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "epoch": 1,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "by_rank": [],
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "up:standby": 0
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     },
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "mgrmap": {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "available": false,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "num_standbys": 0,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "modules": [
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:             "iostat",
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:             "nfs",
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:             "restful"
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         ],
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "services": {}
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     },
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "servicemap": {
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "epoch": 1,
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:         "services": {}
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     },
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]:     "progress_events": {}
Dec 03 01:18:02 compute-0 epic_chaplygin[193596]: }
Dec 03 01:18:02 compute-0 systemd[1]: libpod-59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562.scope: Deactivated successfully.
Dec 03 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.977117518 +0000 UTC m=+0.762823975 container died 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088-merged.mount: Deactivated successfully.
Dec 03 01:18:03 compute-0 podman[193580]: 2025-12-03 01:18:03.065913851 +0000 UTC m=+0.851620238 container remove 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:03 compute-0 systemd[1]: libpod-conmon-59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562.scope: Deactivated successfully.
Dec 03 01:18:03 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.rysove(active, since 1.04119s)
Dec 03 01:18:03 compute-0 ceph-mon[192821]: Manager daemon compute-0.rysove is now available
Dec 03 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec 03 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec 03 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec 03 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec 03 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec 03 01:18:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1001752721' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:18:03 compute-0 ceph-mon[192821]: mgrmap e3: compute-0.rysove(active, since 1.04119s)
Dec 03 01:18:04 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.rysove(active, since 2s)
Dec 03 01:18:04 compute-0 podman[193712]: 2025-12-03 01:18:04.893453574 +0000 UTC m=+0.137344184 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 03 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.200891618 +0000 UTC m=+0.097416001 container create 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:05 compute-0 systemd[1]: Started libpod-conmon-0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b.scope.
Dec 03 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.164251869 +0000 UTC m=+0.060776312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.351975304 +0000 UTC m=+0.248499697 container init 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.376099365 +0000 UTC m=+0.272623758 container start 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.382737625 +0000 UTC m=+0.279262018 container attach 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:18:05 compute-0 ceph-mon[192821]: mgrmap e4: compute-0.rysove(active, since 2s)
Dec 03 01:18:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 01:18:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1104220266' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]: 
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]: {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "health": {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "status": "HEALTH_OK",
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "checks": {},
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "mutes": []
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     },
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "election_epoch": 5,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "quorum": [
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         0
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     ],
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "quorum_names": [
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "compute-0"
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     ],
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "quorum_age": 25,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "monmap": {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "epoch": 1,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "min_mon_release_name": "reef",
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_mons": 1
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     },
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "osdmap": {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "epoch": 1,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_osds": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_up_osds": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "osd_up_since": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_in_osds": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "osd_in_since": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_remapped_pgs": 0
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     },
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "pgmap": {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "pgs_by_state": [],
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_pgs": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_pools": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_objects": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "data_bytes": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "bytes_used": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "bytes_avail": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "bytes_total": 0
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     },
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "fsmap": {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "epoch": 1,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "by_rank": [],
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "up:standby": 0
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     },
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "mgrmap": {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "available": true,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "num_standbys": 0,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "modules": [
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:             "iostat",
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:             "nfs",
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:             "restful"
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         ],
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "services": {}
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     },
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "servicemap": {
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "epoch": 1,
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "modified": "2025-12-03T01:17:36.090330+0000",
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:         "services": {}
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     },
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]:     "progress_events": {}
Dec 03 01:18:06 compute-0 intelligent_roentgen[193749]: }
Dec 03 01:18:06 compute-0 systemd[1]: libpod-0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b.scope: Deactivated successfully.
Dec 03 01:18:06 compute-0 podman[193775]: 2025-12-03 01:18:06.168821316 +0000 UTC m=+0.061166033 container died 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6-merged.mount: Deactivated successfully.
Dec 03 01:18:06 compute-0 podman[193775]: 2025-12-03 01:18:06.270904739 +0000 UTC m=+0.163249456 container remove 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:18:06 compute-0 systemd[1]: libpod-conmon-0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b.scope: Deactivated successfully.
Dec 03 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.422423948 +0000 UTC m=+0.092044007 container create 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.386273023 +0000 UTC m=+0.055893122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:06 compute-0 systemd[1]: Started libpod-conmon-544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53.scope.
Dec 03 01:18:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.589004949 +0000 UTC m=+0.258625058 container init 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.606819729 +0000 UTC m=+0.276439778 container start 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.61315838 +0000 UTC m=+0.282778489 container attach 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:18:06 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:06 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1104220266' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 01:18:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 03 01:18:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2333107779' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 03 01:18:07 compute-0 systemd[1]: libpod-544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53.scope: Deactivated successfully.
Dec 03 01:18:07 compute-0 podman[193790]: 2025-12-03 01:18:07.192322296 +0000 UTC m=+0.861942375 container died 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287-merged.mount: Deactivated successfully.
Dec 03 01:18:07 compute-0 podman[193790]: 2025-12-03 01:18:07.273732777 +0000 UTC m=+0.943352836 container remove 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:07 compute-0 systemd[1]: libpod-conmon-544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53.scope: Deactivated successfully.
Dec 03 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.380850404 +0000 UTC m=+0.072684002 container create 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.346084959 +0000 UTC m=+0.037918597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:07 compute-0 systemd[1]: Started libpod-conmon-266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41.scope.
Dec 03 01:18:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.541173615 +0000 UTC m=+0.233007203 container init 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.556946737 +0000 UTC m=+0.248780335 container start 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.564716729 +0000 UTC m=+0.256550357 container attach 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 01:18:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2333107779' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 03 01:18:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Dec 03 01:18:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:08 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 03 01:18:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  1: '-n'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  2: 'mgr.compute-0.rysove'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  3: '-f'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  4: '--setuser'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  5: 'ceph'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  6: '--setgroup'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  7: 'ceph'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  8: '--default-log-to-file=false'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  9: '--default-log-to-journald=true'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  exe_path /proc/self/exe
Dec 03 01:18:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.rysove(active, since 6s)
Dec 03 01:18:08 compute-0 systemd[1]: libpod-266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41.scope: Deactivated successfully.
Dec 03 01:18:08 compute-0 podman[193843]: 2025-12-03 01:18:08.863359107 +0000 UTC m=+1.555192705 container died 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d-merged.mount: Deactivated successfully.
Dec 03 01:18:08 compute-0 podman[193843]: 2025-12-03 01:18:08.929151081 +0000 UTC m=+1.620984639 container remove 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:18:08 compute-0 systemd[1]: libpod-conmon-266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41.scope: Deactivated successfully.
Dec 03 01:18:08 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: ignoring --setuser ceph since I am not root
Dec 03 01:18:08 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: ignoring --setgroup ceph since I am not root
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 03 01:18:08 compute-0 ceph-mgr[193109]: pidfile_write: ignore empty --pid-file
Dec 03 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.041846668 +0000 UTC m=+0.087893728 container create eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.010116859 +0000 UTC m=+0.056163969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:09 compute-0 systemd[1]: Started libpod-conmon-eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e.scope.
Dec 03 01:18:09 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'alerts'
Dec 03 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.196188178 +0000 UTC m=+0.242235248 container init eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.226261879 +0000 UTC m=+0.272308919 container start eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.232995292 +0000 UTC m=+0.279042352 container attach eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 03 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'balancer'
Dec 03 01:18:09 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:09.476+0000 7fac125fd140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 03 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 03 01:18:09 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:09.714+0000 7fac125fd140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 03 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'cephadm'
Dec 03 01:18:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 03 01:18:09 compute-0 ceph-mon[192821]: mgrmap e5: compute-0.rysove(active, since 6s)
Dec 03 01:18:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 03 01:18:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946545370' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 03 01:18:09 compute-0 tender_elion[193933]: {
Dec 03 01:18:09 compute-0 tender_elion[193933]:     "epoch": 5,
Dec 03 01:18:09 compute-0 tender_elion[193933]:     "available": true,
Dec 03 01:18:09 compute-0 tender_elion[193933]:     "active_name": "compute-0.rysove",
Dec 03 01:18:09 compute-0 tender_elion[193933]:     "num_standby": 0
Dec 03 01:18:09 compute-0 tender_elion[193933]: }
Dec 03 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.861262833 +0000 UTC m=+0.907309863 container died eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:18:09 compute-0 systemd[1]: libpod-eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e.scope: Deactivated successfully.
Dec 03 01:18:09 compute-0 podman[193957]: 2025-12-03 01:18:09.870070315 +0000 UTC m=+0.126429121 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, release=1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec 03 01:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d-merged.mount: Deactivated successfully.
Dec 03 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.926785259 +0000 UTC m=+0.972832289 container remove eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:09 compute-0 systemd[1]: libpod-conmon-eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e.scope: Deactivated successfully.
Dec 03 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.016701714 +0000 UTC m=+0.061092490 container create 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:18:10 compute-0 systemd[1]: Started libpod-conmon-02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8.scope.
Dec 03 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:09.993646854 +0000 UTC m=+0.038037730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.145247315 +0000 UTC m=+0.189638131 container init 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.165490865 +0000 UTC m=+0.209881661 container start 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.172123265 +0000 UTC m=+0.216514061 container attach 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:10 compute-0 sshd-session[193451]: Connection closed by authenticating user root 193.32.162.157 port 48744 [preauth]
Dec 03 01:18:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/946545370' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 03 01:18:11 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'crash'
Dec 03 01:18:11 compute-0 ceph-mgr[193109]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 03 01:18:11 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'dashboard'
Dec 03 01:18:11 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:11.947+0000 7fac125fd140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 03 01:18:13 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'devicehealth'
Dec 03 01:18:13 compute-0 ceph-mgr[193109]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 03 01:18:13 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:13.586+0000 7fac125fd140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 03 01:18:13 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'diskprediction_local'
Dec 03 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 03 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 03 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]:   from numpy import show_config as show_numpy_config
Dec 03 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 03 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:14.092+0000 7fac125fd140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 03 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'influx'
Dec 03 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 03 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:14.317+0000 7fac125fd140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 03 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'insights'
Dec 03 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'iostat'
Dec 03 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 03 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:14.783+0000 7fac125fd140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 03 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'k8sevents'
Dec 03 01:18:14 compute-0 podman[194041]: 2025-12-03 01:18:14.816383218 +0000 UTC m=+0.113896152 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:18:16 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'localpool'
Dec 03 01:18:16 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mds_autoscaler'
Dec 03 01:18:17 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mirroring'
Dec 03 01:18:17 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'nfs'
Dec 03 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 03 01:18:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:18.244+0000 7fac125fd140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 03 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'orchestrator'
Dec 03 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 03 01:18:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:18.889+0000 7fac125fd140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 03 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_perf_query'
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.147+0000 7fac125fd140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_support'
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.375+0000 7fac125fd140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'pg_autoscaler'
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.640+0000 7fac125fd140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'progress'
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.866+0000 7fac125fd140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 03 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'prometheus'
Dec 03 01:18:20 compute-0 ceph-mgr[193109]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 03 01:18:20 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:20.825+0000 7fac125fd140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 03 01:18:20 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rbd_support'
Dec 03 01:18:21 compute-0 ceph-mgr[193109]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 03 01:18:21 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:21.110+0000 7fac125fd140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 03 01:18:21 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'restful'
Dec 03 01:18:21 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rgw'
Dec 03 01:18:22 compute-0 ceph-mgr[193109]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 03 01:18:22 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:22.481+0000 7fac125fd140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 03 01:18:22 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rook'
Dec 03 01:18:22 compute-0 sshd-session[194028]: Connection closed by authenticating user root 193.32.162.157 port 33634 [preauth]
Dec 03 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 03 01:18:24 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:24.589+0000 7fac125fd140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 03 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'selftest'
Dec 03 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 03 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'snap_schedule'
Dec 03 01:18:24 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:24.870+0000 7fac125fd140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 03 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 03 01:18:25 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:25.138+0000 7fac125fd140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 03 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'stats'
Dec 03 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'status'
Dec 03 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 03 01:18:25 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:25.636+0000 7fac125fd140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 03 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telegraf'
Dec 03 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 03 01:18:25 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:25.874+0000 7fac125fd140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 03 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telemetry'
Dec 03 01:18:26 compute-0 ceph-mgr[193109]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 03 01:18:26 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:26.473+0000 7fac125fd140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 03 01:18:26 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'test_orchestrator'
Dec 03 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 03 01:18:27 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:27.132+0000 7fac125fd140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 03 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'volumes'
Dec 03 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 03 01:18:27 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:27.864+0000 7fac125fd140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 03 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'zabbix'
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 03 01:18:28 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:28.102+0000 7fac125fd140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: ms_deliver_dispatch: unhandled message 0x556670a1f1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rysove restarted
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rysove
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.rysove(active, starting, since 0.029257s)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr handle_mgr_map Activating!
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr handle_mgr_map I am now activating
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e1 all = 1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: Active manager daemon compute-0.rysove restarted
Dec 03 01:18:28 compute-0 ceph-mon[192821]: Activating manager daemon compute-0.rysove
Dec 03 01:18:28 compute-0 ceph-mon[192821]: osdmap e2: 0 total, 0 up, 0 in
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mgrmap e6: compute-0.rysove(active, starting, since 0.029257s)
Dec 03 01:18:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: balancer
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Starting
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:18:28
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] No pools available
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Manager daemon compute-0.rysove is now available
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: cephadm
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: crash
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: devicehealth
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: iostat
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: nfs
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: orchestrator
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: pg_autoscaler
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Starting
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: progress
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] Loading...
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] No stored events to load
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded [] historic events
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded OSDMap, ready.
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] recovery thread starting
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] starting setup
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: rbd_support
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: restful
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [restful INFO root] server_addr: :: server_port: 8003
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [restful WARNING root] server not running: no certificate configured
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: status
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: telemetry
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] PerfHandler: starting
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TaskHandler: starting
Dec 03 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"} v 0) v1
Dec 03 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] setup complete
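
On startup the rbd_support handlers reload their schedules and clear the per-daemon override keys, which the mon audits as "config rm". Issued by hand with python-rados it would look like this (same connection assumptions as the earlier sketch):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({
        "prefix": "config rm",
        "who": "mgr",
        "name": "mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule",
    })
    ret, outbuf, errs = cluster.mon_command(cmd, b"")  # audited as [INF] above
    cluster.shutdown()
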
Dec 03 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: volumes
Dec 03 01:18:29 compute-0 ceph-mon[192821]: Manager daemon compute-0.rysove is now available
Dec 03 01:18:29 compute-0 ceph-mon[192821]: Found migration_current of "None". Setting to last migration.
Dec 03 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec 03 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec 03 01:18:29 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.rysove(active, since 1.12063s)
Dec 03 01:18:29 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 03 01:18:29 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 03 01:18:29 compute-0 optimistic_kilby[194004]: {
Dec 03 01:18:29 compute-0 optimistic_kilby[194004]:     "mgrmap_epoch": 7,
Dec 03 01:18:29 compute-0 optimistic_kilby[194004]:     "initialized": true
Dec 03 01:18:29 compute-0 optimistic_kilby[194004]: }
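
The one-shot container (optimistic_kilby) prints the result of the mgr_status command dispatched just above; the bootstrap appears to poll this until "initialized" is true. A hedged sketch of such a wait loop (the loop itself is illustrative, not cephadm's own code; it assumes the ceph CLI and admin keyring on the host):

    import json
    import subprocess
    import time

    def wait_mgr_initialized(timeout: float = 60.0) -> bool:
        # `ceph mgr_status` returns JSON like the container output above.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(["ceph", "mgr_status"],
                                 capture_output=True, text=True, check=True)
            if json.loads(out.stdout).get("initialized"):
                return True
            time.sleep(2)
        return False
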
Dec 03 01:18:29 compute-0 podman[193988]: 2025-12-03 01:18:29.287849614 +0000 UTC m=+19.332240410 container died 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:29 compute-0 systemd[1]: libpod-02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8.scope: Deactivated successfully.
Dec 03 01:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44-merged.mount: Deactivated successfully.
Dec 03 01:18:29 compute-0 podman[193988]: 2025-12-03 01:18:29.375589856 +0000 UTC m=+19.419980662 container remove 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:29 compute-0 systemd[1]: libpod-conmon-02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8.scope: Deactivated successfully.
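
Each cephadm step in this log runs as a short-lived container: create, init, start, attach, then died and remove within about a second, with systemd reaping the libpod scopes. An equivalent hand-run one-shot, using the same image as the log (the command here is just an example):

    import subprocess

    # --rm removes the container on exit, matching the died/remove pairs above.
    subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", "ceph", "--version"],
        check=True,
    )
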
Dec 03 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.499305568 +0000 UTC m=+0.081548555 container create abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.467679492 +0000 UTC m=+0.049922549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:29 compute-0 systemd[1]: Started libpod-conmon-abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a.scope.
Dec 03 01:18:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.720051509 +0000 UTC m=+0.302294556 container init abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.737268212 +0000 UTC m=+0.319511209 container start abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:18:29 compute-0 podman[158098]: time="2025-12-03T01:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.750413369 +0000 UTC m=+0.332656386 container attach abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23492 "" "Go-http-client/1.1"
Dec 03 01:18:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Dec 03 01:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Dec 03 01:18:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Dec 03 01:18:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
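
The cephadm agent's root certificate and key are stored through config-key set mon_commands; the audit log redacts the values. Done by hand, the value is usually supplied as an input file (file names here are hypothetical):

    import subprocess

    # -i feeds the file as the command's input buffer instead of an argument.
    subprocess.run(["ceph", "config-key", "set",
                    "mgr/cephadm/cephadm_agent/root/cert", "-i", "root.crt"],
                   check=True)
    subprocess.run(["ceph", "config-key", "set",
                    "mgr/cephadm/cephadm_agent/root/key", "-i", "root.key"],
                   check=True)
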
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923277 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:18:30 compute-0 ceph-mon[192821]: mgrmap e7: compute-0.rysove(active, since 1.12063s)
Dec 03 01:18:30 compute-0 ceph-mon[192821]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 03 01:18:30 compute-0 ceph-mon[192821]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 03 01:18:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Dec 03 01:18:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 03 01:18:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
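
"orch set backend" points the orchestrator CLI at the cephadm module, which the mon persists as the mgr/orchestrator/orchestrator option; the module has to be enabled before it can be selected. From any client with the ceph CLI and admin keyring:

    import subprocess

    subprocess.run(["ceph", "mgr", "module", "enable", "cephadm"], check=True)
    subprocess.run(["ceph", "orch", "set", "backend", "cephadm"], check=True)
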
Dec 03 01:18:30 compute-0 systemd[1]: libpod-abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a.scope: Deactivated successfully.
Dec 03 01:18:30 compute-0 podman[194194]: 2025-12-03 01:18:30.388416469 +0000 UTC m=+0.970659456 container died abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23-merged.mount: Deactivated successfully.
Dec 03 01:18:30 compute-0 podman[194194]: 2025-12-03 01:18:30.489198845 +0000 UTC m=+1.071441812 container remove abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:30 compute-0 systemd[1]: libpod-conmon-abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a.scope: Deactivated successfully.
Dec 03 01:18:30 compute-0 podman[194246]: 2025-12-03 01:18:30.550679345 +0000 UTC m=+0.103354680 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.577514184 +0000 UTC m=+0.059383982 container create bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 01:18:30 compute-0 podman[194250]: 2025-12-03 01:18:30.581712744 +0000 UTC m=+0.120947714 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 01:18:30 compute-0 podman[194238]: 2025-12-03 01:18:30.583576577 +0000 UTC m=+0.147345580 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:18:30 compute-0 podman[194245]: 2025-12-03 01:18:30.58961115 +0000 UTC m=+0.148587526 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Bus STARTING
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Bus STARTING
Dec 03 01:18:30 compute-0 systemd[1]: Started libpod-conmon-bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf.scope.
Dec 03 01:18:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.555291407 +0000 UTC m=+0.037161225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.673207184 +0000 UTC m=+0.155077002 container init bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.691818117 +0000 UTC m=+0.173687905 container start bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.695675758 +0000 UTC m=+0.177545556 container attach bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Serving on https://192.168.122.100:7150
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Serving on https://192.168.122.100:7150
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Client ('192.168.122.100', 37812) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Client ('192.168.122.100', 37812) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 03 01:18:30 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rysove(active, since 2s)
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Serving on http://192.168.122.100:8765
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Serving on http://192.168.122.100:8765
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Bus STARTED
Dec 03 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Bus STARTED
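
cephadm's CherryPy bus brings up two endpoints here: the HTTPS agent endpoint on 7150 and the plain-HTTP service-discovery endpoint on 8765. The "Client ... lost" entry is what the server logs when a peer opens the TCP connection and closes it before completing the TLS handshake, possibly cephadm checking its own port. Reproducing that benign entry is a one-liner (address and port as in the log):

    import socket

    # Connect and close without speaking TLS; the server logs an EOF
    # during the handshake, as seen above.
    socket.create_connection(("192.168.122.100", 7150), timeout=5).close()
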
Dec 03 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 03 01:18:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:31 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Dec 03 01:18:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:31 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_user
Dec 03 01:18:31 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 03 01:18:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Dec 03 01:18:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:31 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_config
Dec 03 01:18:31 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 03 01:18:31 compute-0 ceph-mgr[193109]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 03 01:18:31 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
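
cephadm stores its SSH settings under mgr/cephadm config-keys; set-user switches the connection account from root to a sudo-capable user, which is why the log notes "sudo will be used". The documented CLI equivalents (the ssh_config file name is hypothetical):

    import subprocess

    subprocess.run(["ceph", "cephadm", "set-user", "ceph-admin"], check=True)
    subprocess.run(["ceph", "cephadm", "set-ssh-config", "-i", "ssh_config"],
                   check=True)
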
Dec 03 01:18:31 compute-0 competent_bohr[194359]: ssh user set to ceph-admin. sudo will be used
Dec 03 01:18:31 compute-0 systemd[1]: libpod-bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf.scope: Deactivated successfully.
Dec 03 01:18:31 compute-0 podman[194286]: 2025-12-03 01:18:31.296863463 +0000 UTC m=+0.778733291 container died bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f-merged.mount: Deactivated successfully.
Dec 03 01:18:31 compute-0 ceph-mon[192821]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Bus STARTING
Dec 03 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Serving on https://192.168.122.100:7150
Dec 03 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Client ('192.168.122.100', 37812) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 03 01:18:31 compute-0 ceph-mon[192821]: mgrmap e8: compute-0.rysove(active, since 2s)
Dec 03 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Serving on http://192.168.122.100:8765
Dec 03 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Bus STARTED
Dec 03 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:31 compute-0 podman[194286]: 2025-12-03 01:18:31.379841019 +0000 UTC m=+0.861710827 container remove bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:18:31 compute-0 systemd[1]: libpod-conmon-bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf.scope: Deactivated successfully.
Dec 03 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:18:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:18:31 compute-0 openstack_network_exporter[160250]: 
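
These exporter errors are expected on a compute node: ovn-northd runs on the control plane, so no control socket for it exists in the OVN run directory, and no userspace datapath has been created yet for the PMD queries. A quick check for the sockets the exporter is looking for (host paths taken from the bind mounts in the ovn_controller config above):

    import glob

    # ovn-controller's socket should exist here; ovn-northd's should not.
    print(glob.glob("/var/lib/openvswitch/ovn/*.ctl"))
    print(glob.glob("/var/run/openvswitch/*.ctl"))
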
Dec 03 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.511480409 +0000 UTC m=+0.089748341 container create a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.481224893 +0000 UTC m=+0.059492865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:31 compute-0 systemd[1]: Started libpod-conmon-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope.
Dec 03 01:18:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.660035383 +0000 UTC m=+0.238303365 container init a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.688863859 +0000 UTC m=+0.267131811 container start a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.696074625 +0000 UTC m=+0.274342567 container attach a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:32 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:32 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Dec 03 01:18:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:32 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 03 01:18:32 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 03 01:18:32 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh private key
Dec 03 01:18:32 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 03 01:18:32 compute-0 systemd[1]: libpod-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope: Deactivated successfully.
Dec 03 01:18:32 compute-0 conmon[194422]: conmon a64e443648e2f219dff4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope/container/memory.events
Dec 03 01:18:32 compute-0 podman[194406]: 2025-12-03 01:18:32.347033036 +0000 UTC m=+0.925300988 container died a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:32 compute-0 ceph-mon[192821]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:32 compute-0 ceph-mon[192821]: Set ssh ssh_user
Dec 03 01:18:32 compute-0 ceph-mon[192821]: Set ssh ssh_config
Dec 03 01:18:32 compute-0 ceph-mon[192821]: ssh user set to ceph-admin. sudo will be used
Dec 03 01:18:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3-merged.mount: Deactivated successfully.
Dec 03 01:18:32 compute-0 podman[194406]: 2025-12-03 01:18:32.440229495 +0000 UTC m=+1.018497417 container remove a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:18:32 compute-0 systemd[1]: libpod-conmon-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope: Deactivated successfully.
Dec 03 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.562787295 +0000 UTC m=+0.082144834 container create 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.528966516 +0000 UTC m=+0.048324135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:32 compute-0 systemd[1]: Started libpod-conmon-43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7.scope.
Dec 03 01:18:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.713753908 +0000 UTC m=+0.233111507 container init 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.743880901 +0000 UTC m=+0.263238460 container start 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.751260802 +0000 UTC m=+0.270618381 container attach 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec 03 01:18:33 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Dec 03 01:18:33 compute-0 ceph-mon[192821]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:33 compute-0 ceph-mon[192821]: Set ssh ssh_identity_key
Dec 03 01:18:33 compute-0 ceph-mon[192821]: Set ssh private key
Dec 03 01:18:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:33 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 03 01:18:33 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
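
The audited steps above land the cluster's SSH identity: first the private key (ssh_identity_key), then the public half (ssh_identity_pub); the /tmp/cephadm-ssh-key and /tmp/cephadm-ssh-key.pub bind mounts in the helper containers show the same material in flight. The documented CLI flow, with a hypothetical key path:

    import subprocess

    subprocess.run(["ssh-keygen", "-t", "rsa", "-f", "cephadm-ssh-key", "-N", ""],
                   check=True)
    subprocess.run(["ceph", "cephadm", "set-priv-key", "-i", "cephadm-ssh-key"],
                   check=True)
    subprocess.run(["ceph", "cephadm", "set-pub-key", "-i", "cephadm-ssh-key.pub"],
                   check=True)
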
Dec 03 01:18:33 compute-0 systemd[1]: libpod-43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7.scope: Deactivated successfully.
Dec 03 01:18:33 compute-0 podman[194501]: 2025-12-03 01:18:33.540190293 +0000 UTC m=+0.061667017 container died 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6-merged.mount: Deactivated successfully.
Dec 03 01:18:33 compute-0 podman[194501]: 2025-12-03 01:18:33.615800148 +0000 UTC m=+0.137276872 container remove 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:18:33 compute-0 systemd[1]: libpod-conmon-43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7.scope: Deactivated successfully.
Dec 03 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.757088614 +0000 UTC m=+0.084721367 container create c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.722768631 +0000 UTC m=+0.050401424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:33 compute-0 systemd[1]: Started libpod-conmon-c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943.scope.
Dec 03 01:18:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.915064898 +0000 UTC m=+0.242697711 container init c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.935122152 +0000 UTC m=+0.262754865 container start c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.940595419 +0000 UTC m=+0.268228172 container attach c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
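
The create/init/start/attach events above (and the died/remove events that follow) are the footprint of a throwaway quay.io/ceph/ceph:v18 container being run for a single CLI command. A minimal sketch of the same pattern, assuming podman is on PATH and the image is pullable:

    # Run one command in a disposable ceph container; --rm makes podman
    # emit the same died/remove events seen in this journal.
    import subprocess

    def ceph_oneshot(*args: str) -> str:
        cmd = ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", *args]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    print(ceph_oneshot("ceph", "--version"))
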
Dec 03 01:18:34 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:34 compute-0 ceph-mon[192821]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:34 compute-0 ceph-mon[192821]: Set ssh ssh_identity_pub
Dec 03 01:18:34 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:34 compute-0 elastic_carver[194530]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ecfe2fcU5kzXrwNXMXzjiRxUzwK8oF8fdrLszAVoAy+DUtmImeC+47peZMOsTTrqix8ydvHWOXKAlmxJmmvnXn+I3jRZZaTkPCTt4Je2ClFXcOH2FtcM0sjmtzxWSN38IOGsugf5cTRq79WQuzzM3ONhanjwmk4bl0EUtIJaiEP37pO4216tx58/XIIJwdYM/By5PWhy7thuZYyCCVVTxGWigCOmE/1ndCn6IIkeKZJLbfQBXCBIi/S/1QG1DRN1zAsfKeIVL9RugQWFAWthIxzdjaRLHPPOfGJSFgZMGwLtw7GvWtAbJIoH8XL43xiyd7KOH6+oTkR4y/2JneoF4m96prdsYJUYwN0qbM12W1iKfWEIPfDL9nFQNFiBStP+86/I+GLsan1jvhHtVsQ59pMfXK6tmZe8RK4CAEMthH/lzI9zlVrNfCj0pEiR1FXVASJ25np6IMLEZbGsc1njBZ6fZ3iaee6MI6jLWt/lSPX7gLLn4Asq2x4P7/SxUWE= zuul@controller
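
The ssh-rsa line is the orchestrator's SSH identity being echoed back by the `cephadm get-pub-key` dispatched just above, after the preceding `set-pub-key` stored it; this is the key presented for the ceph-admin SSH sessions that follow. Reading it back with the ceph CLI, as a sketch (assumes a reachable cluster and an admin keyring):

    # Fetch the SSH public key the cephadm orchestrator uses for hosts,
    # mirroring the get-pub-key dispatch in the audit log above.
    import subprocess

    pub_key = subprocess.run(
        ["ceph", "cephadm", "get-pub-key"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(pub_key)  # ends with the key comment, here "zuul@controller"
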
Dec 03 01:18:34 compute-0 systemd[1]: libpod-c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943.scope: Deactivated successfully.
Dec 03 01:18:34 compute-0 podman[194515]: 2025-12-03 01:18:34.488759227 +0000 UTC m=+0.816391940 container died c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba-merged.mount: Deactivated successfully.
Dec 03 01:18:34 compute-0 podman[194515]: 2025-12-03 01:18:34.549407173 +0000 UTC m=+0.877039896 container remove c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:18:34 compute-0 sshd-session[194066]: Connection closed by authenticating user root 193.32.162.157 port 49766 [preauth]
Dec 03 01:18:34 compute-0 systemd[1]: libpod-conmon-c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943.scope: Deactivated successfully.
Dec 03 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.682211086 +0000 UTC m=+0.091746618 container create e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.645844955 +0000 UTC m=+0.055380557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:34 compute-0 systemd[1]: Started libpod-conmon-e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509.scope.
Dec 03 01:18:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.866763061 +0000 UTC m=+0.276298613 container init e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.888472873 +0000 UTC m=+0.298008395 container start e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.895780442 +0000 UTC m=+0.305315994 container attach e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:18:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053048 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:18:35 compute-0 ceph-mon[192821]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:35 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
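
Here the admin client dispatches `orch host add` for compute-0 at 192.168.122.100; the SSH logins and sudo probes that follow are the mgr acting on that request. The equivalent call from a script (a sketch, same cluster assumptions as above):

    # Ask the orchestrator to manage a host, as dispatched above.
    import subprocess

    subprocess.run(
        ["ceph", "orch", "host", "add", "compute-0", "192.168.122.100"],
        check=True,
    )
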
Dec 03 01:18:35 compute-0 sshd-session[194609]: Accepted publickey for ceph-admin from 192.168.122.100 port 52202 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:35 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 03 01:18:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 03 01:18:35 compute-0 systemd-logind[800]: New session 27 of user ceph-admin.
Dec 03 01:18:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 03 01:18:35 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 03 01:18:35 compute-0 systemd[194622]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:35 compute-0 podman[194611]: 2025-12-03 01:18:35.857114701 +0000 UTC m=+0.140638328 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
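
Interleaved with the bootstrap, podman's timer-driven health checks keep reporting health_status=healthy for the EDPM containers (ceilometer_agent_ipmi here; kepler and podman_exporter appear further down). The same probe can be triggered by hand; a sketch:

    # Run a container's configured healthcheck once; exit code 0
    # corresponds to the health_status=healthy events in this journal.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"])
    print("healthy" if result.returncode == 0 else "unhealthy")
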
Dec 03 01:18:35 compute-0 sshd-session[194634]: Accepted publickey for ceph-admin from 192.168.122.100 port 52216 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:35 compute-0 systemd-logind[800]: New session 29 of user ceph-admin.
Dec 03 01:18:36 compute-0 systemd[194622]: Queued start job for default target Main User Target.
Dec 03 01:18:36 compute-0 systemd[194622]: Created slice User Application Slice.
Dec 03 01:18:36 compute-0 systemd[194622]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 03 01:18:36 compute-0 systemd[194622]: Started Daily Cleanup of User's Temporary Directories.
Dec 03 01:18:36 compute-0 systemd[194622]: Reached target Paths.
Dec 03 01:18:36 compute-0 systemd[194622]: Reached target Timers.
Dec 03 01:18:36 compute-0 systemd[194622]: Starting D-Bus User Message Bus Socket...
Dec 03 01:18:36 compute-0 systemd[194622]: Starting Create User's Volatile Files and Directories...
Dec 03 01:18:36 compute-0 systemd[194622]: Finished Create User's Volatile Files and Directories.
Dec 03 01:18:36 compute-0 systemd[194622]: Listening on D-Bus User Message Bus Socket.
Dec 03 01:18:36 compute-0 systemd[194622]: Reached target Sockets.
Dec 03 01:18:36 compute-0 systemd[194622]: Reached target Basic System.
Dec 03 01:18:36 compute-0 systemd[194622]: Reached target Main User Target.
Dec 03 01:18:36 compute-0 systemd[194622]: Startup finished in 222ms.
Dec 03 01:18:36 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 03 01:18:36 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 03 01:18:36 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 03 01:18:36 compute-0 sshd-session[194609]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:36 compute-0 sshd-session[194634]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:36 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:36 compute-0 sudo[194649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:36 compute-0 sudo[194649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:36 compute-0 sudo[194649]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:36 compute-0 ceph-mon[192821]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:36 compute-0 sudo[194674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:18:36 compute-0 sudo[194674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:36 compute-0 sudo[194674]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:36 compute-0 sshd-session[194699]: Accepted publickey for ceph-admin from 192.168.122.100 port 52230 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:36 compute-0 systemd-logind[800]: New session 30 of user ceph-admin.
Dec 03 01:18:36 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 03 01:18:36 compute-0 sshd-session[194699]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:36 compute-0 sudo[194703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:36 compute-0 sudo[194703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:36 compute-0 sudo[194703]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:37 compute-0 sudo[194729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 03 01:18:37 compute-0 sudo[194729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:37 compute-0 sudo[194729]: pam_unix(sudo:session): session closed for user root
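
The pattern just above repeats for each managed SSH hop: `sudo /bin/true` proves passwordless sudo works, `which python3` locates an interpreter, and then the host-local cephadm copy runs `check-host --expect-hostname compute-0`. The same preflight can be run manually; a sketch, where the /usr/sbin/cephadm path is an assumption (the log uses the long mgr-deployed cephadm.<digest> copy instead):

    # Preflight a host the way the orchestrator does above.
    import subprocess

    subprocess.run(
        ["sudo", "python3", "/usr/sbin/cephadm",  # path is an assumption
         "check-host", "--expect-hostname", "compute-0"],
        check=True,
    )
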
Dec 03 01:18:37 compute-0 sshd-session[194754]: Accepted publickey for ceph-admin from 192.168.122.100 port 52238 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:37 compute-0 systemd-logind[800]: New session 31 of user ceph-admin.
Dec 03 01:18:37 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 03 01:18:37 compute-0 sshd-session[194754]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:37 compute-0 sudo[194758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:37 compute-0 sudo[194758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:37 compute-0 sudo[194758]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:37 compute-0 sudo[194783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 03 01:18:37 compute-0 sudo[194783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:37 compute-0 sudo[194783]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:37 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 03 01:18:37 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 03 01:18:37 compute-0 sshd-session[194808]: Accepted publickey for ceph-admin from 192.168.122.100 port 52250 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:37 compute-0 systemd-logind[800]: New session 32 of user ceph-admin.
Dec 03 01:18:38 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 03 01:18:38 compute-0 sshd-session[194808]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:38 compute-0 ceph-mon[192821]: Deploying cephadm binary to compute-0
Dec 03 01:18:38 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:38 compute-0 sudo[194812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:38 compute-0 sudo[194812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:38 compute-0 sudo[194812]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:38 compute-0 sudo[194837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:18:38 compute-0 sudo[194837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:38 compute-0 sudo[194837]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:38 compute-0 sshd-session[194862]: Accepted publickey for ceph-admin from 192.168.122.100 port 52266 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:38 compute-0 systemd-logind[800]: New session 33 of user ceph-admin.
Dec 03 01:18:38 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 03 01:18:38 compute-0 sshd-session[194862]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:38 compute-0 sudo[194866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:38 compute-0 sudo[194866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:38 compute-0 sudo[194866]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:38 compute-0 sudo[194891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:18:38 compute-0 sudo[194891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:38 compute-0 sudo[194891]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:39 compute-0 sshd-session[194916]: Accepted publickey for ceph-admin from 192.168.122.100 port 52268 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:39 compute-0 systemd-logind[800]: New session 34 of user ceph-admin.
Dec 03 01:18:39 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 03 01:18:39 compute-0 sshd-session[194916]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:39 compute-0 sudo[194920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:39 compute-0 sudo[194920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:39 compute-0 sudo[194920]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:39 compute-0 sudo[194945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 03 01:18:39 compute-0 sudo[194945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:39 compute-0 sudo[194945]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:39 compute-0 sshd-session[194970]: Accepted publickey for ceph-admin from 192.168.122.100 port 52270 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:39 compute-0 systemd-logind[800]: New session 35 of user ceph-admin.
Dec 03 01:18:39 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec 03 01:18:39 compute-0 sshd-session[194970]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:40 compute-0 podman[194972]: 2025-12-03 01:18:40.089000877 +0000 UTC m=+0.141837953 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release-0.7.12=)
Dec 03 01:18:40 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:40 compute-0 sudo[194991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:40 compute-0 sudo[194991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:40 compute-0 sudo[194991]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:18:40 compute-0 sudo[195020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:18:40 compute-0 sudo[195020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:40 compute-0 sudo[195020]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:40 compute-0 sshd-session[195045]: Accepted publickey for ceph-admin from 192.168.122.100 port 52278 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:40 compute-0 systemd-logind[800]: New session 36 of user ceph-admin.
Dec 03 01:18:40 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Dec 03 01:18:40 compute-0 sshd-session[195045]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:40 compute-0 sudo[195049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:40 compute-0 sudo[195049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:40 compute-0 sudo[195049]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:40 compute-0 sudo[195074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 03 01:18:40 compute-0 sudo[195074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:40 compute-0 sudo[195074]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:41 compute-0 sshd-session[195099]: Accepted publickey for ceph-admin from 192.168.122.100 port 52280 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:41 compute-0 systemd-logind[800]: New session 37 of user ceph-admin.
Dec 03 01:18:41 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Dec 03 01:18:41 compute-0 sshd-session[195099]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:41 compute-0 sshd-session[195126]: Accepted publickey for ceph-admin from 192.168.122.100 port 52284 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:41 compute-0 systemd-logind[800]: New session 38 of user ceph-admin.
Dec 03 01:18:41 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Dec 03 01:18:41 compute-0 sshd-session[195126]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:42 compute-0 sudo[195130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:42 compute-0 sudo[195130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:42 compute-0 sudo[195130]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:42 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:42 compute-0 sudo[195155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 03 01:18:42 compute-0 sudo[195155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:42 compute-0 sudo[195155]: pam_unix(sudo:session): session closed for user root
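
The sudo sequence from `mkdir -p` through `mv` is a staged install of the cephadm binary: write to a `.new` path under /tmp, chown, chmod 644, then rename into /var/lib/ceph/<fsid>/ so the destination never holds a half-written file. The same idiom in the standard library, as a minimal sketch:

    # Staged write-then-rename, matching the touch/chown/chmod/mv
    # sequence in the sudo log above.
    import os

    def install_file(data: bytes, dest: str, mode: int = 0o644) -> None:
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        tmp = dest + ".new"
        with open(tmp, "wb") as f:
            f.write(data)
        os.chmod(tmp, mode)
        os.replace(tmp, dest)  # atomic when tmp and dest share a filesystem
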
Dec 03 01:18:42 compute-0 sshd-session[195180]: Accepted publickey for ceph-admin from 192.168.122.100 port 52294 ssh2: RSA SHA256:ElThYv4dSbR6hrHZ62VCcLS1SbZiTt9mq8RDg3WmxMM
Dec 03 01:18:42 compute-0 systemd-logind[800]: New session 39 of user ceph-admin.
Dec 03 01:18:42 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Dec 03 01:18:42 compute-0 sshd-session[195180]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 03 01:18:42 compute-0 sudo[195184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:42 compute-0 sudo[195184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:42 compute-0 sudo[195184]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:42 compute-0 sudo[195209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 03 01:18:42 compute-0 sudo[195209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:43 compute-0 sudo[195209]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 03 01:18:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:43 compute-0 ceph-mgr[193109]: [cephadm INFO root] Added host compute-0
Dec 03 01:18:43 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 03 01:18:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 03 01:18:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:43 compute-0 wonderful_lewin[194582]: Added host 'compute-0' with addr '192.168.122.100'
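
With "Added host" confirmed both by the mgr and by the wonderful_lewin client container, the inventory can be checked from any admin node; a sketch:

    # List orchestrator-managed hosts and confirm compute-0 is present.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "orch", "host", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    assert any(h["hostname"] == "compute-0" for h in json.loads(out))
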
Dec 03 01:18:43 compute-0 systemd[1]: libpod-e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509.scope: Deactivated successfully.
Dec 03 01:18:43 compute-0 podman[195266]: 2025-12-03 01:18:43.409484943 +0000 UTC m=+0.031727660 container died e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:43 compute-0 sudo[195254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:43 compute-0 sudo[195254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:43 compute-0 sudo[195254]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6-merged.mount: Deactivated successfully.
Dec 03 01:18:43 compute-0 podman[195266]: 2025-12-03 01:18:43.490825042 +0000 UTC m=+0.113067789 container remove e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:18:43 compute-0 systemd[1]: libpod-conmon-e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509.scope: Deactivated successfully.
Dec 03 01:18:43 compute-0 sudo[195292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:18:43 compute-0 sudo[195292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:43 compute-0 sudo[195292]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.600999827 +0000 UTC m=+0.061225254 container create 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:18:43 compute-0 systemd[1]: Started libpod-conmon-31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955.scope.
Dec 03 01:18:43 compute-0 sudo[195325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:43 compute-0 sudo[195325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.578785391 +0000 UTC m=+0.039010858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:43 compute-0 sudo[195325]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.767509435 +0000 UTC m=+0.227734902 container init 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:18:43 compute-0 sudo[195360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Dec 03 01:18:43 compute-0 sudo[195360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.786599741 +0000 UTC m=+0.246825198 container start 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.793248631 +0000 UTC m=+0.253474088 container attach 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:18:43 compute-0 sshd-session[194579]: Invalid user postgres from 193.32.162.157 port 45126
Dec 03 01:18:44 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.194644116 +0000 UTC m=+0.100696235 container create 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.156402801 +0000 UTC m=+0.062454970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:44 compute-0 systemd[1]: Started libpod-conmon-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope.
Dec 03 01:18:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:44 compute-0 ceph-mon[192821]: Added host compute-0
Dec 03 01:18:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:18:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.335043726 +0000 UTC m=+0.241095835 container init 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.349754068 +0000 UTC m=+0.255806167 container start 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.355146292 +0000 UTC m=+0.261198391 container attach 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:18:44 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:44 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 03 01:18:44 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 03 01:18:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 03 01:18:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:44 compute-0 agitated_black[195356]: Scheduled mon update...
Dec 03 01:18:44 compute-0 systemd[1]: libpod-31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955.scope: Deactivated successfully.
Dec 03 01:18:44 compute-0 podman[195454]: 2025-12-03 01:18:44.553996656 +0000 UTC m=+0.045296668 container died 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2-merged.mount: Deactivated successfully.
Dec 03 01:18:44 compute-0 podman[195454]: 2025-12-03 01:18:44.618964167 +0000 UTC m=+0.110264169 container remove 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:18:44 compute-0 systemd[1]: libpod-conmon-31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955.scope: Deactivated successfully.
Dec 03 01:18:44 compute-0 compassionate_ritchie[195447]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
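
compassionate_ritchie is the container behind the `inspect-image` call issued at 01:18:43; its single output line carries the version, commit SHA1, and release name. Pulling those fields apart, as a sketch:

    # Parse the version banner printed by the inspect-image container.
    import re

    line = ("ceph version 18.2.7 "
            "(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)")
    m = re.match(r"ceph version (\S+) \(([0-9a-f]+)\) (\w+) \((\w+)\)", line)
    version, sha1, release, stability = m.groups()
    # -> ('18.2.7', '6b0e9880...', 'reef', 'stable')
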
Dec 03 01:18:44 compute-0 systemd[1]: libpod-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope: Deactivated successfully.
Dec 03 01:18:44 compute-0 conmon[195447]: conmon 6e8eefbe0b9cd3223c0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope/container/memory.events
Dec 03 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.708027227 +0000 UTC m=+0.614079386 container died 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5204bc3182ad956b7f2dabf9c3b03b0d6b2eea500775c0a5481485a8ed6d3909-merged.mount: Deactivated successfully.
Dec 03 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.780471642 +0000 UTC m=+0.100566741 container create fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Dec 03 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.816840943 +0000 UTC m=+0.722893032 container remove 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:44 compute-0 systemd[1]: Started libpod-conmon-fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe.scope.
Dec 03 01:18:44 compute-0 systemd[1]: libpod-conmon-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope: Deactivated successfully.
Dec 03 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.751565104 +0000 UTC m=+0.071660233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:44 compute-0 sudo[195360]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Dec 03 01:18:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.912298257 +0000 UTC m=+0.232393386 container init fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.932503775 +0000 UTC m=+0.252598884 container start fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.937329284 +0000 UTC m=+0.257424403 container attach fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 01:18:44 compute-0 podman[195498]: 2025-12-03 01:18:44.994926283 +0000 UTC m=+0.132496195 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:18:45 compute-0 sudo[195509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:45 compute-0 sudo[195509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:45 compute-0 sudo[195509]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:45 compute-0 sudo[195547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:18:45 compute-0 sudo[195547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:45 compute-0 sudo[195547]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:18:45 compute-0 sudo[195572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:45 compute-0 sudo[195572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:45 compute-0 sudo[195572]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:45 compute-0 sudo[195610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 01:18:45 compute-0 sudo[195610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
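The cephadm binary invoked here is the hashed copy the orchestrator ships to each host under /var/lib/ceph/<fsid>/; its check-host subcommand verifies host prerequisites (container engine, chrony, LVM, hostname) before daemons are placed. A rough equivalent with the system-installed cephadm, as a sketch rather than the exact call from the log:

    # Verify this host can run Ceph daemons
    sudo cephadm check-host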
Dec 03 01:18:45 compute-0 ceph-mon[192821]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:45 compute-0 ceph-mon[192821]: Saving service mon spec with placement count:5
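"Saving service mon spec with placement count:5" records the orchestrator persisting the service specification dispatched by the "orch apply mon" command logged just above. A minimal sketch of issuing such a spec yourself, with the placement count taken from the log and the YAML file name purely hypothetical:

    # Inline placement
    ceph orch apply mon --placement=5

    # Or the same spec from a file (mon-spec.yaml is a hypothetical name)
    ceph orch apply -i mon-spec.yaml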
Dec 03 01:18:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:45 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:45 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 03 01:18:45 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 03 01:18:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 03 01:18:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:45 compute-0 jolly_nightingale[195496]: Scheduled mgr update...
Dec 03 01:18:45 compute-0 systemd[1]: libpod-fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe.scope: Deactivated successfully.
Dec 03 01:18:45 compute-0 podman[195468]: 2025-12-03 01:18:45.566600944 +0000 UTC m=+0.886696083 container died fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 01:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d-merged.mount: Deactivated successfully.
Dec 03 01:18:45 compute-0 podman[195468]: 2025-12-03 01:18:45.641429916 +0000 UTC m=+0.961525025 container remove fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:45 compute-0 systemd[1]: libpod-conmon-fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe.scope: Deactivated successfully.
Dec 03 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.750277153 +0000 UTC m=+0.068453471 container create 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:18:45 compute-0 sudo[195610]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:18:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.725813813 +0000 UTC m=+0.043990161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:45 compute-0 systemd[1]: Started libpod-conmon-6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c.scope.
Dec 03 01:18:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
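These kernel messages are informational: the XFS filesystem backing the overlay bind mounts was created without the bigtime feature, so its inode timestamps cap at 2038. One way to check, assuming /var/lib/containers is the mount in question and an xfsprogs new enough to report the flag:

    # bigtime=1 means 64-bit timestamps; bigtime=0 matches the 2038 warning above
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'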
Dec 03 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.89996282 +0000 UTC m=+0.218139198 container init 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:18:45 compute-0 sudo[195687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:45 compute-0 sudo[195687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.915847855 +0000 UTC m=+0.234024203 container start 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:18:45 compute-0 sudo[195687]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.921952089 +0000 UTC m=+0.240128437 container attach 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:18:45 compute-0 sshd-session[194579]: Connection closed by invalid user postgres 193.32.162.157 port 45126 [preauth]
Dec 03 01:18:46 compute-0 sudo[195717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:18:46 compute-0 sudo[195717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:46 compute-0 sudo[195717]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:46 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 03 01:18:46 compute-0 sudo[195742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:46 compute-0 sudo[195742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:46 compute-0 sudo[195742]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:46 compute-0 sudo[195768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:18:46 compute-0 sudo[195768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
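The "ls" subcommand run here enumerates the Ceph daemons deployed on this host as JSON; the orchestrator uses it to refresh its daemon inventory. A sketch with the system cephadm (jq is an assumption, used only for readability):

    # List deployed daemon names on this host
    sudo cephadm ls | jq '.[].name'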
Dec 03 01:18:46 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:46 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service crash spec with placement *
Dec 03 01:18:46 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 03 01:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 03 01:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:46 compute-0 kind_chandrasekhar[195694]: Scheduled crash update...
Dec 03 01:18:46 compute-0 ceph-mon[192821]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:46 compute-0 ceph-mon[192821]: Saving service mgr spec with placement count:2
Dec 03 01:18:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:46 compute-0 systemd[1]: libpod-6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c.scope: Deactivated successfully.
Dec 03 01:18:46 compute-0 podman[195666]: 2025-12-03 01:18:46.555239814 +0000 UTC m=+0.873416172 container died 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a-merged.mount: Deactivated successfully.
Dec 03 01:18:46 compute-0 podman[195666]: 2025-12-03 01:18:46.663097443 +0000 UTC m=+0.981273761 container remove 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:18:46 compute-0 systemd[1]: libpod-conmon-6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c.scope: Deactivated successfully.
Dec 03 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.798151871 +0000 UTC m=+0.097987327 container create 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.753644296 +0000 UTC m=+0.053479842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:46 compute-0 systemd[1]: Started libpod-conmon-2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1.scope.
Dec 03 01:18:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.939359924 +0000 UTC m=+0.239195400 container init 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.961603061 +0000 UTC m=+0.261438507 container start 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.966054959 +0000 UTC m=+0.265890425 container attach 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:18:47 compute-0 podman[195910]: 2025-12-03 01:18:47.193821021 +0000 UTC m=+0.122476308 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Dec 03 01:18:47 compute-0 podman[195910]: 2025-12-03 01:18:47.533302132 +0000 UTC m=+0.461957349 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1756499979' entity='client.admin' 
Dec 03 01:18:47 compute-0 ceph-mon[192821]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:47 compute-0 ceph-mon[192821]: Saving service crash spec with placement *
Dec 03 01:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1756499979' entity='client.admin' 
Dec 03 01:18:47 compute-0 systemd[1]: libpod-2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1.scope: Deactivated successfully.
Dec 03 01:18:47 compute-0 podman[195848]: 2025-12-03 01:18:47.573239205 +0000 UTC m=+0.873074681 container died 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b-merged.mount: Deactivated successfully.
Dec 03 01:18:47 compute-0 podman[195848]: 2025-12-03 01:18:47.648272184 +0000 UTC m=+0.948107630 container remove 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec 03 01:18:47 compute-0 systemd[1]: libpod-conmon-2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1.scope: Deactivated successfully.
Dec 03 01:18:47 compute-0 sudo[195768]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.739461095 +0000 UTC m=+0.065179147 container create ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:18:47 compute-0 systemd[1]: Started libpod-conmon-ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646.scope.
Dec 03 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.705427101 +0000 UTC m=+0.031145203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:47 compute-0 sudo[196002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:47 compute-0 sudo[196002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:47 compute-0 sudo[196002]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.891839249 +0000 UTC m=+0.217557291 container init ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.9002548 +0000 UTC m=+0.225972832 container start ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.905120349 +0000 UTC m=+0.230838371 container attach ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:47 compute-0 sudo[196032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:18:47 compute-0 sudo[196032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:47 compute-0 sudo[196032]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:48 compute-0 sudo[196059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:48 compute-0 sudo[196059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:48 compute-0 sudo[196059]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:48 compute-0 ceph-mgr[193109]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 03 01:18:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
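TOO_FEW_OSDS is expected at this stage of bootstrap: no OSDs exist yet, so the OSD count (0) is below osd_pool_default_size (1), and the warning clears once OSDs are deployed. A sketch of inspecting and resolving it (standard Ceph/cephadm usage, not taken from the log):

    # Show the raised health checks in detail
    ceph health detail

    # Deploy OSDs on every unused, available device to clear the warning
    ceph orch apply osd --all-available-devices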
Dec 03 01:18:48 compute-0 sudo[196084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:18:48 compute-0 sudo[196084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:48 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 196139 (sysctl)
Dec 03 01:18:48 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 03 01:18:48 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 03 01:18:48 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
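This dispatch tells cephadm to maintain the client.admin keyring on every host carrying the _admin label, which is why /etc/ceph/ceph.client.admin.keyring keeps appearing in the container mounts above. The equivalent direct invocation, reconstructed from the cmd= payload in the log:

    ceph orch client-keyring set client.admin label:_admin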
Dec 03 01:18:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Dec 03 01:18:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:48 compute-0 systemd[1]: libpod-ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646.scope: Deactivated successfully.
Dec 03 01:18:48 compute-0 podman[196147]: 2025-12-03 01:18:48.600395539 +0000 UTC m=+0.067435022 container died ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3-merged.mount: Deactivated successfully.
Dec 03 01:18:48 compute-0 podman[196147]: 2025-12-03 01:18:48.670467096 +0000 UTC m=+0.137506509 container remove ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:18:48 compute-0 systemd[1]: libpod-conmon-ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646.scope: Deactivated successfully.
Dec 03 01:18:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:48 compute-0 ceph-mon[192821]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 03 01:18:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.785732237 +0000 UTC m=+0.074341740 container create 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:48 compute-0 sudo[196084]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.754013218 +0000 UTC m=+0.042622801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:48 compute-0 systemd[1]: Started libpod-conmon-53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d.scope.
Dec 03 01:18:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.931791869 +0000 UTC m=+0.220401372 container init 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.943935137 +0000 UTC m=+0.232544680 container start 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.950122724 +0000 UTC m=+0.238732247 container attach 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:18:49 compute-0 sudo[196196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:49 compute-0 sudo[196196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:49 compute-0 sudo[196196]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:49 compute-0 sudo[196223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:18:49 compute-0 sudo[196223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:49 compute-0 sudo[196223]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:49 compute-0 sudo[196248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:49 compute-0 sudo[196248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:49 compute-0 sudo[196248]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:49 compute-0 sudo[196283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 03 01:18:49 compute-0 sudo[196283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:49 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 03 01:18:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:49 compute-0 ceph-mgr[193109]: [cephadm INFO root] Added label _admin to host compute-0
Dec 03 01:18:49 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 03 01:18:49 compute-0 goofy_ptolemy[196193]: Added label _admin to host compute-0
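Labeling the bootstrap host _admin is what makes the client-keyring placement above match it. The direct form of the command dispatched here, reconstructed from the cmd= payload:

    ceph orch host label add compute-0 _admin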
Dec 03 01:18:49 compute-0 systemd[1]: libpod-53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d.scope: Deactivated successfully.
Dec 03 01:18:49 compute-0 podman[196166]: 2025-12-03 01:18:49.539364438 +0000 UTC m=+0.827974021 container died 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed-merged.mount: Deactivated successfully.
Dec 03 01:18:49 compute-0 podman[196166]: 2025-12-03 01:18:49.613083669 +0000 UTC m=+0.901693222 container remove 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:18:49 compute-0 systemd[1]: libpod-conmon-53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d.scope: Deactivated successfully.
Dec 03 01:18:49 compute-0 sudo[196283]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:18:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.712770334 +0000 UTC m=+0.055172101 container create d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:49 compute-0 ceph-mon[192821]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:49 compute-0 ceph-mon[192821]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:49 compute-0 systemd[1]: Started libpod-conmon-d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f.scope.
Dec 03 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.690379532 +0000 UTC m=+0.032781319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:49 compute-0 sudo[196352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:49 compute-0 sudo[196352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:49 compute-0 sudo[196352]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.866708442 +0000 UTC m=+0.209110219 container init d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.874202656 +0000 UTC m=+0.216604413 container start d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.879360364 +0000 UTC m=+0.221762131 container attach d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:18:49 compute-0 sudo[196388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:18:49 compute-0 sudo[196388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:49 compute-0 sudo[196388]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:50 compute-0 sudo[196415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:50 compute-0 sudo[196415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:50 compute-0 sudo[196415]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:50 compute-0 sudo[196440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- inventory --format=json-pretty --filter-for-batch
Dec 03 01:18:50 compute-0 sudo[196440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
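This ceph-volume inventory call is how cephadm discovers candidate disks for OSDs; --filter-for-batch keeps only devices usable by "ceph-volume lvm batch". A sketch of the same query via the system cephadm, with the fsid copied from the log:

    sudo cephadm ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- \
        inventory --format=json-pretty --filter-for-batch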
Dec 03 01:18:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:18:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Dec 03 01:18:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211957164' entity='client.admin' 
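The config set being handled here enables cephadm's OSD memory autotuning, which later derives per-OSD osd_memory_target values from host RAM. The usual way to toggle it, sketched as generic usage since the value itself is not shown in the logged mon_command:

    ceph config set osd osd_memory_target_autotune true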
Dec 03 01:18:50 compute-0 systemd[1]: libpod-d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f.scope: Deactivated successfully.
Dec 03 01:18:50 compute-0 podman[196346]: 2025-12-03 01:18:50.503435135 +0000 UTC m=+0.845836912 container died d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99-merged.mount: Deactivated successfully.
Dec 03 01:18:50 compute-0 podman[196346]: 2025-12-03 01:18:50.606674062 +0000 UTC m=+0.949075809 container remove d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 01:18:50 compute-0 systemd[1]: libpod-conmon-d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f.scope: Deactivated successfully.
Dec 03 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.687893468 +0000 UTC m=+0.058646221 container create a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:18:50 compute-0 systemd[1]: Started libpod-conmon-a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33.scope.
Dec 03 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.654221173 +0000 UTC m=+0.024973926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:50 compute-0 ceph-mon[192821]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:18:50 compute-0 ceph-mon[192821]: Added label _admin to host compute-0
Dec 03 01:18:50 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/211957164' entity='client.admin' 
Dec 03 01:18:50 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.822179753 +0000 UTC m=+0.192932566 container init a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.837189693 +0000 UTC m=+0.207942446 container start a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.844116231 +0000 UTC m=+0.214868984 container attach a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:50 compute-0 podman[196554]: 2025-12-03 01:18:50.989397151 +0000 UTC m=+0.086409075 container create 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:50.957414605 +0000 UTC m=+0.054426579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:18:51 compute-0 systemd[1]: Started libpod-conmon-3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add.scope.
Dec 03 01:18:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.129821492 +0000 UTC m=+0.226833466 container init 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.147388455 +0000 UTC m=+0.244400369 container start 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:18:51 compute-0 musing_neumann[196570]: 167 167
Dec 03 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.153482829 +0000 UTC m=+0.250494813 container attach 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:51 compute-0 systemd[1]: libpod-3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add.scope: Deactivated successfully.
Dec 03 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.158600546 +0000 UTC m=+0.255612520 container died 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb3325b07966b12e0194ae71868b7d851bbdc1eb1083cc4573590477726172e-merged.mount: Deactivated successfully.
Dec 03 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.224865124 +0000 UTC m=+0.321877018 container remove 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:18:51 compute-0 systemd[1]: libpod-conmon-3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add.scope: Deactivated successfully.
Dec 03 01:18:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Dec 03 01:18:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2354559620' entity='client.admin' 
Dec 03 01:18:51 compute-0 charming_haibt[196549]: set mgr/dashboard/cluster/status
Dec 03 01:18:51 compute-0 systemd[1]: libpod-a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33.scope: Deactivated successfully.
Dec 03 01:18:51 compute-0 podman[196606]: 2025-12-03 01:18:51.628403789 +0000 UTC m=+0.047475390 container died a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613-merged.mount: Deactivated successfully.
Dec 03 01:18:51 compute-0 podman[196606]: 2025-12-03 01:18:51.698823016 +0000 UTC m=+0.117894577 container remove a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:18:51 compute-0 systemd[1]: libpod-conmon-a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33.scope: Deactivated successfully.
Dec 03 01:18:51 compute-0 ceph-mon[192821]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:51 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2354559620' entity='client.admin' 
Dec 03 01:18:51 compute-0 sudo[191658]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.067197515 +0000 UTC m=+0.088504336 container create c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.034163879 +0000 UTC m=+0.055470730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:18:52 compute-0 systemd[1]: Started libpod-conmon-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope.
Dec 03 01:18:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.237883393 +0000 UTC m=+0.259190174 container init c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:18:52 compute-0 sudo[196667]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfldhaxtepntocvalsxqeapjhyrxznbo ; /usr/bin/python3'
Dec 03 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.265234676 +0000 UTC m=+0.286541467 container start c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.27132541 +0000 UTC m=+0.292632561 container attach c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:52 compute-0 sudo[196667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:18:52 compute-0 python3[196672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
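[Annotation: the command module call above drives the ceph CLI through a throwaway podman container. A minimal Python sketch of that pattern follows — not the playbook's actual code; the image tag, fsid, volume, and keyring paths are taken from the log, and the helper name run_ceph is hypothetical.]

    import subprocess

    IMAGE = "quay.io/ceph/ceph:v18"
    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"

    def run_ceph(*args: str) -> str:
        """Run one ceph CLI command in a disposable container, as the
        Ansible command module above does, and return its stdout."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # The call logged above would be:
    # run_ceph("config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false")

[Each such call shows up in this log as a full container lifecycle: image pull, create, init, start, attach, died, remove.]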
Dec 03 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.568084638 +0000 UTC m=+0.102912508 container create 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.527719173 +0000 UTC m=+0.062547093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:52 compute-0 systemd[1]: Started libpod-conmon-2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa.scope.
Dec 03 01:18:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f838aa5584e7cf1be4985ee1319f9f4aaa2f36560cd8b2259a8221b89830b19b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f838aa5584e7cf1be4985ee1319f9f4aaa2f36560cd8b2259a8221b89830b19b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.772817291 +0000 UTC m=+0.307645181 container init 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.788693456 +0000 UTC m=+0.323521306 container start 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.794977006 +0000 UTC m=+0.329804886 container attach 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Dec 03 01:18:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003315585' entity='client.admin' 
Dec 03 01:18:53 compute-0 systemd[1]: libpod-2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa.scope: Deactivated successfully.
Dec 03 01:18:53 compute-0 podman[196673]: 2025-12-03 01:18:53.446438911 +0000 UTC m=+0.981266811 container died 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f838aa5584e7cf1be4985ee1319f9f4aaa2f36560cd8b2259a8221b89830b19b-merged.mount: Deactivated successfully.
Dec 03 01:18:53 compute-0 podman[196673]: 2025-12-03 01:18:53.535047949 +0000 UTC m=+1.069875799 container remove 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:53 compute-0 systemd[1]: libpod-conmon-2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa.scope: Deactivated successfully.
Dec 03 01:18:53 compute-0 sudo[196667]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:53 compute-0 ceph-mon[192821]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:53 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4003315585' entity='client.admin' 
Dec 03 01:18:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:54 compute-0 elegant_mclean[196645]: [
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:     {
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "available": false,
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "ceph_device": false,
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "lsm_data": {},
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "lvs": [],
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "path": "/dev/sr0",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "rejected_reasons": [
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "Has a FileSystem",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "Insufficient space (<5GB)"
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         ],
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         "sys_api": {
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "actuators": null,
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "device_nodes": "sr0",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "devname": "sr0",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "human_readable_size": "482.00 KB",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "id_bus": "ata",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "model": "QEMU DVD-ROM",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "nr_requests": "2",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "parent": "/dev/sr0",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "partitions": {},
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "path": "/dev/sr0",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "removable": "1",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "rev": "2.5+",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "ro": "0",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "rotational": "1",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "sas_address": "",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "sas_device_handle": "",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "scheduler_mode": "mq-deadline",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "sectors": 0,
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "sectorsize": "2048",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "size": 493568.0,
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "support_discard": "2048",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "type": "disk",
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:             "vendor": "QEMU"
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:         }
Dec 03 01:18:54 compute-0 elegant_mclean[196645]:     }
Dec 03 01:18:54 compute-0 elegant_mclean[196645]: ]
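[Annotation: the JSON emitted by elegant_mclean above is a ceph-volume style device inventory; cephadm stores it under mgr/cephadm/host.compute-0.devices.0 a few lines below. A sketch of filtering such an inventory for usable OSD devices, assuming only the fields shown here (available, path, rejected_reasons, sys_api.size); the function name usable_devices is hypothetical.]

    import json

    def usable_devices(inventory_json: str, min_bytes: int = 5 * 1024**3) -> list[str]:
        """Return device paths the inventory marks available and that meet a
        minimum size, mirroring the rejection reasons logged above."""
        picked = []
        for dev in json.loads(inventory_json):
            if not dev.get("available"):
                # e.g. /dev/sr0 above: "Has a FileSystem", "Insufficient space (<5GB)"
                continue
            if dev.get("sys_api", {}).get("size", 0) < min_bytes:
                continue
            picked.append(dev["path"])
        return picked

[On this host the only device reported is the 482 KB QEMU DVD-ROM, so the filtered list is empty and no OSDs can be placed.]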
Dec 03 01:18:54 compute-0 systemd[1]: libpod-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope: Deactivated successfully.
Dec 03 01:18:54 compute-0 systemd[1]: libpod-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope: Consumed 2.190s CPU time.
Dec 03 01:18:54 compute-0 podman[196625]: 2025-12-03 01:18:54.425762135 +0000 UTC m=+2.447068986 container died c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e-merged.mount: Deactivated successfully.
Dec 03 01:18:54 compute-0 podman[196625]: 2025-12-03 01:18:54.531123802 +0000 UTC m=+2.552430603 container remove c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:54 compute-0 systemd[1]: libpod-conmon-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope: Deactivated successfully.
Dec 03 01:18:54 compute-0 sudo[196440]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:18:54 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 03 01:18:54 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 03 01:18:54 compute-0 sudo[198681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:54 compute-0 sudo[198681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:54 compute-0 sudo[198681]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:54 compute-0 sudo[198729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klrcfykrpjogbirryexomxoovthoyhwr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724733.9468582-37007-50407713382505/async_wrapper.py j723132833440 30 /home/zuul/.ansible/tmp/ansible-tmp-1764724733.9468582-37007-50407713382505/AnsiballZ_command.py _'
Dec 03 01:18:54 compute-0 sudo[198729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:18:54 compute-0 sudo[198734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 03 01:18:54 compute-0 sudo[198734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:54 compute-0 sudo[198734]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:54 compute-0 ansible-async_wrapper.py[198733]: Invoked with j723132833440 30 /home/zuul/.ansible/tmp/ansible-tmp-1764724733.9468582-37007-50407713382505/AnsiballZ_command.py _
Dec 03 01:18:54 compute-0 ansible-async_wrapper.py[198782]: Starting module and watcher
Dec 03 01:18:54 compute-0 ansible-async_wrapper.py[198782]: Start watching 198783 (30)
Dec 03 01:18:54 compute-0 ansible-async_wrapper.py[198783]: Start module (198783)
Dec 03 01:18:54 compute-0 ansible-async_wrapper.py[198733]: Return async_wrapper task started.
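[Annotation: async_wrapper forks the module and returns immediately; the controller then polls it, as the async_status calls with jid=j723132833440.198733 further down show. A sketch of that poll loop, assuming the usual layout of one JSON results file per jid under the async dir with a "finished" flag; details of the file format are an assumption, not taken from this log.]

    import json
    import os
    import time

    def poll_async_job(jid: str, async_dir: str, interval: float = 1.0,
                       timeout: float = 30.0) -> dict:
        """Read the per-job results file async_wrapper maintains until the
        module reports finished, mirroring async_status mode=status."""
        path = os.path.join(async_dir, jid)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(path):
                try:
                    with open(path) as f:
                        result = json.load(f)
                except ValueError:
                    result = {}  # the file may be mid-write
                if result.get("finished"):
                    return result
            time.sleep(interval)
        raise TimeoutError(f"job {jid} still running")

    # poll_async_job("j723132833440.198733", "/root/.ansible_async")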
Dec 03 01:18:55 compute-0 sudo[198729]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 sudo[198759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:55 compute-0 sudo[198759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198759]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 sudo[198789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph
Dec 03 01:18:55 compute-0 sudo[198789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198789]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 python3[198784]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:18:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.257813421 +0000 UTC m=+0.071801307 container create 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:18:55 compute-0 sudo[198814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:55 compute-0 sudo[198814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198814]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.22843542 +0000 UTC m=+0.042423306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:55 compute-0 systemd[1]: Started libpod-conmon-7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d.scope.
Dec 03 01:18:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbde8df17b6ac9709b04ae729ae70c4eaa216bb8ead2372f452cb249bbcf89a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbde8df17b6ac9709b04ae729ae70c4eaa216bb8ead2372f452cb249bbcf89a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.418644797 +0000 UTC m=+0.232632753 container init 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.433992076 +0000 UTC m=+0.247979942 container start 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:18:55 compute-0 sudo[198854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.conf.new
Dec 03 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.439392631 +0000 UTC m=+0.253380517 container attach 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:18:55 compute-0 sudo[198854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198854]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 sudo[198883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:55 compute-0 sudo[198883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198883]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 ceph-mon[192821]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:18:55 compute-0 ceph-mon[192821]: Updating compute-0:/etc/ceph/ceph.conf
Dec 03 01:18:55 compute-0 sudo[198908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:18:55 compute-0 sudo[198908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198908]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 sshd-session[195765]: Invalid user abc from 193.32.162.157 port 47874
Dec 03 01:18:55 compute-0 sudo[198934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:55 compute-0 sudo[198934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198934]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:55 compute-0 sudo[198977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.conf.new
Dec 03 01:18:55 compute-0 sudo[198977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:55 compute-0 sudo[198977]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:56 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:18:56 compute-0 focused_cray[198858]: 
Dec 03 01:18:56 compute-0 focused_cray[198858]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 03 01:18:56 compute-0 systemd[1]: libpod-7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d.scope: Deactivated successfully.
Dec 03 01:18:56 compute-0 podman[198815]: 2025-12-03 01:18:56.037435857 +0000 UTC m=+0.851423713 container died 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbde8df17b6ac9709b04ae729ae70c4eaa216bb8ead2372f452cb249bbcf89a8-merged.mount: Deactivated successfully.
Dec 03 01:18:56 compute-0 podman[198815]: 2025-12-03 01:18:56.129905655 +0000 UTC m=+0.943893511 container remove 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:18:56 compute-0 systemd[1]: libpod-conmon-7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d.scope: Deactivated successfully.
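[Annotation: focused_cray just returned {"available": true, "backend": "cephadm", "paused": false, "workers": 10} for `orch status --format json`. A hedged sketch of waiting on that status, reusing the hypothetical run_ceph helper from the earlier sketch as an injected callable:]

    import json
    import time

    def wait_for_orchestrator(run_ceph, timeout: float = 300.0,
                              interval: float = 5.0) -> dict:
        """Poll `ceph orch status --format json` until the cephadm backend
        reports available and unpaused, as the repeated checks in this log do."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status = json.loads(run_ceph("orch", "status", "--format", "json"))
            # e.g. {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
            if status.get("available") and not status.get("paused"):
                return status
            time.sleep(interval)
        raise TimeoutError("orchestrator did not become available")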
Dec 03 01:18:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:56 compute-0 ansible-async_wrapper.py[198783]: Module complete (198783)
Dec 03 01:18:56 compute-0 sudo[199060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:56 compute-0 sudo[199060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:56 compute-0 sudo[199060]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:56 compute-0 sudo[199088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.conf.new
Dec 03 01:18:56 compute-0 sudo[199088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:56 compute-0 sudo[199088]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:56 compute-0 sudo[199113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:56 compute-0 sudo[199113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:56 compute-0 sudo[199159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhthvjvjqrlsutwqbhcnreianrqsomab ; /usr/bin/python3'
Dec 03 01:18:56 compute-0 sudo[199159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:18:56 compute-0 sudo[199113]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:56 compute-0 ceph-mon[192821]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:18:56 compute-0 sudo[199164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.conf.new
Dec 03 01:18:56 compute-0 sudo[199164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:56 compute-0 sudo[199164]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:56 compute-0 python3[199163]: ansible-ansible.legacy.async_status Invoked with jid=j723132833440.198733 mode=status _async_dir=/root/.ansible_async
Dec 03 01:18:56 compute-0 sudo[199159]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:56 compute-0 sudo[199189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:56 compute-0 sudo[199189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:56 compute-0 sudo[199189]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:56 compute-0 sudo[199237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 03 01:18:56 compute-0 sudo[199237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:56 compute-0 sudo[199237]: pam_unix(sudo:session): session closed for user root
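[Annotation: the sudo sequence above — mkdir under /tmp/cephadm-<fsid>, touch ceph.conf.new, chown/chmod, then mv into /etc/ceph — is a staged config replace, so readers never see a half-written file. A minimal sketch of the same pattern, with the staging path taken from the log; install_config is a hypothetical name.]

    import os
    import shutil

    def install_config(content: str, staging: str, final: str) -> None:
        """Stage a new config file, set ownership and mode, then move it into
        place, mirroring the touch/chown/chmod/mv sequence in this log."""
        os.makedirs(os.path.dirname(staging), exist_ok=True)
        with open(staging, "w") as f:
            f.write(content)
        os.chown(staging, 0, 0)   # the `chown -R 0:0 ... ceph.conf.new` step
        os.chmod(staging, 0o644)  # the `chmod 644 ... ceph.conf.new` step
        shutil.move(staging, final)  # `mv`; atomic only if both paths share a filesystem

    # install_config(minimal_conf,
    #     "/tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.conf.new",
    #     "/etc/ceph/ceph.conf")

[The same dance repeats below for /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf.]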
Dec 03 01:18:56 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf
Dec 03 01:18:56 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf
Dec 03 01:18:56 compute-0 sudo[199285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeqlbzgooapoazsgwjojphyenqqicgwo ; /usr/bin/python3'
Dec 03 01:18:56 compute-0 sudo[199285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:18:57 compute-0 sudo[199286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:57 compute-0 sudo[199286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199286]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 python3[199295]: ansible-ansible.legacy.async_status Invoked with jid=j723132833440.198733 mode=cleanup _async_dir=/root/.ansible_async
Dec 03 01:18:57 compute-0 sudo[199285]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 sudo[199313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config
Dec 03 01:18:57 compute-0 sudo[199313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199313]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 sudo[199338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:57 compute-0 sudo[199338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199338]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 sudo[199363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config
Dec 03 01:18:57 compute-0 sudo[199363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199363]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 sudo[199388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:57 compute-0 sudo[199388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199388]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 sudo[199435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gigeklommlpftfumkbekfwspigeivilw ; /usr/bin/python3'
Dec 03 01:18:57 compute-0 sudo[199435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:18:57 compute-0 ceph-mon[192821]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:57 compute-0 ceph-mon[192821]: Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf
Dec 03 01:18:57 compute-0 sudo[199437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf.new
Dec 03 01:18:57 compute-0 sudo[199437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199437]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 python3[199442]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:18:57 compute-0 sudo[199435]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 sudo[199464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:57 compute-0 sudo[199464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199464]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:57 compute-0 sudo[199491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:18:57 compute-0 sudo[199491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:57 compute-0 sudo[199491]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:58 compute-0 sudo[199516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:58 compute-0 sudo[199516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:58 compute-0 sudo[199516]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:58 compute-0 sshd-session[195765]: Connection closed by invalid user abc 193.32.162.157 port 47874 [preauth]
Dec 03 01:18:58 compute-0 sudo[199541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf.new
Dec 03 01:18:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:58 compute-0 sudo[199541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:58 compute-0 sudo[199541]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:58 compute-0 sudo[199601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiwjkotntpryxkiwrrptpkdeniuvjkwt ; /usr/bin/python3'
Dec 03 01:18:58 compute-0 sudo[199601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:18:58 compute-0 auditd[706]: Audit daemon rotating log files
Dec 03 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:18:58 compute-0 sudo[199616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:58 compute-0 sudo[199616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:58 compute-0 sudo[199616]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:58 compute-0 python3[199615]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.540638928 +0000 UTC m=+0.093633062 container create 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:18:58 compute-0 sudo[199641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf.new
Dec 03 01:18:58 compute-0 sudo[199641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:58 compute-0 sudo[199641]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.503643999 +0000 UTC m=+0.056638173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:18:58 compute-0 systemd[1]: Started libpod-conmon-01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62.scope.
Dec 03 01:18:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.681129901 +0000 UTC m=+0.234124085 container init 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.699398765 +0000 UTC m=+0.252392889 container start 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.705967533 +0000 UTC m=+0.258961657 container attach 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:18:58 compute-0 sudo[199679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:58 compute-0 sudo[199679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:58 compute-0 sudo[199679]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:58 compute-0 sudo[199709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf.new
Dec 03 01:18:58 compute-0 sudo[199709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:58 compute-0 sudo[199709]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 sudo[199734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:59 compute-0 sudo[199734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199734]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 sudo[199778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf.new /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf
Dec 03 01:18:59 compute-0 sudo[199778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199778]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 03 01:18:59 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 03 01:18:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:18:59 compute-0 flamboyant_matsumoto[199684]: 
Dec 03 01:18:59 compute-0 flamboyant_matsumoto[199684]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 03 01:18:59 compute-0 sudo[199803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:59 compute-0 systemd[1]: libpod-01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62.scope: Deactivated successfully.
Dec 03 01:18:59 compute-0 podman[199642]: 2025-12-03 01:18:59.311117432 +0000 UTC m=+0.864111556 container died 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:18:59 compute-0 sudo[199803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199803]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead-merged.mount: Deactivated successfully.
Dec 03 01:18:59 compute-0 podman[199642]: 2025-12-03 01:18:59.397715502 +0000 UTC m=+0.950709606 container remove 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 01:18:59 compute-0 systemd[1]: libpod-conmon-01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62.scope: Deactivated successfully.
Dec 03 01:18:59 compute-0 sudo[199601]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 sudo[199837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 03 01:18:59 compute-0 sudo[199837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199837]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 sudo[199869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:59 compute-0 sudo[199869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199869]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 ceph-mon[192821]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:18:59 compute-0 sudo[199894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph
Dec 03 01:18:59 compute-0 sudo[199894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199894]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 podman[158098]: time="2025-12-03T01:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22105 "" "Go-http-client/1.1"
Dec 03 01:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3950 "" "Go-http-client/1.1"
Dec 03 01:18:59 compute-0 sudo[199954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthgxavwrwifntegajyqqbymosmczjht ; /usr/bin/python3'
Dec 03 01:18:59 compute-0 sudo[199954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:18:59 compute-0 sudo[199936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:18:59 compute-0 sudo[199936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199936]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 sudo[199970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.client.admin.keyring.new
Dec 03 01:18:59 compute-0 python3[199968]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:18:59 compute-0 sudo[199970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:18:59 compute-0 sudo[199970]: pam_unix(sudo:session): session closed for user root
Dec 03 01:18:59 compute-0 ansible-async_wrapper.py[198782]: Done in kid B.
Dec 03 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.04963092 +0000 UTC m=+0.088293399 container create 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 01:19:00 compute-0 sudo[199996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:00 compute-0 sudo[199996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:00 compute-0 sudo[199996]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.025470828 +0000 UTC m=+0.064133297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:19:00 compute-0 systemd[1]: Started libpod-conmon-4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751.scope.
Dec 03 01:19:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.198613137 +0000 UTC m=+0.237275656 container init 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.226340531 +0000 UTC m=+0.265002990 container start 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.233177736 +0000 UTC m=+0.271840255 container attach 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:19:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:00 compute-0 sudo[200036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:19:00 compute-0 sudo[200036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:00 compute-0 sudo[200036]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:00 compute-0 sudo[200063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:00 compute-0 sudo[200063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:00 compute-0 sudo[200063]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:00 compute-0 sudo[200088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.client.admin.keyring.new
Dec 03 01:19:00 compute-0 sudo[200088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:00 compute-0 sudo[200088]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:00 compute-0 ceph-mon[192821]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 03 01:19:00 compute-0 ceph-mon[192821]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:19:00 compute-0 sudo[200156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:00 compute-0 sudo[200156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:00 compute-0 sudo[200156]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:00 compute-0 podman[200162]: 2025-12-03 01:19:00.862021574 +0000 UTC m=+0.106205122 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:19:00 compute-0 podman[200169]: 2025-12-03 01:19:00.872083792 +0000 UTC m=+0.108028694 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec 03 01:19:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Dec 03 01:19:00 compute-0 podman[200171]: 2025-12-03 01:19:00.887050491 +0000 UTC m=+0.113539263 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Dec 03 01:19:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/457463382' entity='client.admin' 
Dec 03 01:19:00 compute-0 sudo[200219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.client.admin.keyring.new
Dec 03 01:19:00 compute-0 sudo[200219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:00 compute-0 sudo[200219]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:00 compute-0 systemd[1]: libpod-4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751.scope: Deactivated successfully.
Dec 03 01:19:00 compute-0 podman[200177]: 2025-12-03 01:19:00.939813322 +0000 UTC m=+0.163858713 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 01:19:00 compute-0 podman[200294]: 2025-12-03 01:19:00.962041938 +0000 UTC m=+0.033040077 container died 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:19:00 compute-0 sudo[200290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:00 compute-0 sudo[200290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:00 compute-0 sudo[200290]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9-merged.mount: Deactivated successfully.
Dec 03 01:19:01 compute-0 podman[200294]: 2025-12-03 01:19:01.016064285 +0000 UTC m=+0.087062434 container remove 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:01 compute-0 systemd[1]: libpod-conmon-4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751.scope: Deactivated successfully.
Dec 03 01:19:01 compute-0 sudo[200327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.client.admin.keyring.new
Dec 03 01:19:01 compute-0 sudo[200327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 sudo[199954]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 sudo[200327]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 sudo[200352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:01 compute-0 sudo[200352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 sudo[200352]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 sudo[200377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 03 01:19:01 compute-0 sudo[200377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 sudo[200423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buuzthnzwrnbmihlvnktbpwhjhepbvlc ; /usr/bin/python3'
Dec 03 01:19:01 compute-0 sudo[200377]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 sudo[200423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:19:01 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring
Dec 03 01:19:01 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring
Dec 03 01:19:01 compute-0 sudo[200428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:01 compute-0 sudo[200428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:19:01 compute-0 python3[200427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:19:01 compute-0 sudo[200428]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:19:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:19:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.529409556 +0000 UTC m=+0.074668780 container create 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:01 compute-0 sudo[200454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config
Dec 03 01:19:01 compute-0 sudo[200454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 sudo[200454]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.501593209 +0000 UTC m=+0.046852423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:19:01 compute-0 systemd[1]: Started libpod-conmon-2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db.scope.
Dec 03 01:19:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.65108439 +0000 UTC m=+0.196343584 container init 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.67168111 +0000 UTC m=+0.216940304 container start 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.676053624 +0000 UTC m=+0.221312808 container attach 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:01 compute-0 sudo[200492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:01 compute-0 sudo[200492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 sudo[200492]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 sudo[200521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config
Dec 03 01:19:01 compute-0 sudo[200521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 sudo[200521]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:01 compute-0 ceph-mon[192821]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:01 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/457463382' entity='client.admin' 
Dec 03 01:19:01 compute-0 sudo[200546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:01 compute-0 sudo[200546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:01 compute-0 sudo[200546]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 sudo[200575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring.new
Dec 03 01:19:02 compute-0 sudo[200575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:02 compute-0 sudo[200575]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Dec 03 01:19:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3890487829' entity='client.admin' 
Dec 03 01:19:02 compute-0 sudo[200615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:02 compute-0 sudo[200615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:02 compute-0 sudo[200615]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 systemd[1]: libpod-2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db.scope: Deactivated successfully.
Dec 03 01:19:02 compute-0 podman[200453]: 2025-12-03 01:19:02.270860147 +0000 UTC m=+0.816119441 container died 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4-merged.mount: Deactivated successfully.
Dec 03 01:19:02 compute-0 podman[200453]: 2025-12-03 01:19:02.362642655 +0000 UTC m=+0.907901849 container remove 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:19:02 compute-0 systemd[1]: libpod-conmon-2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db.scope: Deactivated successfully.
Dec 03 01:19:02 compute-0 sudo[200423]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 sudo[200648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:19:02 compute-0 sudo[200648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:02 compute-0 sudo[200648]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 sudo[200679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:02 compute-0 sudo[200679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:02 compute-0 sudo[200679]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 sudo[200704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring.new
Dec 03 01:19:02 compute-0 sudo[200704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:02 compute-0 sudo[200704]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 sudo[200750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xybqzoipaedrwnlkwcabixdpapnknbkb ; /usr/bin/python3'
Dec 03 01:19:02 compute-0 sudo[200750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:19:02 compute-0 python3[200755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:19:02 compute-0 ceph-mon[192821]: Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring
Dec 03 01:19:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3890487829' entity='client.admin' 
Dec 03 01:19:02 compute-0 sudo[200778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:02 compute-0 sudo[200778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:02 compute-0 sudo[200778]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:02 compute-0 podman[200784]: 2025-12-03 01:19:02.938172316 +0000 UTC m=+0.097994367 container create 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 01:19:02 compute-0 podman[200784]: 2025-12-03 01:19:02.902847445 +0000 UTC m=+0.062669556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:19:03 compute-0 systemd[1]: Started libpod-conmon-7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038.scope.
Dec 03 01:19:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:03 compute-0 sudo[200816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring.new
Dec 03 01:19:03 compute-0 podman[200784]: 2025-12-03 01:19:03.094992487 +0000 UTC m=+0.254814538 container init 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:19:03 compute-0 sudo[200816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[200816]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:03 compute-0 podman[200784]: 2025-12-03 01:19:03.117199353 +0000 UTC m=+0.277021394 container start 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:03 compute-0 podman[200784]: 2025-12-03 01:19:03.12374267 +0000 UTC m=+0.283564761 container attach 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 01:19:03 compute-0 sudo[200847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:03 compute-0 sudo[200847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[200847]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:03 compute-0 sudo[200872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring.new
Dec 03 01:19:03 compute-0 sudo[200872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[200872]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:03 compute-0 sudo[200897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:03 compute-0 sudo[200897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[200897]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:03 compute-0 sudo[200922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-3765feb2-36f8-5b86-b74c-64e9221f9c4c/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring.new /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring
Dec 03 01:19:03 compute-0 sudo[200922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[200922]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:03 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev bc9c9826-325b-4f1a-99c8-11b99d20a9ef (Updating crash deployment (+1 -> 1))
Dec 03 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec 03 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 03 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 03 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:03 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 03 01:19:03 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 03 01:19:03 compute-0 sudo[200966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:03 compute-0 sudo[200966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[200966]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Dec 03 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 03 01:19:03 compute-0 sudo[200991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:03 compute-0 sudo[200991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[200991]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:03 compute-0 ceph-mon[192821]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 03 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 03 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 03 01:19:03 compute-0 sudo[201017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:03 compute-0 sudo[201017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:03 compute-0 sudo[201017]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:04 compute-0 sudo[201042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:19:04 compute-0 sudo[201042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 03 01:19:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 03 01:19:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 03 01:19:04 compute-0 happy_mclean[200838]: set require_min_compat_client to mimic
Dec 03 01:19:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 03 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.693813665 +0000 UTC m=+0.078656019 container create d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:04 compute-0 systemd[1]: libpod-7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038.scope: Deactivated successfully.
Dec 03 01:19:04 compute-0 podman[200784]: 2025-12-03 01:19:04.700931868 +0000 UTC m=+1.860753909 container died 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.661522207 +0000 UTC m=+0.046364611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:04 compute-0 systemd[1]: Started libpod-conmon-d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a.scope.
Dec 03 01:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458-merged.mount: Deactivated successfully.
Dec 03 01:19:04 compute-0 podman[200784]: 2025-12-03 01:19:04.795359012 +0000 UTC m=+1.955181033 container remove 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:19:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:04 compute-0 systemd[1]: libpod-conmon-7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038.scope: Deactivated successfully.
Dec 03 01:19:04 compute-0 sudo[200750]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.834201849 +0000 UTC m=+0.219044243 container init d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.849361468 +0000 UTC m=+0.234203792 container start d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.854663814 +0000 UTC m=+0.239506208 container attach d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:19:04 compute-0 agitated_wright[201129]: 167 167
Dec 03 01:19:04 compute-0 systemd[1]: libpod-d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a.scope: Deactivated successfully.
Dec 03 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.861688734 +0000 UTC m=+0.246531058 container died d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5bdf2329d7c2ea74b7a6855304786106879e03154c209336dafa3b371de0f4-merged.mount: Deactivated successfully.
Dec 03 01:19:04 compute-0 ceph-mon[192821]: Deploying daemon crash.compute-0 on compute-0
Dec 03 01:19:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 03 01:19:04 compute-0 ceph-mon[192821]: osdmap e3: 0 total, 0 up, 0 in
Dec 03 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.934938424 +0000 UTC m=+0.319780748 container remove d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 01:19:04 compute-0 systemd[1]: libpod-conmon-d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a.scope: Deactivated successfully.
Dec 03 01:19:05 compute-0 systemd[1]: Reloading.
Dec 03 01:19:05 compute-0 systemd-sysv-generator[201176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:05 compute-0 systemd-rc-local-generator[201172]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:05 compute-0 sudo[201208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvtphztfulhwdiaaalcteuizbqrweelh ; /usr/bin/python3'
Dec 03 01:19:05 compute-0 sudo[201208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:19:05 compute-0 systemd[1]: Reloading.
Dec 03 01:19:05 compute-0 python3[201212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:19:05 compute-0 systemd-rc-local-generator[201241]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:05 compute-0 systemd-sysv-generator[201245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:05 compute-0 podman[201234]: 2025-12-03 01:19:05.702030821 +0000 UTC m=+0.062026583 container create bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:05 compute-0 podman[201234]: 2025-12-03 01:19:05.679253396 +0000 UTC m=+0.039249168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:19:05 compute-0 systemd[1]: Started libpod-conmon-bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d.scope.
Dec 03 01:19:05 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:19:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:05 compute-0 ceph-mon[192821]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:05 compute-0 podman[201234]: 2025-12-03 01:19:05.975217582 +0000 UTC m=+0.335213324 container init bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:19:06 compute-0 podman[201234]: 2025-12-03 01:19:06.003403866 +0000 UTC m=+0.363399638 container start bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:06 compute-0 podman[201234]: 2025-12-03 01:19:06.017961829 +0000 UTC m=+0.377957581 container attach bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:19:06 compute-0 podman[201267]: 2025-12-03 01:19:06.108246187 +0000 UTC m=+0.173052643 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.361290481 +0000 UTC m=+0.085180467 container create d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.32268587 +0000 UTC m=+0.046575906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.559490188 +0000 UTC m=+0.283380224 container init d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.574021161 +0000 UTC m=+0.297911147 container start d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:19:06 compute-0 bash[201334]: d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b
Dec 03 01:19:06 compute-0 systemd[1]: Started Ceph crash.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:19:06 compute-0 sudo[201042]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev bc9c9826-325b-4f1a-99c8-11b99d20a9ef (Updating crash deployment (+1 -> 1))
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event bc9c9826-325b-4f1a-99c8-11b99d20a9ef (Updating crash deployment (+1 -> 1)) in 3 seconds
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e172a726-e37c-4dea-ac57-a1fbabd58d38 does not exist
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 2279a671-6d1b-4c56-8cb2-1d26cdb52ade (Updating mgr deployment (+1 -> 2))
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.jzzeoa on compute-0
Dec 03 01:19:06 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.jzzeoa on compute-0
Dec 03 01:19:06 compute-0 sudo[201374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:06 compute-0 sudo[201374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:06 compute-0 sudo[201374]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:06 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 03 01:19:06 compute-0 sudo[201398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:06 compute-0 sudo[201398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:06 compute-0 sudo[201398]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:06 compute-0 sudo[201405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:06 compute-0 sudo[201405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:06 compute-0 sudo[201405]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:06 compute-0 sudo[201450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:06 compute-0 sudo[201450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:06 compute-0 sudo[201450]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:06 compute-0 sudo[201455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:06.998+0000 7fb179b59640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:06.998+0000 7fb179b59640 -1 AuthRegistry(0x7fb174066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.000+0000 7fb179b59640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.000+0000 7fb179b59640 -1 AuthRegistry(0x7fb179b58000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 03 01:19:07 compute-0 sudo[201455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.003+0000 7fb1737fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.004+0000 7fb179b59640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 03 01:19:07 compute-0 sudo[201455]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 03 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 03 01:19:07 compute-0 sudo[201499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:07 compute-0 sudo[201499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:07 compute-0 sudo[201499]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:07 compute-0 sudo[201518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 03 01:19:07 compute-0 sudo[201518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:07 compute-0 sudo[201559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:19:07 compute-0 sudo[201559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:07 compute-0 sudo[201518]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 03 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 03 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 03 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 03 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Added host compute-0
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 03 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 03 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 03 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 03 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 03 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 03 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Dec 03 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 youthful_cerf[201268]: Added host 'compute-0' with addr '192.168.122.100'
Dec 03 01:19:07 compute-0 youthful_cerf[201268]: Scheduled mon update...
Dec 03 01:19:07 compute-0 youthful_cerf[201268]: Scheduled mgr update...
Dec 03 01:19:07 compute-0 youthful_cerf[201268]: Scheduled osd.default_drive_group update...
Dec 03 01:19:07 compute-0 systemd[1]: libpod-bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d.scope: Deactivated successfully.
Dec 03 01:19:07 compute-0 podman[201234]: 2025-12-03 01:19:07.527486052 +0000 UTC m=+1.887481814 container died bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3-merged.mount: Deactivated successfully.
Dec 03 01:19:07 compute-0 podman[201234]: 2025-12-03 01:19:07.64506782 +0000 UTC m=+2.005063592 container remove bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 01:19:07 compute-0 ceph-mon[192821]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:07 compute-0 ceph-mon[192821]: Deploying daemon mgr.compute-0.jzzeoa on compute-0
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:07 compute-0 systemd[1]: libpod-conmon-bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d.scope: Deactivated successfully.
Dec 03 01:19:07 compute-0 sudo[201208]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.817981468 +0000 UTC m=+0.086728067 container create 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.783188925 +0000 UTC m=+0.051935524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:07 compute-0 systemd[1]: Started libpod-conmon-1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e.scope.
Dec 03 01:19:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.937400023 +0000 UTC m=+0.206146622 container init 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.954946523 +0000 UTC m=+0.223693112 container start 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:19:07 compute-0 nostalgic_shamir[201669]: 167 167
Dec 03 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.960385593 +0000 UTC m=+0.229132162 container attach 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:19:07 compute-0 systemd[1]: libpod-1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e.scope: Deactivated successfully.
Dec 03 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.962114007 +0000 UTC m=+0.230860596 container died 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a253c3d1122e38e5cb81a6bfc860786f0c4523255b78bc2d1db4f2553753afbe-merged.mount: Deactivated successfully.
Dec 03 01:19:08 compute-0 podman[201653]: 2025-12-03 01:19:08.02494509 +0000 UTC m=+0.293691649 container remove 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:08 compute-0 systemd[1]: libpod-conmon-1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e.scope: Deactivated successfully.
Dec 03 01:19:08 compute-0 sudo[201708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhnpvkvmanimbtxnwlftmaewwojibswl ; /usr/bin/python3'
Dec 03 01:19:08 compute-0 sudo[201708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:19:08 compute-0 systemd[1]: Reloading.
Dec 03 01:19:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:08 compute-0 systemd-rc-local-generator[201740]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:08 compute-0 systemd-sysv-generator[201745]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:08 compute-0 python3[201712]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 03 01:19:08 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 1 completed events
Dec 03 01:19:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.405966579 +0000 UTC m=+0.105384446 container create 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.364337881 +0000 UTC m=+0.063755818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:19:08 compute-0 systemd[1]: Started libpod-conmon-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope.
Dec 03 01:19:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:08 compute-0 systemd[1]: Reloading.
Dec 03 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.602689228 +0000 UTC m=+0.302107155 container init 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.643748732 +0000 UTC m=+0.343166579 container start 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.649224072 +0000 UTC m=+0.348641939 container attach 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:19:08 compute-0 ceph-mon[192821]: Added host compute-0
Dec 03 01:19:08 compute-0 ceph-mon[192821]: Saving service mon spec with placement compute-0
Dec 03 01:19:08 compute-0 ceph-mon[192821]: Saving service mgr spec with placement compute-0
Dec 03 01:19:08 compute-0 ceph-mon[192821]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 03 01:19:08 compute-0 ceph-mon[192821]: Saving service osd.default_drive_group spec with placement compute-0
Dec 03 01:19:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:08 compute-0 systemd-rc-local-generator[201798]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:08 compute-0 systemd-sysv-generator[201801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:09 compute-0 systemd[1]: Starting Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 03 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953033915' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:19:09 compute-0 affectionate_satoshi[201767]: 
Dec 03 01:19:09 compute-0 affectionate_satoshi[201767]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":89,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-03T01:17:36.090330+0000","services":{}},"progress_events":{"2279a671-6d1b-4c56-8cb2-1d26cdb52ade":{"message":"Updating mgr deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 03 01:19:09 compute-0 systemd[1]: libpod-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope: Deactivated successfully.
Dec 03 01:19:09 compute-0 conmon[201767]: conmon 4d6894c58cf6802add4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope/container/memory.events
Dec 03 01:19:09 compute-0 podman[201749]: 2025-12-03 01:19:09.33217492 +0000 UTC m=+1.031592797 container died 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 01:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5-merged.mount: Deactivated successfully.
Dec 03 01:19:09 compute-0 podman[201749]: 2025-12-03 01:19:09.429767814 +0000 UTC m=+1.129185661 container remove 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:19:09 compute-0 systemd[1]: libpod-conmon-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope: Deactivated successfully.
Dec 03 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.463164982 +0000 UTC m=+0.115286990 container create 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:09 compute-0 sudo[201708]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.423847362 +0000 UTC m=+0.075969450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/var/lib/ceph/mgr/ceph-compute-0.jzzeoa supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.588855278 +0000 UTC m=+0.240977316 container init 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.607360582 +0000 UTC m=+0.259482630 container start 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:09 compute-0 bash[201875]: 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831
Dec 03 01:19:09 compute-0 systemd[1]: Started Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:19:09 compute-0 sudo[201559]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:09 compute-0 ceph-mgr[201906]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:19:09 compute-0 ceph-mgr[201906]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 03 01:19:09 compute-0 ceph-mon[192821]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2953033915' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:19:09 compute-0 ceph-mgr[201906]: pidfile_write: ignore empty --pid-file
Dec 03 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 03 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:09 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 2279a671-6d1b-4c56-8cb2-1d26cdb52ade (Updating mgr deployment (+1 -> 2))
Dec 03 01:19:09 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 2279a671-6d1b-4c56-8cb2-1d26cdb52ade (Updating mgr deployment (+1 -> 2)) in 3 seconds
Dec 03 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 03 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:09 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'alerts'
Dec 03 01:19:09 compute-0 sudo[201931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:09 compute-0 sudo[201931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:09 compute-0 sudo[201931]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:09 compute-0 sudo[201956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:19:09 compute-0 sudo[201956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:09 compute-0 sudo[201956]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:10 compute-0 sshd-session[199607]: Connection closed by authenticating user root 193.32.162.157 port 34462 [preauth]
Dec 03 01:19:10 compute-0 sudo[201981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:10 compute-0 sudo[201981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:10 compute-0 sudo[201981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 03 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'balancer'
Dec 03 01:19:10 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa[201902]: 2025-12-03T01:19:10.149+0000 7fcf5ea18140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 03 01:19:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:10 compute-0 sudo[202006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:10 compute-0 sudo[202006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:10 compute-0 sudo[202006]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:10 compute-0 sudo[202038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:10 compute-0 sudo[202038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:10 compute-0 podman[202031]: 2025-12-03 01:19:10.400840087 +0000 UTC m=+0.130904800 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 01:19:10 compute-0 sudo[202038]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 03 01:19:10 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa[201902]: 2025-12-03T01:19:10.417+0000 7fcf5ea18140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 03 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'cephadm'
Dec 03 01:19:10 compute-0 sudo[202076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:19:10 compute-0 sudo[202076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:11 compute-0 podman[202167]: 2025-12-03 01:19:11.384665208 +0000 UTC m=+0.140670751 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:19:11 compute-0 podman[202167]: 2025-12-03 01:19:11.483186747 +0000 UTC m=+0.239192310 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:19:11 compute-0 ceph-mon[192821]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:11 compute-0 sudo[202076]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 27546808-74c7-4256-a5e6-9d0b617f7c05 does not exist
Dec 03 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 03 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 0455ede0-4afe-4f3f-8d3d-e4ebe79e5b6a (Updating mgr deployment (-1 -> 1))
Dec 03 01:19:12 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.jzzeoa from compute-0 -- ports [8765]
Dec 03 01:19:12 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.jzzeoa from compute-0 -- ports [8765]
Dec 03 01:19:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:12 compute-0 sudo[202263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:12 compute-0 sudo[202263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:12 compute-0 sudo[202263]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:12 compute-0 sudo[202289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:12 compute-0 sudo[202289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:12 compute-0 sudo[202289]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:12 compute-0 sudo[202314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:12 compute-0 sudo[202314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:12 compute-0 sudo[202314]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:12 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'crash'
Dec 03 01:19:12 compute-0 sudo[202339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --name mgr.compute-0.jzzeoa --force --tcp-ports 8765
Dec 03 01:19:12 compute-0 sudo[202339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:12 compute-0 ceph-mgr[201906]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 03 01:19:12 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'dashboard'
Dec 03 01:19:12 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa[201902]: 2025-12-03T01:19:12.793+0000 7fcf5ea18140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:12 compute-0 ceph-mon[192821]: Removing daemon mgr.compute-0.jzzeoa from compute-0 -- ports [8765]
Dec 03 01:19:13 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:19:13 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 2 completed events
Dec 03 01:19:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 03 01:19:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:13 compute-0 podman[202431]: 2025-12-03 01:19:13.370847543 +0000 UTC m=+0.116251564 container died 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 01:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef-merged.mount: Deactivated successfully.
Dec 03 01:19:13 compute-0 podman[202431]: 2025-12-03 01:19:13.433098311 +0000 UTC m=+0.178502332 container remove 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:13 compute-0 bash[202431]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa
Dec 03 01:19:13 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.jzzeoa.service: Main process exited, code=exited, status=143/n/a
Dec 03 01:19:13 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.jzzeoa.service: Failed with result 'exit-code'.
Dec 03 01:19:13 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:19:13 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.jzzeoa.service: Consumed 5.464s CPU time.
Dec 03 01:19:13 compute-0 systemd[1]: Reloading.
Dec 03 01:19:13 compute-0 systemd-rc-local-generator[202506]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:13 compute-0 systemd-sysv-generator[202512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:14 compute-0 ceph-mon[192821]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:14 compute-0 sudo[202339]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:14 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.jzzeoa
Dec 03 01:19:14 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.jzzeoa
Dec 03 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"} v 0) v1
Dec 03 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]: dispatch
Dec 03 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]': finished
Dec 03 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 03 01:19:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:14 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 0455ede0-4afe-4f3f-8d3d-e4ebe79e5b6a (Updating mgr deployment (-1 -> 1))
Dec 03 01:19:14 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 0455ede0-4afe-4f3f-8d3d-e4ebe79e5b6a (Updating mgr deployment (-1 -> 1)) in 2 seconds
Dec 03 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 03 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2fee12d3-9cf0-45ee-b9e9-9903722e8bfa does not exist
Dec 03 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:14 compute-0 sudo[202522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:14 compute-0 sudo[202522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:14 compute-0 sudo[202522]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:14 compute-0 sudo[202547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:14 compute-0 sudo[202547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:14 compute-0 sudo[202547]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:14 compute-0 sudo[202572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:14 compute-0 sudo[202572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:14 compute-0 sudo[202572]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:14 compute-0 sudo[202597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:19:14 compute-0 sudo[202597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:15 compute-0 ceph-mon[192821]: Removing key for mgr.compute-0.jzzeoa
Dec 03 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]: dispatch
Dec 03 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]': finished
Dec 03 01:19:15 compute-0 ceph-mon[192821]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.330035077 +0000 UTC m=+0.081008480 container create d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.295039709 +0000 UTC m=+0.046013192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:15 compute-0 systemd[1]: Started libpod-conmon-d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85.scope.
Dec 03 01:19:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.476484696 +0000 UTC m=+0.227458139 container init d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.497851164 +0000 UTC m=+0.248824547 container start d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:19:15 compute-0 festive_boyd[202678]: 167 167
Dec 03 01:19:15 compute-0 systemd[1]: libpod-d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85.scope: Deactivated successfully.
Dec 03 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.503270483 +0000 UTC m=+0.254243946 container attach d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec 03 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.507925203 +0000 UTC m=+0.258898606 container died d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-aec8b2e2aced7e8d643cdc28b6a581334dbafe603d70b665efb6bafa9c25b88c-merged.mount: Deactivated successfully.
Dec 03 01:19:15 compute-0 podman[202675]: 2025-12-03 01:19:15.564790123 +0000 UTC m=+0.153990234 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.57947859 +0000 UTC m=+0.330451973 container remove d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:19:15 compute-0 systemd[1]: libpod-conmon-d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85.scope: Deactivated successfully.
Dec 03 01:19:15 compute-0 podman[202721]: 2025-12-03 01:19:15.807926083 +0000 UTC m=+0.079472671 container create 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:15 compute-0 podman[202721]: 2025-12-03 01:19:15.775231624 +0000 UTC m=+0.046778262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:15 compute-0 systemd[1]: Started libpod-conmon-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope.
Dec 03 01:19:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:15 compute-0 podman[202721]: 2025-12-03 01:19:15.98349634 +0000 UTC m=+0.255042928 container init 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:19:16 compute-0 podman[202721]: 2025-12-03 01:19:16.025116607 +0000 UTC m=+0.296663195 container start 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:16 compute-0 podman[202721]: 2025-12-03 01:19:16.031918552 +0000 UTC m=+0.303465180 container attach 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 01:19:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:17 compute-0 ceph-mon[192821]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:17 compute-0 gracious_visvesvaraya[202735]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:19:17 compute-0 gracious_visvesvaraya[202735]: --> relative data size: 1.0
Dec 03 01:19:17 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 03 01:19:17 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 551e0f4a-0b7e-47cf-9522-b82f94d4038c
Dec 03 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"} v 0) v1
Dec 03 01:19:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]: dispatch
Dec 03 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 03 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]': finished
Dec 03 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 03 01:19:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 03 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:17 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
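The "failed to return metadata for osd.0" message is expected at this point: the mon has just allocated the OSD id via "osd new", but the daemon has not booted and registered its metadata yet. Once the OSD is up, a standard query from a node with an admin keyring should return its host, devices and bluestore details:

    # Ask the cluster for osd.0's registered metadata (empty until the OSD boots)
    ceph osd metadata 0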
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 03 01:19:18 compute-0 lvm[202799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 03 01:19:18 compute-0 lvm[202799]: VG ceph_vg0 finished
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 03 01:19:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]: dispatch
Dec 03 01:19:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]': finished
Dec 03 01:19:18 compute-0 ceph-mon[192821]: osdmap e4: 1 total, 0 up, 1 in
Dec 03 01:19:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:18 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 3 completed events
Dec 03 01:19:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 03 01:19:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 03 01:19:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760755779' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]:  stderr: got monmap epoch 1
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: --> Creating keyring file for osd.0
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 03 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 551e0f4a-0b7e-47cf-9522-b82f94d4038c --setuser ceph --setgroup ceph
Dec 03 01:19:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 03 01:19:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 03 01:19:19 compute-0 ceph-mon[192821]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/760755779' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 03 01:19:19 compute-0 ceph-mon[192821]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 03 01:19:19 compute-0 ceph-mon[192821]: Cluster is now healthy
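With one OSD registered, the TOO_FEW_OSDS warning clears (osd_pool_default_size is 1 here, per the message above) and the cluster reports healthy. The usual way to confirm from a shell:

    # Overall status and any active health checks
    ceph -s
    ceph health detail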
Dec 03 01:19:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:18.730+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:18.731+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:18.731+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:18.731+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
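The repeated "_read_bdev_label unable to decode label" and "_read_fsid unparsable uuid" stderr lines are a normal artifact of --mkfs probing a brand-new LV that carries no bluestore label yet; the prepare step succeeds immediately afterwards. Once mkfs has run, the label can be inspected with the standard tool:

    # Verify the freshly written bluestore label on the osd.0 device
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0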
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 03 01:19:21 compute-0 ceph-mon[192821]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
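That completes the full "lvm create" cycle for osd.0: prepare (osd new, keyring, mkfs) plus activate (tmpfs OSD dir, prime-osd-dir, block symlink, ownership fixes). Because the OSD dir lives on tmpfs, the activate half is what gets replayed after a reboot; a sketch of doing that by hand, using the osd id and osd fsid from this log:

    # Re-run just the activation step for osd.0; --no-systemd skips unit
    # management, which cephadm handles itself.
    ceph-volume lvm activate 0 551e0f4a-0b7e-47cf-9522-b82f94d4038c --no-systemd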
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 03 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 38b78a6e-cf5e-4c74-a51c-1bb51cf53a18
Dec 03 01:19:21 compute-0 sshd-session[202019]: Connection closed by authenticating user root 193.32.162.157 port 55228 [preauth]
Dec 03 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"} v 0) v1
Dec 03 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]: dispatch
Dec 03 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 03 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]': finished
Dec 03 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 03 01:19:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 03 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:22 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:22 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 03 01:19:22 compute-0 lvm[203763]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 03 01:19:22 compute-0 lvm[203763]: VG ceph_vg1 finished
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 03 01:19:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]: dispatch
Dec 03 01:19:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]': finished
Dec 03 01:19:22 compute-0 ceph-mon[192821]: osdmap e5: 2 total, 0 up, 2 in
Dec 03 01:19:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 03 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511928018' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]:  stderr: got monmap epoch 1
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: --> Creating keyring file for osd.1
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 03 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 38b78a6e-cf5e-4c74-a51c-1bb51cf53a18 --setuser ceph --setgroup ceph
Dec 03 01:19:23 compute-0 ceph-mon[192821]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/511928018' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 03 01:19:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:25 compute-0 ceph-mon[192821]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:22.979+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:22.979+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:22.980+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:22.980+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 03 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 03 01:19:26 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 03 01:19:26 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 03 01:19:26 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 03 01:19:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:26 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2ebf7eac-7883-4286-84a2-653e10a1ae8a
Dec 03 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"} v 0) v1
Dec 03 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]: dispatch
Dec 03 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 03 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]': finished
Dec 03 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 03 01:19:26 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 03 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:26 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:26 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:26 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 03 01:19:27 compute-0 lvm[204729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 03 01:19:27 compute-0 lvm[204729]: VG ceph_vg2 finished
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 03 01:19:27 compute-0 ceph-mon[192821]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]: dispatch
Dec 03 01:19:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]': finished
Dec 03 01:19:27 compute-0 ceph-mon[192821]: osdmap e6: 3 total, 0 up, 3 in
Dec 03 01:19:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 03 01:19:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2008043112' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]:  stderr: got monmap epoch 1
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: --> Creating keyring file for osd.2
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 03 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 2ebf7eac-7883-4286-84a2-653e10a1ae8a --setuser ceph --setgroup ceph
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:19:28
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] No pools available
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:19:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2008043112' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 03 01:19:29 compute-0 ceph-mon[192821]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:29 compute-0 podman[158098]: time="2025-12-03T01:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25446 "" "Go-http-client/1.1"
Dec 03 01:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Dec 03 01:19:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:27.757+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:27.758+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:27.758+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]:  stderr: 2025-12-03T01:19:27.759+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 03 01:19:30 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 03 01:19:30 compute-0 systemd[1]: libpod-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope: Deactivated successfully.
Dec 03 01:19:30 compute-0 systemd[1]: libpod-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope: Consumed 8.735s CPU time.
Dec 03 01:19:30 compute-0 podman[205665]: 2025-12-03 01:19:30.707630269 +0000 UTC m=+0.064378353 container died 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:19:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6-merged.mount: Deactivated successfully.
Dec 03 01:19:30 compute-0 podman[205665]: 2025-12-03 01:19:30.831874938 +0000 UTC m=+0.188622982 container remove 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:30 compute-0 systemd[1]: libpod-conmon-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope: Deactivated successfully.
Dec 03 01:19:30 compute-0 sudo[202597]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:31 compute-0 sudo[205681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:31 compute-0 sudo[205681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:31 compute-0 sudo[205681]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:31 compute-0 podman[205705]: 2025-12-03 01:19:31.146468852 +0000 UTC m=+0.091244843 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:19:31 compute-0 sudo[205738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:31 compute-0 sudo[205738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:31 compute-0 sudo[205738]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:31 compute-0 podman[205707]: 2025-12-03 01:19:31.186229782 +0000 UTC m=+0.112925879 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:19:31 compute-0 podman[205706]: 2025-12-03 01:19:31.192681458 +0000 UTC m=+0.127710589 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:19:31 compute-0 podman[205708]: 2025-12-03 01:19:31.201790752 +0000 UTC m=+0.133926309 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
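The health_status entries above are podman's periodic healthchecks for the edpm-managed containers, each running the /openstack/healthcheck script mounted into the container. A single check can be triggered on demand; exit status 0 means healthy:

    # Run one container's healthcheck manually
    podman healthcheck run node_exporter; echo "exit=$?"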
Dec 03 01:19:31 compute-0 sudo[205812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:31 compute-0 sudo[205812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:31 compute-0 sudo[205812]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:31 compute-0 sudo[205837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:19:31 compute-0 sudo[205837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:19:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:19:31 compute-0 openstack_network_exporter[160250]: 
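The openstack_network_exporter errors are consistent with a compute node: ovn-northd runs on controllers, so no control socket for it exists here, and the dpif-netdev/pmd-* appctl calls fail because no userspace (netdev/PMD) datapath is configured. The datapaths that do exist can be listed with the standard appctl command:

    # Show configured OVS datapaths (expect a system datapath, no netdev/PMD one)
    ovs-appctl dpif/show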
Dec 03 01:19:31 compute-0 ceph-mon[192821]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:31 compute-0 podman[205899]: 2025-12-03 01:19:31.957369394 +0000 UTC m=+0.087741033 container create 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:31.918794004 +0000 UTC m=+0.049165723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:32 compute-0 systemd[1]: Started libpod-conmon-027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0.scope.
Dec 03 01:19:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.127320836 +0000 UTC m=+0.257692485 container init 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.14538846 +0000 UTC m=+0.275760109 container start 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.152303347 +0000 UTC m=+0.282675046 container attach 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:32 compute-0 eager_varahamihira[205915]: 167 167
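The short-lived eager_varahamihira container printing "167 167" looks like cephadm's uid/gid probe: a throwaway container run from the ceph image to learn which numeric owner ceph files should get (167:167 is the ceph user and group in the image). A sketch of an equivalent probe, hedged as an assumption about what the container actually ran:

    # Reproduce the probable uid/gid probe against the same image
    podman run --rm \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph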
Dec 03 01:19:32 compute-0 systemd[1]: libpod-027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0.scope: Deactivated successfully.
Dec 03 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.160386065 +0000 UTC m=+0.290757744 container died 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1a8ef57ab1f911a209a4330e036c180e891ec01bad672598cec5c8fac3e18ec-merged.mount: Deactivated successfully.
Dec 03 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.238826658 +0000 UTC m=+0.369198287 container remove 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:32 compute-0 systemd[1]: libpod-conmon-027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0.scope: Deactivated successfully.
Dec 03 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.493056613 +0000 UTC m=+0.087599149 container create 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.46062104 +0000 UTC m=+0.055163596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:32 compute-0 systemd[1]: Started libpod-conmon-50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63.scope.
Dec 03 01:19:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.639706027 +0000 UTC m=+0.234248603 container init 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.659343721 +0000 UTC m=+0.253886257 container start 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.666569406 +0000 UTC m=+0.261111942 container attach 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
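The JSON that hungry_bassi prints below is the output of the "ceph-volume ... lvm list --format json" call dispatched via sudo at 01:19:31: one entry per OSD id, carrying the backing device, LV path and the ceph.* LV tags. Being machine-readable, it can be filtered with jq; a sketch assuming cephadm and jq are available on the host:

    # Map each OSD id to its LV path from the lvm list JSON
    sudo cephadm ceph-volume -- lvm list --format json \
        | jq -r 'to_entries[] | "\(.key) -> \(.value[0].lv_path)"'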
Dec 03 01:19:33 compute-0 hungry_bassi[205954]: {
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:     "0": [
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:         {
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "devices": [
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "/dev/loop3"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             ],
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_name": "ceph_lv0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_size": "21470642176",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "name": "ceph_lv0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "tags": {
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cluster_name": "ceph",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.crush_device_class": "",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.encrypted": "0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osd_id": "0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.type": "block",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.vdo": "0"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             },
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "type": "block",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "vg_name": "ceph_vg0"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:         }
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:     ],
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:     "1": [
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:         {
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "devices": [
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "/dev/loop4"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             ],
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_name": "ceph_lv1",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_size": "21470642176",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "name": "ceph_lv1",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "tags": {
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cluster_name": "ceph",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.crush_device_class": "",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.encrypted": "0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osd_id": "1",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.type": "block",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.vdo": "0"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             },
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "type": "block",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "vg_name": "ceph_vg1"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:         }
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:     ],
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:     "2": [
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:         {
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "devices": [
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "/dev/loop5"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             ],
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_name": "ceph_lv2",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_size": "21470642176",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "name": "ceph_lv2",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "tags": {
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.cluster_name": "ceph",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.crush_device_class": "",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.encrypted": "0",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osd_id": "2",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.type": "block",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:                 "ceph.vdo": "0"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             },
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "type": "block",
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:             "vg_name": "ceph_vg2"
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:         }
Dec 03 01:19:33 compute-0 hungry_bassi[205954]:     ]
Dec 03 01:19:33 compute-0 hungry_bassi[205954]: }
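
The block above is the JSON shape that `ceph-volume lvm list --format json` emits: one key per OSD id, each holding the logical volume, its backing device, and the `ceph.*` LV tags that cephadm relies on later. A minimal sketch of pulling the useful fields out of that output, assuming it has been captured to a file named lvm_list.json (the filename and summary format are illustrative, not from the log):

    import json

    # Parse the captured `ceph-volume lvm list --format json` output and
    # print one summary line per OSD.
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id in sorted(lvm, key=int):
        for lv in lvm[osd_id]:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv['devices'])}"
                  f" osd_fsid={tags.get('ceph.osd_fsid', '?')}")
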
Dec 03 01:19:33 compute-0 systemd[1]: libpod-50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63.scope: Deactivated successfully.
Dec 03 01:19:33 compute-0 podman[205938]: 2025-12-03 01:19:33.508039193 +0000 UTC m=+1.102581719 container died 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d-merged.mount: Deactivated successfully.
Dec 03 01:19:33 compute-0 ceph-mon[192821]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:33 compute-0 podman[205938]: 2025-12-03 01:19:33.614841314 +0000 UTC m=+1.209383860 container remove 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:19:33 compute-0 systemd[1]: libpod-conmon-50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63.scope: Deactivated successfully.
Dec 03 01:19:33 compute-0 sudo[205837]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec 03 01:19:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 03 01:19:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:19:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:33 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 03 01:19:33 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 03 01:19:33 compute-0 sudo[205974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:33 compute-0 sudo[205974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:33 compute-0 sudo[205974]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:33 compute-0 sshd-session[203756]: Connection closed by authenticating user root 193.32.162.157 port 36876 [preauth]
Dec 03 01:19:33 compute-0 sudo[205999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:33 compute-0 sudo[205999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:33 compute-0 sudo[205999]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:34 compute-0 sudo[206024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:34 compute-0 sudo[206024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:34 compute-0 sudo[206024]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:34 compute-0 sudo[206050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:19:34 compute-0 sudo[206050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 03 01:19:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:34 compute-0 ceph-mon[192821]: Deploying daemon osd.0 on compute-0
Dec 03 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.806227151 +0000 UTC m=+0.079564893 container create b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.769452737 +0000 UTC m=+0.042790549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:34 compute-0 systemd[1]: Started libpod-conmon-b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c.scope.
Dec 03 01:19:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.956079827 +0000 UTC m=+0.229417629 container init b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.97412526 +0000 UTC m=+0.247463012 container start b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.981238682 +0000 UTC m=+0.254576494 container attach b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:19:34 compute-0 sharp_lovelace[206131]: 167 167
Dec 03 01:19:34 compute-0 systemd[1]: libpod-b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c.scope: Deactivated successfully.
Dec 03 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.989378711 +0000 UTC m=+0.262716463 container died b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0716357fe3369ba6aeff026075cd50bb3d13e11db36edeb69bf57701511f0866-merged.mount: Deactivated successfully.
Dec 03 01:19:35 compute-0 podman[206114]: 2025-12-03 01:19:35.05829511 +0000 UTC m=+0.331632842 container remove b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:35 compute-0 systemd[1]: libpod-conmon-b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c.scope: Deactivated successfully.
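
The one-shot sharp_lovelace container printed only "167 167", which is consistent with cephadm probing the ceph uid:gid inside the image before it writes files that will be bind-mounted into daemons; that reading is an inference, not something the log states. A sketch of the same lookup with the standard library, assuming a "ceph" account exists (it does in the Ceph images, where it is uid/gid 167):

    import grp
    import pwd

    # Raises KeyError when no "ceph" account exists on the host or image.
    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)
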
Dec 03 01:19:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:35 compute-0 sshd-session[204390]: error: kex_exchange_identification: read: Connection timed out
Dec 03 01:19:35 compute-0 sshd-session[204390]: banner exchange: Connection from 14.103.158.69 port 45082: Connection timed out
Dec 03 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.510192618 +0000 UTC m=+0.094496706 container create 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.478171306 +0000 UTC m=+0.062475444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:35 compute-0 systemd[1]: Started libpod-conmon-35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c.scope.
Dec 03 01:19:35 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:35 compute-0 ceph-mon[192821]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.669119367 +0000 UTC m=+0.253423475 container init 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.703811568 +0000 UTC m=+0.288115646 container start 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.709336529 +0000 UTC m=+0.293640607 container attach 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:19:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:36 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test[206179]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 03 01:19:36 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test[206179]:                             [--no-systemd] [--no-tmpfs]
Dec 03 01:19:36 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test[206179]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 03 01:19:36 compute-0 systemd[1]: libpod-35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c.scope: Deactivated successfully.
Dec 03 01:19:36 compute-0 podman[206162]: 2025-12-03 01:19:36.36081462 +0000 UTC m=+0.945118708 container died 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c-merged.mount: Deactivated successfully.
Dec 03 01:19:36 compute-0 podman[206162]: 2025-12-03 01:19:36.470081914 +0000 UTC m=+1.054385972 container remove 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:19:36 compute-0 systemd[1]: libpod-conmon-35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c.scope: Deactivated successfully.
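
The "osd-0-activate-test" container above exits on "ceph-volume activate: error: unrecognized arguments: --bad-option", and argparse prints its usage text first. Passing a knowingly bad flag and scanning the resulting usage output is a common way to probe which options a particular CLI build supports, and that is what this run looks like (again an inference; the log only shows the failed invocation). A sketch of such a probe:

    import subprocess

    # Deliberately pass a bogus flag; argparse responds with the full
    # usage text, which lists the options this build actually accepts.
    proc = subprocess.run(
        ["ceph-volume", "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    usage = proc.stdout + proc.stderr
    print("supports --no-tmpfs:", "--no-tmpfs" in usage)
    print("supports --no-systemd:", "--no-systemd" in usage)
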
Dec 03 01:19:36 compute-0 podman[206186]: 2025-12-03 01:19:36.558944345 +0000 UTC m=+0.151525140 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 03 01:19:36 compute-0 systemd[1]: Reloading.
Dec 03 01:19:36 compute-0 systemd-rc-local-generator[206259]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:36 compute-0 systemd-sysv-generator[206263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:37 compute-0 systemd[1]: Reloading.
Dec 03 01:19:37 compute-0 systemd-rc-local-generator[206303]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:37 compute-0 systemd-sysv-generator[206306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:37 compute-0 ceph-mon[192821]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:37 compute-0 systemd[1]: Starting Ceph osd.0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:19:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.232359103 +0000 UTC m=+0.093602133 container create 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.19872933 +0000 UTC m=+0.059972390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:38 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.383506152 +0000 UTC m=+0.244749242 container init 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.399876273 +0000 UTC m=+0.261119313 container start 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.406738589 +0000 UTC m=+0.267981629 container attach 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:19:39 compute-0 sudo[206427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxexnvivwkbjgntfoccaalcfctyoumty ; /usr/bin/python3'
Dec 03 01:19:39 compute-0 sudo[206427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:19:39 compute-0 ceph-mon[192821]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 03 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 03 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 03 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 03 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 03 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 03 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 03 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 03 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 03 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 03 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: --> ceph-volume raw activate successful for osd ID: 0
Dec 03 01:19:39 compute-0 bash[206356]: --> ceph-volume raw activate successful for osd ID: 0
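
ceph-volume logs every external command it runs, so the raw-activate sequence for osd.0 can be read straight off the "Running command" lines above: fix ownership of the OSD dir, prime it from the BlueStore device, re-own the device nodes, symlink block, then re-own once more. A sketch that replays that exact sequence (paths are this host's; do not run it against a live OSD, it is shown only to make the ordering explicit):

    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-0"
    lv = "/dev/mapper/ceph_vg0-ceph_lv0"

    # Each entry mirrors one "Running command" line from the log above.
    for cmd in [
        ["chown", "-R", "ceph:ceph", osd_dir],
        ["ceph-bluestore-tool", "prime-osd-dir",
         "--path", osd_dir, "--no-mon-config", "--dev", lv],
        ["chown", "-h", "ceph:ceph", lv],
        ["chown", "-R", "ceph:ceph", "/dev/dm-0"],
        ["ln", "-s", lv, osd_dir + "/block"],
        ["chown", "-R", "ceph:ceph", osd_dir],
    ]:
        subprocess.run(cmd, check=True)
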
Dec 03 01:19:39 compute-0 python3[206462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
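
The Ansible task above shells out to `podman run ... ceph status --format json | jq .osdmap.num_up_osds` just to read one integer. The same extraction without the jq pipe, sketched against the host CLI for brevity (the log wraps the call in podman; paths are the ones shown in the command):

    import json
    import subprocess

    # Ask the cluster for status as JSON and pull out the up-OSD count,
    # the value the jq filter .osdmap.num_up_osds extracts.
    out = subprocess.run(
        ["ceph", "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["osdmap"]["num_up_osds"])
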
Dec 03 01:19:39 compute-0 systemd[1]: libpod-4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8.scope: Deactivated successfully.
Dec 03 01:19:39 compute-0 podman[206356]: 2025-12-03 01:19:39.858200751 +0000 UTC m=+1.719443791 container died 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:19:39 compute-0 systemd[1]: libpod-4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8.scope: Consumed 1.478s CPU time.
Dec 03 01:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe-merged.mount: Deactivated successfully.
Dec 03 01:19:39 compute-0 podman[206356]: 2025-12-03 01:19:39.955674713 +0000 UTC m=+1.816917713 container remove 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:39 compute-0 podman[206527]: 2025-12-03 01:19:39.974758793 +0000 UTC m=+0.101611159 container create 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:39.940407331 +0000 UTC m=+0.067259787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:19:40 compute-0 systemd[1]: Started libpod-conmon-6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e.scope.
Dec 03 01:19:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:40.11257382 +0000 UTC m=+0.239426226 container init 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:40.133262751 +0000 UTC m=+0.260115157 container start 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:40.14022806 +0000 UTC m=+0.267080516 container attach 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:19:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.367224026 +0000 UTC m=+0.077473300 container create 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.331242852 +0000 UTC m=+0.041492186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.543830178 +0000 UTC m=+0.254079452 container init 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.564641952 +0000 UTC m=+0.274891196 container start 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:40 compute-0 bash[206596]: 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431
Dec 03 01:19:40 compute-0 systemd[1]: Started Ceph osd.0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:19:40 compute-0 sudo[206050]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:40 compute-0 ceph-osd[206633]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:19:40 compute-0 ceph-osd[206633]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 03 01:19:40 compute-0 ceph-osd[206633]: pidfile_write: ignore empty --pid-file
Dec 03 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) close
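
The repeated bdev warning above says the backing LV reports st_blksize 512 while BlueStore keeps its 4 KiB bdev_block_size anyway, so the message is informational rather than an error. A sketch of the same stat-and-round-up check (device path is this host's; the rounding rule here is illustrative, not BlueStore's actual code):

    import os

    # stat() the block device and compare the kernel-reported block size
    # with the 4 KiB size BlueStore uses regardless.
    st = os.stat("/dev/mapper/ceph_vg0-ceph_lv0")
    reported = st.st_blksize      # 512 in the log above
    print(reported, "->", max(reported, 4096))
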
Dec 03 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec 03 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 03 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:40 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 03 01:19:40 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 03 01:19:40 compute-0 podman[206634]: 2025-12-03 01:19:40.738881724 +0000 UTC m=+0.109222984 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 03 01:19:40 compute-0 sudo[206659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:40 compute-0 sudo[206659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:40 compute-0 sudo[206659]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 03 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785632389' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:19:40 compute-0 bold_bassi[206566]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":120,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T01:19:30.172459+0000","services":{}},"progress_events":{}}
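Annotation: the blob above is a "ceph status --format json" snapshot taken mid-deployment: health is HEALTH_OK, three OSDs are registered and marked "in" (num_in_osds: 3) but none report "up" yet (num_up_osds: 0), because the daemons launched in this window are still booting. A short sketch pulling out those fields (the JSON literal is trimmed to the keys used here):

    import json

    # Trimmed copy of the status payload logged above.
    raw_status = '''{"health": {"status": "HEALTH_OK"},
                     "osdmap": {"num_osds": 3, "num_up_osds": 0, "num_in_osds": 3}}'''
    status = json.loads(raw_status)
    assert status["health"]["status"] == "HEALTH_OK"
    osd = status["osdmap"]
    # 3 OSDs exist and are "in", but none are "up" yet this early in deployment.
    print(osd["num_osds"], osd["num_up_osds"], osd["num_in_osds"])  # -> 3 0 3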
Dec 03 01:19:40 compute-0 systemd[1]: libpod-6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e.scope: Deactivated successfully.
Dec 03 01:19:40 compute-0 sudo[206690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:40 compute-0 sudo[206690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:40 compute-0 sudo[206690]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) close
Dec 03 01:19:40 compute-0 podman[206710]: 2025-12-03 01:19:40.948214367 +0000 UTC m=+0.076752661 container died 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.967 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.968 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
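Annotation: the two messages above mean the agent executes every pollster through a ThreadPoolExecutor with a single worker, so a polling cycle is fully serialized; recent Ceilometer exposes a threads_to_process_pollsters option in the [polling] section to widen that pool (option name taken from the Ceilometer polling code; verify against the installed release). The register/skip pattern in the lines that follow boils down to this sketch (pollster names are examples from this log; the poll function is a hypothetical stand-in):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        """Stand-in pollster: discovery found no local instances this cycle."""
        resources = []
        if not resources:
            return f"Skip pollster {name}, no resources found this cycle"
        return f"Polled {name}"

    names = ["disk.device.capacity", "disk.device.read.bytes", "cpu"]
    with ThreadPoolExecutor(max_workers=1) as executor:  # the "[1] threads" above
        for line in executor.map(poll, names):
            print(line)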
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.973 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110-merged.mount: Deactivated successfully.
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:19:40 compute-0 sudo[206726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:19:41 compute-0 sudo[206726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:41 compute-0 sudo[206726]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:41 compute-0 podman[206710]: 2025-12-03 01:19:41.019662721 +0000 UTC m=+0.148200975 container remove 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:19:41 compute-0 systemd[1]: libpod-conmon-6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e.scope: Deactivated successfully.
Dec 03 01:19:41 compute-0 sudo[206427]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:41 compute-0 sudo[206758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:19:41 compute-0 sudo[206758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
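Annotation: the sudo command above runs the copied cephadm binary with "_orch deploy" to place osd.1 into a systemd-managed container. Once it completes, the daemon appears as a templated unit on the host; a hedged sketch for checking the result (the ceph-<fsid>@<daemon> unit naming follows cephadm's convention and may vary by release):

    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"  # fsid from the command line above

    # cephadm wraps each daemon in a templated systemd unit on the host.
    subprocess.run(["systemctl", "status", f"ceph-{FSID}@osd.1.service"], check=False)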
Dec 03 01:19:41 compute-0 ceph-osd[206633]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 03 01:19:41 compute-0 ceph-osd[206633]: load: jerasure load: lrc 
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) close
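Annotation: during startup osd.0 probes its block device twice (open, read geometry, close). The F_SET_FILE_RW_HINT EINVAL is harmless on devices that do not support write hints, and BlueStore keeps its 4 KiB block size despite the 512-byte st_blksize the device reports. The reported size is internally consistent: 0x4ffc00000 is 21470642176 bytes, 4 MiB short of a full 20 GiB, which Ceph rounds to "20 GiB" for display. A quick arithmetic check:

    size = 0x4ffc00000            # hex size reported by bdev open
    assert size == 21470642176    # decimal size from the same log line
    print(f"{size / 2**30:.3f} GiB")                      # -> 19.996 GiB
    print((20 * 2**30 - size) // 2**20, "MiB below 20 GiB")  # -> 4 MiB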
Dec 03 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.623917708 +0000 UTC m=+0.098562000 container create 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:19:41 compute-0 ceph-mon[192821]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 03 01:19:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:41 compute-0 ceph-mon[192821]: Deploying daemon osd.1 on compute-0
Dec 03 01:19:41 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/785632389' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.589097505 +0000 UTC m=+0.063741847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:41 compute-0 systemd[1]: Started libpod-conmon-34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31.scope.
Dec 03 01:19:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.764853045 +0000 UTC m=+0.239497387 container init 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.780953269 +0000 UTC m=+0.255597561 container start 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:19:41 compute-0 ceph-osd[206633]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 03 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.790918004 +0000 UTC m=+0.265562306 container attach 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:19:41 compute-0 ceph-osd[206633]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
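The mClockScheduler figures above are reproducible from what appear to be the Reef HDD defaults: osd_mclock_max_sequential_bandwidth_hdd of 150 MiB/s spread across one shard, and osd_mclock_max_capacity_iops_hdd of 315 IOPS. A sketch under those assumed defaults:

    seq_bandwidth = 150 * 1024 * 1024  # assumed osd_mclock_max_sequential_bandwidth_hdd
    num_shards = 1                     # assumed HDD shard count
    iops_capacity = 315.0              # assumed osd_mclock_max_capacity_iops_hdd

    capacity_per_shard = seq_bandwidth / num_shards
    cost_per_io = capacity_per_shard / iops_capacity

    print(f"{cost_per_io:.2f} bytes/io")             # 499321.90, as logged
    print(f"{capacity_per_shard:.2f} bytes/second")  # 157286400.00, as logged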
Dec 03 01:19:41 compute-0 magical_hodgkin[206846]: 167 167
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:41 compute-0 systemd[1]: libpod-34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31.scope: Deactivated successfully.
Dec 03 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.798099189 +0000 UTC m=+0.272743491 container died 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
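The journal interleaves the podman events for container 34b3dedb… slightly out of order (the image-pull line lands after the create line but carries an earlier monotonic offset). Sorting by the m=+… offset printed in each event line recovers the lifecycle:

    # (event, monotonic offset in seconds) copied from the podman lines above
    events = [
        ("create", 0.098562000),
        ("pull",   0.063741847),
        ("init",   0.239497387),
        ("start",  0.255597561),
        ("attach", 0.265562306),
        ("died",   0.272743491),
    ]
    for name, offset in sorted(events, key=lambda e: e[1]):
        print(f"m=+{offset:.9f}  {name}")
    # pull -> create -> init -> start -> attach -> died, ~0.27 s end to end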
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluefs mount
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluefs mount shared_bdev_used = 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
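The db_paths sizes above follow from the block device size: 20397110067 is exactly 95% of the 21470642176-byte shared device, truncated to an integer. A minimal check, assuming a 95% share per RocksDB path on a shared main device (an assumption consistent with these exact numbers):

    bdev_size = 21470642176               # shared block device, from the bdev open lines
    db_path_size = bdev_size * 95 // 100  # assumed 95% share per RocksDB path
    print(db_path_size)                   # 20397110067, matching both db and db.slow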
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Git sha 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: DB SUMMARY
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: DB Session ID:  CYHBGYLFJSJZ0MXF1HD1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                     Options.env: 0x55cd958f7d50
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                Options.info_log: 0x55cd94af47e0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                 Options.wal_dir: db.wal
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.write_buffer_manager: 0x55cd959fc460
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.row_cache: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                              Options.wal_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.wal_compression: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_jobs: 4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Compression algorithms supported:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kZSTD supported: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
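With level_compaction_dynamic_level_bytes off, the level targets in the dump above grow as max_bytes_for_level_base times the multiplier per level. A sketch of the resulting targets, which outrun this 20 GiB OSD after L2:

    base = 1073741824   # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0          # Options.max_bytes_for_level_multiplier
    num_levels = 7      # Options.num_levels

    for level in range(1, num_levels):
        target = int(base * mult ** (level - 1))
        print(f"L{level}: {target:>16d} bytes")
    # L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, ... so data on a 20 GiB device
    # effectively never needs levels past L2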
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
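The memtable settings repeated for each column family also compose into round numbers: 6 memtables of 16 MiB merge into one flush of roughly 96 MiB, and the 64-memtable ceiling caps memtable memory at exactly 1 GiB, the same figure as max_total_wal_size:

    write_buffer_size = 16777216  # 16 MiB per memtable (Options.write_buffer_size)
    min_merge = 6                 # Options.min_write_buffer_number_to_merge
    max_buffers = 64              # Options.max_write_buffer_number

    print(min_merge * write_buffer_size)    # 100663296 bytes (~96 MiB) per flush
    print(max_buffers * write_buffer_size)  # 1073741824 bytes, a 1 GiB memtable ceiling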
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4180)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae1090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
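The table_factory dump above reports the shared BinnedLRUCache in raw bytes; a quick unit-check sketch in Python (standalone, values transcribed from the dump) confirms the capacity is 512 MiB and that num_shard_bits: 4 means 16 LRU shards:

    # Unit check on the block_cache_options values printed in the dump above.
    cap = 536870912            # block_cache_options.capacity, in bytes
    print(cap / 2**20, "MiB")  # 512.0 MiB
    print(2 ** 4, "shards")    # num_shard_bits: 4 -> 16 cache shards

The same cache pointer (0x55cd94ae1090) appears in every column-family dump that follows, so the 512 MiB cache is shared across families rather than allocated per family.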
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8346a1d68eed95b957b0eace13869e26620226129516a7e68c68bf6934eaa13-merged.mount: Deactivated successfully.
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4180)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae1090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4180)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae1090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.882858224 +0000 UTC m=+0.357502496 container remove 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
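The recovery lines above enumerate BlueStore's sharded column families; a small sketch (Python; names copied from the log, prefix semantics deliberately not asserted) that groups them by shard prefix:

    import re
    from collections import defaultdict

    # Column family names as recovered from MANIFEST-000032 above.
    cfs = ["default", "m-0", "m-1", "m-2", "p-0", "p-1", "p-2",
           "O-0", "O-1", "O-2", "L", "P"]
    shards = defaultdict(list)
    for name in cfs:
        m = re.match(r"^(.+)-(\d+)$", name)
        shards[m.group(1) if m else name].append(name)
    print(dict(shards))  # {'default': [...], 'm': ['m-0', 'm-1', 'm-2'], ...}

This is consistent with "max_column_family is 11" in the manifest recovery line: IDs 0 through 11 for twelve families.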
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fb464fcb-4fed-4245-84a5-bd0adda5e152
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724781882014, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724781882446, "job": 1, "event": "recovery_finished"}
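EVENT_LOG_v1 lines embed a JSON payload after the marker, so they can be machine-parsed straight out of the journal; a minimal sketch (Python; the sample line is copied from the recovery_started event above):

    import json

    # Everything after "EVENT_LOG_v1 " is a JSON object.
    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764724781882014, '
            '"job": 1, "event": "recovery_started", "wal_files": [31]}')
    payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    print(payload["event"], payload["wal_files"])  # recovery_started [31]

By the same field, the WAL replay here took about 432 microseconds: time_micros 1764724781882446 in recovery_finished minus 1764724781882014 in recovery_started.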
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
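The _open_db line carries the effective RocksDB tuning as one comma-separated key=value string; a minimal parsing sketch (Python; parse_rocksdb_options is a hypothetical helper, and it assumes a flat list with no nested {...} groups, which holds for this particular line):

    def parse_rocksdb_options(opts: str) -> dict:
        # Split a flat "k1=v1,k2=v2" options string; no nested {...} handling.
        out = {}
        for item in opts.split(","):
            key, _, value = item.partition("=")
            out[key] = value
        return out

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,"
            "compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,"
            "max_background_jobs=4,level0_file_num_compaction_trigger=8,"
            "max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,"
            "compaction_readahead_size=2MB,max_total_wal_size=1073741824,"
            "writable_file_max_buffer_size=0")
    parsed = parse_rocksdb_options(opts)
    assert parsed["write_buffer_size"] == "16777216"  # matches the dumps above

Every value here lines up with the per-column-family dumps earlier in the log (write_buffer_size 16777216, max_bytes_for_level_multiplier 8.000000, and compaction_readahead_size 2MB printed later as 2097152).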
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 03 01:19:41 compute-0 ceph-osd[206633]: freelist init
Dec 03 01:19:41 compute-0 ceph-osd[206633]: freelist _read_cfg
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
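The allocator line can be cross-checked directly from its hex fields; a short arithmetic sketch (Python; values transcribed from the _init_alloc line above):

    capacity = 0x4ffc00000   # allocator capacity
    free     = 0x4ffbfd000   # free space at init
    block    = 0x1000        # 4 KiB allocation block size
    print(capacity / 2**30)           # 19.996..., logged as "20 GiB"
    print(capacity - free)            # 12288 bytes already allocated
    print((capacity - free) // block) # i.e. 3 of the 4 KiB blocks

So at mount time essentially the whole device is free, which is why the reported fragmentation is on the order of 1.9e-07.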
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 03 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bluefs umount
Dec 03 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) close
Dec 03 01:19:41 compute-0 systemd[1]: libpod-conmon-34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31.scope: Deactivated successfully.
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluefs mount
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluefs mount shared_bdev_used = 4718592
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
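The db_paths size can be derived from the device size opened just above: it works out to exactly 95% of the raw block device (presumably a reserved-headroom ratio; this log does not name the knob), and the bluefs usage is an exact multiple of its 64 KiB allocation unit. A small arithmetic sketch (Python):

    device = 21470642176                 # bdev open size (0x4ffc00000)
    print(int(device * 0.95))            # 20397110067, the db and db.slow sizes
    used  = 4718592                      # bluefs mount shared_bdev_used
    block = 0x10000                      # bluefs _init_alloc block size
    print(used // block, used / 2**20)   # 72 blocks, 4.5 MiB

Both db and db.slow point at the same shared 20 GiB device here, so the identical size figures are expected for this single-device OSD layout.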
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Git sha 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DB SUMMARY
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DB Session ID:  CYHBGYLFJSJZ0MXF1HD0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                     Options.env: 0x55cd95a8c230
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                Options.info_log: 0x55cd94af4540
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                 Options.wal_dir: db.wal
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.write_buffer_manager: 0x55cd959fc460
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.row_cache: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                              Options.wal_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.wal_compression: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_jobs: 4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Compression algorithms supported:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kZSTD supported: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DMutex implementation: pthread_mutex_t
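The capability block above pairs with the options actually in use; a tiny sketch (Python; flags transcribed from the "Compression algorithms supported" lines) that filters the codecs compiled into this RocksDB build:

    # Supported-compression flags, transcribed from the log above.
    supported = {
        "kZSTD": 0, "kXpressCompression": 0, "kBZip2Compression": 0,
        "kZSTDNotFinalCompression": 0, "kLZ4Compression": 1,
        "kZlibCompression": 1, "kLZ4HCCompression": 1, "kSnappyCompression": 1,
    }
    print([name for name, ok in supported.items() if ok])
    # ['kLZ4Compression', 'kZlibCompression', 'kLZ4HCCompression', 'kSnappyCompression']

Options.compression: LZ4 in the dumps falls inside this set; the zstd-related option lines (zstd_max_train_bytes, use_zstd_dict_trainer) are printed regardless, even though kZSTD reports 0 in this build.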
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae11f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4300)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae1090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4300)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae1090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4300)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55cd94ae1090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fb464fcb-4fed-4245-84a5-bd0adda5e152
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782153478, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782159232, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724782, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb464fcb-4fed-4245-84a5-bd0adda5e152", "db_session_id": "CYHBGYLFJSJZ0MXF1HD0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782164294, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724782, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb464fcb-4fed-4245-84a5-bd0adda5e152", "db_session_id": "CYHBGYLFJSJZ0MXF1HD0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782168746, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724782, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb464fcb-4fed-4245-84a5-bd0adda5e152", "db_session_id": "CYHBGYLFJSJZ0MXF1HD0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782171037, "job": 1, "event": "recovery_finished"}
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 03 01:19:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cd94c4e000
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DB pointer 0x55cd959e1a00
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 03 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 03 01:19:42 compute-0 ceph-osd[206633]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 03 01:19:42 compute-0 ceph-osd[206633]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 03 01:19:42 compute-0 ceph-osd[206633]: _get_class not permitted to load lua
Dec 03 01:19:42 compute-0 ceph-osd[206633]: _get_class not permitted to load sdk
Dec 03 01:19:42 compute-0 ceph-osd[206633]: _get_class not permitted to load test_remote_reads
Dec 03 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 03 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 03 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 03 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 03 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 load_pgs
Dec 03 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 load_pgs opened 0 pgs
Dec 03 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 log_to_monitors true
Dec 03 01:19:42 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0[206610]: 2025-12-03T01:19:42.210+0000 7f2a29dc9740 -1 osd.0 0 log_to_monitors true
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Dec 03 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 03 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.27662713 +0000 UTC m=+0.079790569 container create cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.24232513 +0000 UTC m=+0.045488629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:42 compute-0 systemd[1]: Started libpod-conmon-cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5.scope.
Dec 03 01:19:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.437874609 +0000 UTC m=+0.241038088 container init cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.472518508 +0000 UTC m=+0.275681937 container start cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.479261061 +0000 UTC m=+0.282424550 container attach cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:42 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 03 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 03 01:19:42 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec 03 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:42 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:42 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:42 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:43 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test[207299]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 03 01:19:43 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test[207299]:                             [--no-systemd] [--no-tmpfs]
Dec 03 01:19:43 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test[207299]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 03 01:19:43 compute-0 systemd[1]: libpod-cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5.scope: Deactivated successfully.
Dec 03 01:19:43 compute-0 podman[207252]: 2025-12-03 01:19:43.209897933 +0000 UTC m=+1.013061372 container died cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:19:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 03 01:19:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 03 01:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97-merged.mount: Deactivated successfully.
Dec 03 01:19:43 compute-0 podman[207252]: 2025-12-03 01:19:43.309038558 +0000 UTC m=+1.112201967 container remove cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:19:43 compute-0 systemd[1]: libpod-conmon-cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5.scope: Deactivated successfully.
Dec 03 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 03 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 03 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 03 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 done with init, starting boot process
Dec 03 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 start_boot
Dec 03 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 03 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 03 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 03 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 03 01:19:43 compute-0 ceph-osd[206633]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 03 01:19:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 03 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec 03 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mon[192821]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:43 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 03 01:19:43 compute-0 ceph-mon[192821]: osdmap e7: 3 total, 0 up, 3 in
Dec 03 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:43 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:43 compute-0 systemd[1]: Reloading.
Dec 03 01:19:43 compute-0 systemd-rc-local-generator[207356]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:43 compute-0 systemd-sysv-generator[207359]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:44 compute-0 systemd[1]: Reloading.
Dec 03 01:19:44 compute-0 systemd-rc-local-generator[207400]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:44 compute-0 systemd-sysv-generator[207404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:44 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec 03 01:19:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:44 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:44 compute-0 systemd[1]: Starting Ceph osd.1 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:19:44 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 03 01:19:44 compute-0 ceph-mon[192821]: osdmap e8: 3 total, 0 up, 3 in
Dec 03 01:19:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.154898512 +0000 UTC m=+0.101623159 container create 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.109020174 +0000 UTC m=+0.055744881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.308028172 +0000 UTC m=+0.254752879 container init 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.328711923 +0000 UTC m=+0.275436570 container start 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.353790376 +0000 UTC m=+0.300515023 container attach 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:19:45 compute-0 sshd-session[206028]: Connection closed by authenticating user root 193.32.162.157 port 45312 [preauth]
Dec 03 01:19:45 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec 03 01:19:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:45 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:45 compute-0 ceph-mon[192821]: purged_snaps scrub starts
Dec 03 01:19:45 compute-0 ceph-mon[192821]: purged_snaps scrub ok
Dec 03 01:19:45 compute-0 ceph-mon[192821]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:45 compute-0 podman[207474]: 2025-12-03 01:19:45.840163499 +0000 UTC m=+0.106226387 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:19:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 03 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 03 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 03 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 03 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 03 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 03 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 03 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 03 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 03 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 03 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: --> ceph-volume raw activate successful for osd ID: 1
Dec 03 01:19:46 compute-0 bash[207454]: --> ceph-volume raw activate successful for osd ID: 1
Dec 03 01:19:46 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec 03 01:19:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:46 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:46 compute-0 systemd[1]: libpod-3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b.scope: Deactivated successfully.
Dec 03 01:19:46 compute-0 podman[207454]: 2025-12-03 01:19:46.726515028 +0000 UTC m=+1.673239675 container died 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 01:19:46 compute-0 systemd[1]: libpod-3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b.scope: Consumed 1.423s CPU time.
Dec 03 01:19:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04-merged.mount: Deactivated successfully.
Dec 03 01:19:46 compute-0 podman[207454]: 2025-12-03 01:19:46.874486536 +0000 UTC m=+1.821211153 container remove 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.336305939 +0000 UTC m=+0.117342923 container create a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.290519074 +0000 UTC m=+0.071556108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.507949794 +0000 UTC m=+0.288986788 container init a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.525500644 +0000 UTC m=+0.306537628 container start a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:47 compute-0 bash[207686]: a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a
Dec 03 01:19:47 compute-0 systemd[1]: Started Ceph osd.1 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:19:47 compute-0 ceph-osd[207705]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:19:47 compute-0 ceph-osd[207705]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 03 01:19:47 compute-0 ceph-osd[207705]: pidfile_write: ignore empty --pid-file
Dec 03 01:19:47 compute-0 sudo[206758]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) close
Dec 03 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) close
Dec 03 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Dec 03 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 03 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:47 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 03 01:19:47 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 03 01:19:47 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec 03 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:47 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:47 compute-0 sudo[207718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:47 compute-0 sudo[207718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:47 compute-0 sudo[207718]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:47 compute-0 ceph-mon[192821]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 03 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:47 compute-0 ceph-osd[207705]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 03 01:19:47 compute-0 sudo[207745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:47 compute-0 ceph-osd[207705]: load: jerasure load: lrc 
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 03 01:19:47 compute-0 sudo[207745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:47 compute-0 sudo[207745]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 03 01:19:48 compute-0 sudo[207775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:48 compute-0 sudo[207775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:48 compute-0 sudo[207775]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:48 compute-0 sudo[207804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:19:48 compute-0 sudo[207804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.295 iops: 4427.503 elapsed_sec: 0.678
Dec 03 01:19:48 compute-0 ceph-osd[207705]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 03 01:19:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [WRN] : OSD bench result of 4427.503498 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 0 waiting for initial osdmap
Dec 03 01:19:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0[206610]: 2025-12-03T01:19:48.215+0000 7f2a25d49640 -1 osd.0 0 waiting for initial osdmap
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs mount
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs mount shared_bdev_used = 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Git sha 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB SUMMARY
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB Session ID:  KYPQACPV34ZGSGC2ZUZA
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                     Options.env: 0x55f0a4b73d50
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                Options.info_log: 0x55f0a3d707e0
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 set_numa_affinity not setting numa affinity
Dec 03 01:19:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0[206610]: 2025-12-03T01:19:48.250+0000 7f2a21371640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                 Options.wal_dir: db.wal
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.write_buffer_manager: 0x55f0a4c82460
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.row_cache: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.wal_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.wal_compression: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_jobs: 4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compression algorithms supported:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kZSTD supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
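Annotation: with Options.level_compaction_dynamic_level_bytes at 0 in the [m-2] dump above, the per-level byte budgets follow directly from max_bytes_for_level_base (1 GiB), max_bytes_for_level_multiplier (8.0) and the addtl factors of 1; L0 is governed by file count (compaction trigger 8) rather than bytes. A quick Python check of the arithmetic, assuming plain static level sizing:

    BASE = 1073741824    # Options.max_bytes_for_level_base
    MULT = 8.0           # Options.max_bytes_for_level_multiplier
    NUM_LEVELS = 7       # Options.num_levels

    # L1..L6 grow geometrically; the addtl[i] factors are all 1.
    for level in range(1, NUM_LEVELS):
        cap = BASE * MULT ** (level - 1)
        print(f"L{level}: {cap / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB,
    # L5: 4096 GiB, L6: 32768 GiB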
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
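Annotation: the m-* and p-* families all print the same block_cache pointer (0x55f0a3d5d1f0), so the 483183820-byte BinnedLRUCache is one shared cache rather than a per-family allocation, split into 2^4 = 16 shards by num_shard_bits. A small Python check of those figures; the 45%-of-1-GiB reading is an inference from the number, not something the log states:

    capacity = 483183820   # block_cache_options.capacity, bytes
    num_shard_bits = 4     # block_cache_options.num_shard_bits

    shards = 1 << num_shard_bits
    per_shard = capacity // shards
    print(f"{shards} shards, ~{per_shard / 2**20:.1f} MiB each, "
          f"{capacity / 2**20:.0f} MiB total")
    # 16 shards, ~28.8 MiB each, 461 MiB total
    print(capacity == int(0.45 * 2**30))   # True: exactly 45% of 1 GiB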
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
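Annotation: every family in these dumps shares the same throttling thresholds: compaction starts at 8 L0 files, writes are slowed at 20 and stopped at 36, with soft/hard pending-compaction limits of 64 GiB and 256 GiB. A rough Python classifier of the stall state those numbers imply; the authoritative logic lives inside RocksDB, and this sketch only mirrors the logged thresholds:

    L0_COMPACT, L0_SLOW, L0_STOP = 8, 20, 36   # level0_* triggers
    SOFT_PENDING = 68719476736                 # 64 GiB soft limit
    HARD_PENDING = 274877906944                # 256 GiB hard limit

    def write_state(l0_files, pending_bytes):
        if l0_files >= L0_STOP or pending_bytes >= HARD_PENDING:
            return "stopped"
        if l0_files >= L0_SLOW or pending_bytes >= SOFT_PENDING:
            return "slowed"
        if l0_files >= L0_COMPACT:
            return "compacting"
        return "ok"

    print(write_state(22, 0))   # -> slowed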
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
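Annotation: to compare settings across the families dumped here (the [O-0] family that follows reports a different cache pointer with a 536870912-byte capacity), it helps to fold the journal lines back into per-family dictionaries. A hedged Python sketch against the exact line formats seen in this log; journal.txt is a hypothetical text export of this journal, and the indented table_factory sub-options, which lack the Options. prefix, are deliberately skipped:

    import re
    from collections import defaultdict

    CF_RE = re.compile(r"Options for column family \[(.+?)\]")
    OPT_RE = re.compile(r"(Options\.[\w.\[\]]+):\s*(.+?)\s*$")

    def parse(lines):
        # Group "Options.key: value" pairs under the most recent
        # column-family header.
        opts, family = defaultdict(dict), None
        for line in lines:
            m = CF_RE.search(line)
            if m:
                family = m.group(1)
                continue
            m = OPT_RE.search(line)
            if m and family is not None:
                opts[family][m.group(1)] = m.group(2)
        return opts

    with open("journal.txt") as f:   # hypothetical export
        parsed = parse(f)
    print(parsed["p-2"]["Options.write_buffer_size"])   # 16777216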
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70180)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70180)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70180)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
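The per-column-family dumps above all share one sizing scheme: max_bytes_for_level_base = 1073741824 with max_bytes_for_level_multiplier = 8.000000 and level_compaction_dynamic_level_bytes = 0. A minimal C++ sketch (illustration only, not Ceph or RocksDB source) of the static per-level byte targets those values imply:

    // Minimal sketch (illustration only): static level targets implied by
    // max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8,
    // level_compaction_dynamic_level_bytes = 0, as logged above.
    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t base = 1073741824ULL;  // max_bytes_for_level_base (1 GiB)
      const double mult = 8.0;              // max_bytes_for_level_multiplier
      uint64_t target = base;
      for (int level = 1; level < 7; ++level) {  // num_levels: 7; L0 is file-count driven
        std::printf("L%d target: %llu bytes (%.0f GiB)\n", level,
                    (unsigned long long)target,
                    target / (1024.0 * 1024.0 * 1024.0));
        target = (uint64_t)(target * mult);
      }
      return 0;
    }

With these values L1 targets 1 GiB, L2 8 GiB, L3 64 GiB, and so on; L0 compaction is governed by level0_file_num_compaction_trigger = 8 rather than a byte target.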
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
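The twelve column families recovered above (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P) can also be enumerated offline with RocksDB's public API. A minimal sketch, using a hypothetical path; run it only against a copy or a stopped OSD, since the live OSD holds this database open:

    // Minimal sketch (hypothetical path, not Ceph code): enumerate the column
    // families recorded in the MANIFEST, matching the recovery lines above.
    #include <rocksdb/db.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      rocksdb::DBOptions opts;
      std::vector<std::string> cfs;
      rocksdb::Status s =
          rocksdb::DB::ListColumnFamilies(opts, "/path/to/osd/db", &cfs);
      if (!s.ok()) {
        std::cerr << s.ToString() << std::endl;
        return 1;
      }
      for (const auto& name : cfs)
        std::cout << name << std::endl;  // default, m-0..m-2, p-0..p-2, O-0..O-2, L, P
      return 0;
    }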
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 62d22037-8fb2-4da9-b46c-8157fa97fa34
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788351197, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788351501, "job": 1, "event": "recovery_finished"}
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
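The flat option string logged by _open_db is what BlueStore hands to RocksDB (in Ceph it is normally supplied via the bluestore_rocksdb_options configuration value rather than written in code). For readers tracing where the numbers in the dumps above come from, a minimal sketch of the same tuning expressed directly on rocksdb::Options; the field names are the stock RocksDB ones:

    // Minimal sketch (illustration only): the logged option string as plain
    // rocksdb::Options fields. In Ceph this tuning normally arrives via the
    // bluestore_rocksdb_options config string, not hand-written code.
    #include <rocksdb/options.h>

    rocksdb::Options MakeLoggedOptions() {
      rocksdb::Options o;
      o.compression = rocksdb::kLZ4Compression;
      o.max_write_buffer_number = 64;
      o.min_write_buffer_number_to_merge = 6;
      o.compaction_style = rocksdb::kCompactionStyleLevel;
      o.write_buffer_size = 16777216;
      o.max_background_jobs = 4;
      o.level0_file_num_compaction_trigger = 8;
      o.max_bytes_for_level_base = 1073741824;
      o.max_bytes_for_level_multiplier = 8;
      o.compaction_readahead_size = 2 * 1024 * 1024;  // "2MB" in the logged string
      o.max_total_wal_size = 1073741824;
      o.writable_file_max_buffer_size = 0;
      return o;
    }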
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: freelist init
Dec 03 01:19:48 compute-0 ceph-osd[207705]: freelist _read_cfg
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs umount
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) close
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs mount
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluefs mount shared_bdev_used = 4718592
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Git sha 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB SUMMARY
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB Session ID:  KYPQACPV34ZGSGC2ZUZB
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                     Options.env: 0x55f0a4d12230
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                Options.info_log: 0x55f0a3d70540
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                 Options.wal_dir: db.wal
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.write_buffer_manager: 0x55f0a4c826e0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.row_cache: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.wal_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.wal_compression: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_jobs: 4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compression algorithms supported:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kZSTD supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DMutex implementation: pthread_mutex_t
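The capability lines above show this RocksDB build compiled without ZSTD, BZip2, and Xpress but with LZ4/LZ4HC, Zlib, and Snappy, consistent with Options.compression: LZ4 elsewhere in the log. A minimal sketch (illustration only, not Ceph code) of querying the same information programmatically:

    // Minimal sketch (illustration only): query compiled-in compression support
    // at runtime; this build reports LZ4/LZ4HC, Zlib, and Snappy but no ZSTD.
    #include <rocksdb/convenience.h>
    #include <iostream>

    int main() {
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions())
        std::cout << static_cast<int>(t) << std::endl;  // e.g. kLZ4Compression == 4
      return 0;
    }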
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
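The per-column-family settings dumped above are plain RocksDB options, so they can be reproduced almost one-for-one with the stock RocksDB C++ API. The sketch below maps the logged values onto rocksdb::ColumnFamilyOptions purely as an illustration of what the OSD's option string expands to; in Ceph itself these come from the bluestore_rocksdb_options config string, not from code like this.

    // Minimal sketch, assuming stock RocksDB (rocksdb/options.h).
    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeOsdLikeCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16ull << 20;               // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64ull << 20;           // 67108864
      cf.max_bytes_for_level_base = 1ull << 30;         // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                                 // 30 days, in seconds
      cf.force_consistency_checks = true;
      return cf;
    }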
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
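The indented table_factory block repeated under each column family is a rocksdb::BlockBasedTableOptions dump. A stand-alone equivalent, again assuming stock RocksDB, is sketched below. Two caveats: BinnedLRUCache is Ceph's own cache implementation, so plain NewLRUCache is used here as the nearest stand-in, and the 10 bits/key for the bloom filter is an assumed, typical value (the log records only "filter_policy: bloomfilter").

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    void ApplyLoggedTableOptions(rocksdb::ColumnFamilyOptions& cf) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.block_size_deviation = 10;
      t.block_restart_interval = 16;
      t.index_block_restart_interval = 1;
      t.metadata_block_size = 4096;
      t.cache_index_and_filter_blocks = true;     // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;
      t.whole_key_filtering = true;
      t.format_version = 5;
      t.max_auto_readahead_size = 262144;
      t.initial_auto_readahead_size = 8192;
      // BinnedLRUCache is Ceph-internal; the stock sharded LRU is the closest
      // analogue (capacity 483183820 bytes ~ 460 MiB, 2^4 = 16 shards).
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      // Assumed bits/key; the log does not record this parameter.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
    }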
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
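One notable non-default in every dump is the table_properties_collectors line: CompactOnDeletionCollector flags an SST file for compaction once a sliding window of its entries contains enough tombstones, which keeps iterators from slowing down after large deletions. It ships in stock RocksDB's utilities, so the logged parameters map directly:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    void AddDeletionTriggeredCompaction(rocksdb::ColumnFamilyOptions& cf) {
      // "Sliding window size = 32768  Deletion trigger = 16384  Deletion ratio = 0"
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }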
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
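The identical dump repeats once per column family because BlueStore shards certain key prefixes across several CFs; the m-0..m-2 and p-* names look like its sharded omap prefixes, with the layout coming from Ceph's bluestore_rocksdb_cfs option. Opening such a DB by hand requires naming every existing column family in DB::Open, roughly as below (the path is a placeholder, the CF list here is deliberately truncated, and default options stand in for the logged ones):

    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> names = {rocksdb::kDefaultColumnFamilyName,
                                        "m-0", "m-1", "m-2", "p-0", "p-1"};
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
      for (const auto& n : names)
        cfs.emplace_back(n, rocksdb::ColumnFamilyOptions());
      rocksdb::DBOptions dbopts;
      dbopts.create_if_missing = true;           // so the sketch runs standalone
      dbopts.create_missing_column_families = true;
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      // Placeholder path; an OSD's actual DB lives under its BlueStore directory.
      rocksdb::Status s = rocksdb::DB::Open(dbopts, "/tmp/osd-kv-sketch",
                                            cfs, &handles, &db);
      return s.ok() ? 0 : 1;
    }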
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
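The dumps above use classic leveled compaction with level_compaction_dynamic_level_bytes = 0, so the per-level capacity targets follow the L(n+1) = L(n) * multiplier * addtl[n] recurrence. As a minimal sketch using only the values printed above (plain arithmetic, not Ceph or RocksDB source):

    # Level-capacity targets implied by the dumped options, assuming RocksDB's
    # non-dynamic leveled recurrence L(n+1) = L(n) * multiplier * addtl[n].
    base = 1073741824                # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0                 # Options.max_bytes_for_level_multiplier
    addtl = [1, 1, 1, 1, 1, 1, 1]    # Options.max_bytes_for_level_multiplier_addtl
    num_levels = 7                   # Options.num_levels
    target = base
    for level in range(1, num_levels):
        print(f"L{level} target: {target / 2**30:.0f} GiB")
        target = int(target * multiplier * addtl[level - 1])

With the 8x fan-out this yields 1, 8, 64, 512, 4096 and 32768 GiB for L1 through L6; the deeper levels far exceed any single OSD's metadata footprint, so in practice only the first few levels ever fill.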
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
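The write path for each sharded column family is bounded by the memtable settings repeated in every dump: 16 MiB memtables, merged six at a time at flush, with up to 64 allowed in memory before writes stall. A quick back-of-envelope check on the printed values (a sketch, not a claim about BlueStore internals):

    # Per-column-family memtable budget from the printed options.
    write_buffer_size = 16777216   # Options.write_buffer_size (16 MiB)
    min_merge = 6                  # Options.min_write_buffer_number_to_merge
    max_buffers = 64               # Options.max_write_buffer_number
    print(f"bytes merged per flush: {write_buffer_size * min_merge / 2**20:.0f} MiB")
    print(f"write-stall ceiling   : {write_buffer_size * max_buffers / 2**20:.0f} MiB")

So each family flushes roughly 96 MiB of memtable data into an L0 file at a time, and can accumulate up to 1 GiB of unflushed memtables before the write-buffer limit is reached.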
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d43f00)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
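One detail worth pulling out of these otherwise identical dumps is the block_cache pointer: the p-* families all print 0x55f0a3d5d1f0 (capacity 483183820 bytes, about 461 MiB), while the O-* families print 0x55f0a3d5d090 (536870912 bytes, 512 MiB). The shared pointers mean two BinnedLRUCache instances are shared across family groups, not one cache per family. A small sketch that confirms the grouping from a saved journal excerpt; the file argument and regexes are assumptions about this log's layout only:

    # Group column families by block_cache pointer in a saved journal excerpt.
    import re
    import sys
    from collections import defaultdict

    cf_re = re.compile(r"Options for column family \[([^\]]+)\]")
    bc_re = re.compile(r"block_cache: (0x[0-9a-f]+)")
    caches, current = defaultdict(list), None
    for line in open(sys.argv[1], encoding="utf-8", errors="replace"):
        if (m := cf_re.search(line)):
            current = m.group(1)               # remember the family being dumped
        elif (m := bc_re.search(line)) and current:
            caches[m.group(1)].append(current) # attribute the cache pointer to it
    for ptr, families in caches.items():
        print(ptr, "->", ", ".join(families))

Run against this boot's excerpt it should print the two pointers with their family lists, one covering the p-* shards and one covering the O-* shards.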
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d43f00)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
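The compaction ceiling repeated in every dump, max_compaction_bytes = 1677721600, is exactly 25 times target_file_size_base (67108864), which matches the factor RocksDB derives by default when the option is left unset. The check below is plain arithmetic on the printed values, not a claim about how this deployment configured it:

    # Relation between the two printed limits.
    target_file_size_base = 67108864     # Options.target_file_size_base (64 MiB)
    max_compaction_bytes = 1677721600    # Options.max_compaction_bytes
    print(max_compaction_bytes // target_file_size_base)   # -> 25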
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d43f00)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55f0a3d5d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 62d22037-8fb2-4da9-b46c-8157fa97fa34
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788569637, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788578430, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724788, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62d22037-8fb2-4da9-b46c-8157fa97fa34", "db_session_id": "KYPQACPV34ZGSGC2ZUZB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788586798, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724788, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62d22037-8fb2-4da9-b46c-8157fa97fa34", "db_session_id": "KYPQACPV34ZGSGC2ZUZB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788592647, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724788, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62d22037-8fb2-4da9-b46c-8157fa97fa34", "db_session_id": "KYPQACPV34ZGSGC2ZUZB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788595749, "job": 1, "event": "recovery_finished"}
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 03 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.622403746 +0000 UTC m=+0.055366632 container create 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f0a4d3bc00
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB pointer 0x55f0a4c5da00
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 03 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
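
The block above is RocksDB's periodic statistics dump, relayed through the ceph-osd log by BlueStore's embedded RocksDB; each [O-1], [O-2], [L], [P] tag names one column family of the sharded keyspace, and every family gets a per-level compaction table, a per-priority table, blob/stall counters, and block-cache stats. A minimal sketch of pulling the write-amplification figure out of such a dump, assuming it was saved to rocksdb-stats.txt (hypothetical filename) and assuming the exact column layout shown above, where "Size" splits into two tokens ("1.25" "KB") so W-Amp lands in field 12:

    # Print each column family's cumulative write amplification from the
    # "Sum" row of its per-level table (the Priority tables have no Sum
    # row, so they are skipped automatically).
    awk '/Compaction Stats \[/ {cf=$4}
         $1=="Sum"            {print cf, "W-Amp:", $12}' rocksdb-stats.txt

For the dump above this prints 1.0 for [O-2] and 0.0 for [L] and [P], matching the tables.
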
Dec 03 01:19:48 compute-0 ceph-osd[207705]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 03 01:19:48 compute-0 ceph-osd[207705]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 03 01:19:48 compute-0 ceph-osd[207705]: _get_class not permitted to load lua
Dec 03 01:19:48 compute-0 ceph-osd[207705]: _get_class not permitted to load sdk
Dec 03 01:19:48 compute-0 ceph-osd[207705]: _get_class not permitted to load test_remote_reads
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 load_pgs
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 load_pgs opened 0 pgs
Dec 03 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 log_to_monitors true
Dec 03 01:19:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1[207701]: 2025-12-03T01:19:48.645+0000 7fb70d768740 -1 osd.1 0 log_to_monitors true
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
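
At boot each OSD registers its device class with the monitors; the JSON above is the wire form of that mon command, dispatched by osd.1 itself. Roughly the same operation from an admin shell (equivalent CLI, not taken from this log):

    # Tag OSD 1 as hdd-class, then confirm the CLASS column in the CRUSH tree.
    ceph osd crush set-device-class hdd osd.1
    ceph osd tree
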
Dec 03 01:19:48 compute-0 systemd[1]: Started libpod-conmon-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope.
Dec 03 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.598830401 +0000 UTC m=+0.031793287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:48 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:48 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 03 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.715611948 +0000 UTC m=+0.148574824 container init 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.72349298 +0000 UTC m=+0.156455836 container start 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.727434072 +0000 UTC m=+0.160396938 container attach 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:19:48 compute-0 festive_carver[208293]: 167 167
Dec 03 01:19:48 compute-0 systemd[1]: libpod-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope: Deactivated successfully.
Dec 03 01:19:48 compute-0 conmon[208293]: conmon 3ff0cc9cfa6c1234c3e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope/container/memory.events
Dec 03 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.734596185 +0000 UTC m=+0.167559061 container died 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-683a05fa03e3c1f808064764e514ea91c20a591a85985a0103a66ba0470360e6-merged.mount: Deactivated successfully.
Dec 03 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.787666248 +0000 UTC m=+0.220629124 container remove 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 01:19:48 compute-0 systemd[1]: libpod-conmon-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope: Deactivated successfully.
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec 03 01:19:48 compute-0 ceph-mon[192821]: Deploying daemon osd.2 on compute-0
Dec 03 01:19:48 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 03 01:19:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:48 compute-0 ceph-osd[206633]: osd.0 9 state: booting -> active
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730] boot
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
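
create-or-move then places osd.1 under host=compute-0 within root=default. CRUSH weight is conventionally the device capacity in TiB, and the 0.0195 here is consistent with the ~20 GiB of space the pgmap reports later in this log. A hedged CLI equivalent of the command above:

    # Weight 0.0195 TiB ~= 20 GiB backing device.
    ceph osd crush create-or-move osd.1 0.0195 host=compute-0 root=default
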
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:48 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:48 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
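
The "(2) No such file or directory" results are ENOENT from the monitor, not a filesystem error: the mgr is polling metadata for all three provisioned OSD ids, but osd.1 and osd.2 have not finished booting and pushed their metadata yet, so the lookups fail transiently and clear on their own. To confirm once boot settles (standard CLI, not from this log):

    # Returns the OSD's registered metadata once it has booted;
    # ENOENT before that, exactly as the mgr sees above.
    ceph osd metadata 1
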
Dec 03 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.154755459 +0000 UTC m=+0.085648289 container create 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.123991419 +0000 UTC m=+0.054884279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:49 compute-0 systemd[1]: Started libpod-conmon-82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571.scope.
Dec 03 01:19:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
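
These kernel lines are informational: each xfs filesystem bind-mounted into the container lacks the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff). Checking, and upgrading in place on a sufficiently new xfsprogs, might look like the sketch below (device path hypothetical; the upgrade requires the filesystem to be unmounted):

    # bigtime=0 means timestamps are limited to 2038.
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'
    # xfsprogs >= 5.15 can enable the feature offline:
    xfs_admin -O bigtime=1 /dev/vda2
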
Dec 03 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.295742457 +0000 UTC m=+0.226635357 container init 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.308271369 +0000 UTC m=+0.239164179 container start 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.31338341 +0000 UTC m=+0.244276250 container attach 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:19:49 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 03 01:19:49 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 03 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 03 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 03 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec 03 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 done with init, starting boot process
Dec 03 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 start_boot
Dec 03 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 03 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 03 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 03 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 03 01:19:49 compute-0 ceph-osd[207705]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 03 01:19:49 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec 03 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:49 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:49 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:49 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec 03 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:49 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:49 compute-0 ceph-mon[192821]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 03 01:19:49 compute-0 ceph-mon[192821]: OSD bench result of 4427.503498 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
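
osd.0's startup self-benchmark (the "bench count 12288000 bsize 4 KiB" run above) measured ~4427 IOPS, far outside the 50-500 IOPS sanity window for an hdd-class device, so mclock keeps the default 315 IOPS capacity and asks the operator to measure externally and override. Following that recommendation might look like this (the value shown is the bench figure from this log; substitute one measured with fio as the message suggests):

    # Pin the mclock IOPS capacity for this OSD and read it back.
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 4427.50
    ceph config get osd.0 osd_mclock_max_capacity_iops_hdd
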
Dec 03 01:19:49 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 03 01:19:49 compute-0 ceph-mon[192821]: osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730] boot
Dec 03 01:19:49 compute-0 ceph-mon[192821]: osdmap e9: 3 total, 1 up, 3 in
Dec 03 01:19:49 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 03 01:19:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 03 01:19:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test[208340]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 03 01:19:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test[208340]:                             [--no-systemd] [--no-tmpfs]
Dec 03 01:19:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test[208340]: ceph-volume activate: error: unrecognized arguments: --bad-option
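
The "-activate-test" container is a deliberate probe: cephadm invokes ceph-volume activate with a bogus --bad-option so that the resulting usage text reveals which flags this ceph-volume build supports before it composes the real activation call (which follows at 01:19:52). Given the flags listed in the usage text above, the real call presumably takes the shape below; the OSD fsid is not shown in this log, so it stays a placeholder:

    ceph-volume activate --osd-id 2 --osd-uuid <osd-fsid> --no-systemd --no-tmpfs
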
Dec 03 01:19:49 compute-0 systemd[1]: libpod-82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571.scope: Deactivated successfully.
Dec 03 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.987901972 +0000 UTC m=+0.918794862 container died 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344-merged.mount: Deactivated successfully.
Dec 03 01:19:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 03 01:19:50 compute-0 podman[208325]: 2025-12-03 01:19:50.201800472 +0000 UTC m=+1.132693312 container remove 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:19:50 compute-0 systemd[1]: libpod-conmon-82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571.scope: Deactivated successfully.
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:50 compute-0 ceph-mgr[193109]: [devicehealth INFO root] creating mgr pool
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Dec 03 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
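
Here the devicehealth mgr module is creating its internal metadata pool. Pool names beginning with "." are reserved, hence the yes_i_really_mean_it in the command. A hedged CLI equivalent of the mon command above:

    # Create the single-PG internal .mgr pool; the override flag is
    # needed for the reserved dot-prefixed name.
    ceph osd pool create .mgr 1 1 --yes-i-really-mean-it
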
Dec 03 01:19:50 compute-0 systemd[1]: Reloading.
Dec 03 01:19:50 compute-0 systemd-sysv-generator[208412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:50 compute-0 systemd-rc-local-generator[208409]: /etc/rc.d/rc.local is not marked executable, skipping.
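
Both generator messages recur on every "Reloading." in this log. The sysv warning would go away with a native unit for the legacy initscript; a minimal hypothetical wrapper, assuming the script follows the usual initscript start/stop conventions:

    # /etc/systemd/system/network.service (hypothetical sketch)
    [Unit]
    Description=Legacy network initscript wrapper
    After=network-pre.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/network start
    ExecStop=/etc/rc.d/init.d/network stop

    [Install]
    WantedBy=multi-user.target
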
Dec 03 01:19:50 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:50 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 03 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 03 01:19:50 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
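
The long integers above are CEPH_FEATURES bitmasks carried by the new crush map; each daemon recomputes the feature bits it will require of peers ("adjusting msgr requires"). The same information in named form on a live cluster:

    # Named feature/release requirements recorded in the osdmap,
    # plus the feature bits of currently connected clients and daemons.
    ceph osd dump | grep require
    ceph features
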
Dec 03 01:19:50 compute-0 ceph-mon[192821]: osdmap e10: 3 total, 1 up, 3 in
Dec 03 01:19:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 03 01:19:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:50 compute-0 sshd-session[208380]: Received disconnect from 34.66.72.251 port 46762:11: Bye Bye [preauth]
Dec 03 01:19:50 compute-0 sshd-session[208380]: Disconnected from authenticating user root 34.66.72.251 port 46762 [preauth]
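
Unrelated to the Ceph bring-up: an unauthenticated root login attempt from 34.66.72.251 is dropped at preauth, so sshd on this node is reachable from outside. If root password logins are not needed, the usual hardening is a one-line sshd_config change followed by a reload:

    # /etc/ssh/sshd_config
    PermitRootLogin prohibit-password
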
Dec 03 01:19:50 compute-0 ceph-osd[206633]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 03 01:19:50 compute-0 ceph-osd[206633]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 03 01:19:50 compute-0 ceph-osd[206633]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 03 01:19:50 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:50 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:50 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Dec 03 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 03 01:19:51 compute-0 systemd[1]: Reloading.
Dec 03 01:19:51 compute-0 systemd-rc-local-generator[208450]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:19:51 compute-0 systemd-sysv-generator[208453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:19:51 compute-0 systemd[1]: Starting Ceph osd.2 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:19:51 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec 03 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:51 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 03 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 03 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec 03 01:19:51 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec 03 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:51 compute-0 ceph-mon[192821]: purged_snaps scrub starts
Dec 03 01:19:51 compute-0 ceph-mon[192821]: purged_snaps scrub ok
Dec 03 01:19:51 compute-0 ceph-mon[192821]: pgmap v42: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 03 01:19:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 03 01:19:51 compute-0 ceph-mon[192821]: osdmap e11: 3 total, 1 up, 3 in
Dec 03 01:19:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 03 01:19:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:51 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:51 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.135395718 +0000 UTC m=+0.115335791 container create c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.082338576 +0000 UTC m=+0.062278699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 03 01:19:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.350811487 +0000 UTC m=+0.330751530 container init c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.367221698 +0000 UTC m=+0.347161741 container start c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.387396696 +0000 UTC m=+0.367336769 container attach c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:19:52 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec 03 01:19:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:52 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 03 01:19:52 compute-0 ceph-mon[192821]: osdmap e12: 3 total, 1 up, 3 in
Dec 03 01:19:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 03 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 03 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec 03 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec 03 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec 03 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec 03 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 03 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 03 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 03 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 03 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: --> ceph-volume raw activate successful for osd ID: 2
Dec 03 01:19:53 compute-0 bash[208502]: --> ceph-volume raw activate successful for osd ID: 2
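
The interleaved pairs above are the same ceph-volume output reaching the journal twice, once via the container's log tag and once via the wrapping bash unit. Condensed, raw-mode activation of osd.2 is just this sequence (paths taken from the log; a sketch of what ran, not cephadm's exact code path):

    chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
    ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 \
        --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
    chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
    chown -R ceph:ceph /dev/dm-2
    ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-2

prime-osd-dir reads the bluestore label off the device and regenerates the small files in the OSD directory, which is why a fresh tmpfs-backed directory can be rebuilt on every start.
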
Dec 03 01:19:53 compute-0 systemd[1]: libpod-c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467.scope: Deactivated successfully.
Dec 03 01:19:53 compute-0 podman[208502]: 2025-12-03 01:19:53.711214372 +0000 UTC m=+1.691154435 container died c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:19:53 compute-0 systemd[1]: libpod-c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467.scope: Consumed 1.356s CPU time.
Dec 03 01:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8-merged.mount: Deactivated successfully.
Dec 03 01:19:53 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec 03 01:19:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:53 compute-0 podman[208502]: 2025-12-03 01:19:53.858225836 +0000 UTC m=+1.838165879 container remove c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:19:53 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:54 compute-0 ceph-mon[192821]: pgmap v45: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 03 01:19:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 03 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.205722185 +0000 UTC m=+0.076406342 container create 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.173403805 +0000 UTC m=+0.044087972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.327152772 +0000 UTC m=+0.197836929 container init 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.341485179 +0000 UTC m=+0.212169336 container start 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:19:54 compute-0 bash[208712]: 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc
Dec 03 01:19:54 compute-0 systemd[1]: Started Ceph osd.2 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:19:54 compute-0 ceph-osd[208731]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:19:54 compute-0 ceph-osd[208731]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 03 01:19:54 compute-0 ceph-osd[208731]: pidfile_write: ignore empty --pid-file
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) close
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.459 iops: 4725.506 elapsed_sec: 0.635
Dec 03 01:19:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [WRN] : OSD bench result of 4725.505655 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 03 01:19:54 compute-0 sudo[207804]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 0 waiting for initial osdmap
Dec 03 01:19:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1[207701]: 2025-12-03T01:19:54.400+0000 7fb7096e8640 -1 osd.1 0 waiting for initial osdmap
Dec 03 01:19:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Dec 03 01:19:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 set_numa_affinity not setting numa affinity
Dec 03 01:19:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1[207701]: 2025-12-03T01:19:54.436+0000 7fb704d10640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 03 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec 03 01:19:54 compute-0 sudo[208744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:54 compute-0 sudo[208744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:54 compute-0 sudo[208744]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:54 compute-0 sudo[208771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:54 compute-0 sudo[208771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:54 compute-0 sudo[208771]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) close
Dec 03 01:19:54 compute-0 sudo[208796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:54 compute-0 sudo[208796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:54 compute-0 sudo[208796]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:54 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec 03 01:19:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:54 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 03 01:19:54 compute-0 sudo[208822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:19:54 compute-0 sudo[208822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:54 compute-0 ceph-osd[208731]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 03 01:19:54 compute-0 ceph-osd[208731]: load: jerasure load: lrc 
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 03 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:19:55 compute-0 ceph-osd[207705]: osd.1 12 tick checking mon for new map
Dec 03 01:19:55 compute-0 ceph-osd[208731]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs mount
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs mount shared_bdev_used = 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 03 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.435456567 +0000 UTC m=+0.055240669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Git sha 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB SUMMARY
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB Session ID:  K9B5HJRG0MH6OVU3TJ45
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                     Options.env: 0x558b83023d50
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                Options.info_log: 0x558b82220840
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                 Options.wal_dir: db.wal
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.write_buffer_manager: 0x558b83128460
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.row_cache: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.wal_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.wal_compression: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_jobs: 4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compression algorithms supported:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kZSTD supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
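The table_factory dump above shows every column family sharing one BinnedLRUCache (0x558b8220d1f0). As a quick sanity check on the sizing, the sketch below recomputes the shard layout purely from the numbers printed in the dump; it calls no Ceph or RocksDB API, and the capacity looks like 0.45 * 2**30, presumably a bluestore cache ratio applied to a 1 GiB budget (an inference, not something the log states).

    # Shard layout of the BinnedLRUCache, from the logged block_cache_options.
    capacity = 483_183_820        # bytes, copied from the dump above
    num_shard_bits = 4

    num_shards = 2 ** num_shard_bits
    print(f"capacity  : {capacity / 2**20:.1f} MiB")               # ~460.8 MiB
    print(f"shards    : {num_shards}")                             # 16
    print(f"per shard : {capacity / num_shards / 2**20:.1f} MiB")  # ~28.8 MiB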
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
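The write-buffer numbers above imply a fairly large memtable budget for this column family. The arithmetic below uses only the logged values (no RocksDB calls); whether the worst case is ever reached depends on flushes and compaction keeping up.

    # Memtable budget implied by the logged options.
    write_buffer_size = 16_777_216           # 16 MiB per memtable
    max_write_buffer_number = 64             # active + immutable memtables
    min_write_buffer_number_to_merge = 6     # memtables merged per flush

    worst_case = write_buffer_size * max_write_buffer_number
    flush_unit = write_buffer_size * min_write_buffer_number_to_merge
    print(f"worst-case memtable RAM : {worst_case / 2**20:.0f} MiB")  # 1024 MiB
    print(f"typical flush/L0 unit   : {flush_unit / 2**20:.0f} MiB")  # ~96 MiB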
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
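With level_compaction_dynamic_level_bytes disabled above, the per-level targets follow directly from the base and multiplier. The sketch below derives them from the logged values only; it also checks that max_compaction_bytes is exactly 25x the 64 MiB target file size, which matches RocksDB's default ratio.

    # Static level targets from the logged options.
    base, multiplier, num_levels = 1_073_741_824, 8.0, 7
    size = base
    for level in range(1, num_levels):
        print(f"L{level} target: {size / 2**30:,.0f} GiB")
        size *= multiplier
    # L1 1, L2 8, L3 64, L4 512, L5 4,096, L6 32,768 GiB

    print(1_677_721_600 / 67_108_864)  # max_compaction_bytes / target_file_size_base = 25.0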
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
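The CompactOnDeletionCollector line above encodes a tombstone-driven compaction trigger: as I read the parameters, a file is marked for compaction once 16384 deletions fall inside any sliding window of 32768 entries, while the ratio-based trigger is off (Deletion ratio = 0). A one-line restatement of that threshold:

    # Threshold encoded by (window=32768, trigger=16384) in the line above.
    window, trigger = 32768, 16384
    print(f"compact when >= {trigger / window:.0%} of any {window}-entry window are tombstones")  # 50%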
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 03 01:19:55 compute-0 ceph-mon[192821]: pgmap v46: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 03 01:19:55 compute-0 ceph-mon[192821]: OSD bench result of 4725.505655 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
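The warning above rejects the 4725.5 IOPS bench sample because it falls outside the sanity band (50-500 IOPS) for this device class, and keeps the 315 IOPS default. Following the message's own recommendation, a minimal follow-up would be to benchmark the device externally and pin the capacity, as sketched below; the 315 here is a placeholder for a real fio measurement, and the _hdd vs _ssd suffix must match the OSD's actual device class.

    # Hypothetical override following the log's recommendation; placeholder IOPS.
    import subprocess

    measured_iops = 315  # replace with a fio-measured 4k randwrite figure
    subprocess.run(
        ["ceph", "config", "set", "osd.1",
         "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
        check=True,
    )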
Dec 03 01:19:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.575152583 +0000 UTC m=+0.194936685 container create 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
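The podman line above records a helper container (hopeful_curie) created from the pinned ceph@sha256... image. If one wanted to confirm the event against podman's own metadata, a sketch like the following would do it; the Config.Labels path assumes podman's Docker-compatible inspect JSON, and such cephadm helper containers are often short-lived, so the container may already be gone when this runs.

    # Cross-check the container-create event against podman metadata (assumption:
    # the container still exists and inspect output follows the usual JSON shape).
    import json, subprocess

    out = subprocess.run(["podman", "inspect", "hopeful_curie"],
                         capture_output=True, text=True, check=True).stdout
    labels = json.loads(out)[0]["Config"]["Labels"]
    print(labels.get("CEPH_REF"), labels.get("ceph"))  # expect: reef True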
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
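The column-family dump above ends here; the next block repeats it almost verbatim for shard O-0. The few numbers that actually drive write behavior are write_buffer_size (16 MiB), min_write_buffer_number_to_merge (6), max_bytes_for_level_base (1 GiB) and max_bytes_for_level_multiplier (8). A minimal Python sketch of the sizing these imply, using RocksDB's standard static level targets (which apply here, since level_compaction_dynamic_level_bytes is 0 in the dump):

```python
# Sketch: LSM sizing implied by the option dump above. All constants are
# copied from the log; the formulas are RocksDB's static level sizing.
write_buffer_size = 16777216   # 16 MiB memtable
min_merge = 6                  # min_write_buffer_number_to_merge
level_base = 1073741824        # max_bytes_for_level_base (1 GiB)
multiplier = 8.0               # max_bytes_for_level_multiplier
num_levels = 7

# A flush merges up to `min_merge` full memtables into one L0 file, so
# L0 files land at roughly 6 * 16 MiB = 96 MiB before compression.
print(f"approx L0 file size: {write_buffer_size * min_merge / 2**20:.0f} MiB")

# Static sizing: L1 holds level_base bytes, each deeper level 8x more.
for level in range(1, num_levels):
    cap = level_base * multiplier ** (level - 1)
    print(f"L{level} target: {cap / 2**30:.0f} GiB")
```

With six 16 MiB memtables merged per flush, L0 files come out near 96 MiB, and the 8x multiplier takes the seven-level tree from 1 GiB at L1 to roughly 32 TiB at L6, far beyond what a 20 GiB OSD like this one will ever fill.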
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220260)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
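The same dump is printed again for O-1 and O-2 below, so spotting a shard that diverges by eye is error-prone. A rough, hypothetical parser for exactly the two line shapes visible in this journal (the "[CF]:" header and "Options.<key>: <value>" lines), handy for diffing shard configs; the journal file name in the usage comment is made up:

```python
import re
from collections import defaultdict

# Sketch: fold a journalctl capture of these dumps into per-column-family
# dicts. Only the "Options.*" lines are captured; the indented
# table_factory sub-options printed without a prefix are ignored here.
HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
OPTION = re.compile(r"Options\.([\w.\[\]]+):\s*(.+?)\s*$")

def parse(lines):
    per_cf, current = defaultdict(dict), None
    for line in lines:
        if (m := HEADER.search(line)):
            current = m.group(1)
        elif current and (m := OPTION.search(line)):
            per_cf[current][m.group(1)] = m.group(2)
    return per_cf

# Usage: diff two shards parsed from a saved journal (hypothetical path).
# cfs = parse(open("osd.journal"))
# print({k for k in cfs["O-0"] if cfs["O-0"][k] != cfs["O-1"][k]})
```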
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220260)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220260)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
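All three O-* shards above print the same block_cache pointer (0x558b8220d090), so they share a single 512 MiB BinnedLRUCache rather than holding one each; only the first dump in this section uses a separate cache (0x558b8220d1f0, capacity 483183820 bytes, about 461 MiB). With num_shard_bits 4, a sharded LRU cache of this kind splits into 2^4 internal shards. A quick check of that geometry:

```python
# Sketch: the cache geometry printed for the O-* shards above.
capacity = 536870912        # bytes, shared by O-0/O-1/O-2 (same pointer)
num_shard_bits = 4
shards = 1 << num_shard_bits
print(capacity / 2**20, "MiB total,", shards, "shards,",
      capacity / shards / 2**20, "MiB per shard")
# -> 512.0 MiB total, 16 shards, 32.0 MiB per shard
```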
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53c81f52-1eb7-420b-9ec6-1a961223b8b4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795607048, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795607440, "job": 1, "event": "recovery_finished"}
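The two EVENT_LOG_v1 records bracket the WAL replay, and their payloads are plain JSON after the marker, so the recovery time falls out directly. A sketch using the two lines above verbatim:

```python
import json

# Sketch: pull the EVENT_LOG_v1 JSON payloads out of the journal lines
# above and time the WAL recovery. "time_micros" is in microseconds.
lines = [
    'rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795607048, "job": 1, "event": "recovery_started", "wal_files": [31]}',
    'rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795607440, "job": 1, "event": "recovery_finished"}',
]
events = {}
for line in lines:
    payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    events[payload["event"]] = payload["time_micros"]
print(events["recovery_finished"] - events["recovery_started"], "microseconds")
# -> 392 microseconds to replay WAL file 000031
```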
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
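The _open_db line echoes the flattened option string BlueStore handed to RocksDB (Ceph's bluestore_rocksdb_options setting), which is where the values repeated through the dumps above come from: LZ4, 16 MiB write buffers, the 8x level multiplier, and so on. Splitting it back into pairs is a one-liner; note that human-readable sizes such as "2MB" are passed through as-is for RocksDB's string option parser to interpret:

```python
# Sketch: split the option string logged by _open_db above into pairs.
opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
parsed = dict(pair.split("=", 1) for pair in opts.split(","))
print(parsed["compaction_readahead_size"])   # -> 2MB
```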
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 03 01:19:55 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642] boot
Dec 03 01:19:55 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec 03 01:19:55 compute-0 ceph-osd[208731]: freelist init
Dec 03 01:19:55 compute-0 ceph-osd[208731]: freelist _read_cfg
Dec 03 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 03 01:19:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
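The allocator line reports its sizes in hex; converted, 0x4ffc00000 is the same 21470642176-byte (roughly 20 GiB) device size the bdev open reports a few lines further down, and the gap between capacity and free shows only three 4 KiB blocks in use on this nearly empty OSD:

```python
# Sketch: sanity-check the _init_alloc line above.
capacity = 0x4ffc00000   # 21470642176 bytes, matches the bdev open size
free = 0x4ffbfd000
print(capacity, "bytes =", capacity / 2**30, "GiB")   # ~20 GiB
print("allocated:", capacity - free, "bytes")         # 12288 = 3 x 4 KiB blocks
```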
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs umount
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) close
Dec 03 01:19:55 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:55 compute-0 ceph-osd[207705]: osd.1 13 state: booting -> active
Dec 03 01:19:55 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:19:55 compute-0 systemd[1]: Started libpod-conmon-1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7.scope.
Dec 03 01:19:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.733142598 +0000 UTC m=+0.352926700 container init 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.750679328 +0000 UTC m=+0.370463420 container start 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.757223616 +0000 UTC m=+0.377007708 container attach 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:19:55 compute-0 hopeful_curie[209101]: 167 167
Dec 03 01:19:55 compute-0 systemd[1]: libpod-1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7.scope: Deactivated successfully.
Dec 03 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.763807775 +0000 UTC m=+0.383591867 container died 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs mount
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluefs mount shared_bdev_used = 4718592
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: RocksDB version: 7.9.2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Git sha 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB SUMMARY
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB Session ID:  K9B5HJRG0MH6OVU3TJ44
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: CURRENT file:  CURRENT
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: IDENTITY file:  IDENTITY
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.error_if_exists: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.create_if_missing: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.paranoid_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                     Options.env: 0x558b831b8310
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                Options.info_log: 0x558b824e6d80
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_file_opening_threads: 16
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.statistics: (nil)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.use_fsync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.max_log_file_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.allow_fallocate: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_reads: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.create_missing_column_families: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.db_log_dir: 
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                 Options.wal_dir: db.wal
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.advise_random_on_open: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.write_buffer_manager: 0x558b831286e0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                            Options.rate_limiter: (nil)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.unordered_write: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.row_cache: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.wal_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_ingest_behind: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.two_write_queues: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manual_wal_flush: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.wal_compression: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.atomic_flush: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.log_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_data_in_errors: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.db_host_id: __hostname__
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_jobs: 4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_compactions: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_subcompactions: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.max_open_files: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.bytes_per_sync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_background_flushes: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compression algorithms supported:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kZSTD supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kXpressCompression supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kBZip2Compression supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kLZ4Compression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kZlibCompression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kLZ4HCCompression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         kSnappyCompression supported: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fb40ed2d11f24e75587ec5394c565198f26c137e3e413e091eba462e72f11a-merged.mount: Deactivated successfully.
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
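The dump above repeats once per BlueStore column family (the sharded p-* and O-* families follow below), and every family reports essentially the same options string. A minimal Python sketch for folding these journal lines into per-family dicts so the dumps can be diffed instead of read in full; the export path osd.log and the regexes are assumptions inferred from the line layout above, not part of any Ceph tooling:

    #!/usr/bin/env python3
    # Sketch: fold the per-column-family "rocksdb: Options.x: y" journal
    # lines above into dicts. Assumes the journal was saved to a plain
    # file (osd.log here is hypothetical). The indented table_factory
    # continuation lines carry no "rocksdb:" token and are skipped.
    import re
    from collections import defaultdict

    CF_RE  = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT_RE = re.compile(r"rocksdb:\s+Options\.([\w.\[\]]+):\s+(.+)$")

    def parse(path):
        families, current = defaultdict(dict), "default"
        with open(path) as fh:
            for line in fh:
                cf = CF_RE.search(line)
                if cf:
                    current = cf.group(1)
                    continue
                opt = OPT_RE.search(line)
                if opt:
                    families[current][opt.group(1)] = opt.group(2).strip()
        return families

    if __name__ == "__main__":
        fams = parse("osd.log")
        base = fams.get("p-0", {})
        for name, opts in fams.items():
            diff = {k: v for k, v in opts.items() if base.get(k) != v}
            print(name, diff or "identical to p-0")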
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
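With max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8.0 and level_compaction_dynamic_level_bytes = 0 in the dump above, the per-level capacity targets for each family are static and can be checked by hand (the addtl[i] = 1 entries leave the multiplier unchanged). A quick arithmetic check of the implied targets:

    # Static level targets implied by the options dumped above
    # (base = 1 GiB, multiplier = 8, num_levels = 7, dynamic sizing off).
    base, mult = 1073741824, 8
    for lvl in range(1, 7):
        print(f"L{lvl}: {base * mult ** (lvl - 1) / 2**30:,.0f} GiB")
    # -> L1: 1, L2: 8, L3: 64, L4: 512, L5: 4,096, L6: 32,768 GiB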
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
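All p-* families above point at the same BinnedLRUCache instance (block_cache: 0x558b8220d1f0, capacity 483183820 bytes), while the O-0 family further down reports its own cache at 0x558b8220cf30 with a 536870912-byte capacity; num_shard_bits = 4 splits each cache into 2^4 shards. The arithmetic, with the 45%-of-1-GiB reading flagged as an inference from the numbers rather than something the log states:

    # Shard math for the shared BinnedLRUCache reported above.
    capacity, shard_bits = 483_183_820, 4
    shards = 2 ** shard_bits
    print(f"total: {capacity / 2**20:.1f} MiB "
          f"({capacity / 2**30:.2f} of 1 GiB)")
    print(f"{shards} shards x {capacity / shards / 2**20:.1f} MiB each")
    # -> total: 460.8 MiB (0.45 of 1 GiB); 16 shards x 28.8 MiB each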
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.834057618 +0000 UTC m=+0.453841690 container remove 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
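The write-side numbers repeated in every dump bound how much dirty data one column family can hold before RocksDB throttles: 64 memtables of 16 MiB each, flushed only once 6 of them are immutable, with L0 compaction/slowdown/stop triggers at 8/20/36 files. Worked out as a small sketch:

    # Memtable and L0 throttle budget per column family, taken from the
    # write_buffer_size / max_write_buffer_number /
    # min_write_buffer_number_to_merge values in the dumps above.
    wbs, max_bufs, merge_min = 16_777_216, 64, 6
    print(f"memtable ceiling: {wbs * max_bufs / 2**30:.0f} GiB per family")
    print(f"typical flush:    {wbs * merge_min / 2**20:.0f} MiB "
          f"(min_write_buffer_number_to_merge={merge_min})")
    for label, files in (("compaction", 8), ("slowdown", 20), ("stop", 36)):
        print(f"L0 {label} trigger: {files} files")
    # -> 1 GiB ceiling, 96 MiB flushes, triggers at 8/20/36 L0 files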
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b822168c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220cf30
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b822168c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220cf30
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b822168c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x558b8220cf30
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 03 01:19:55 compute-0 systemd[1]: libpod-conmon-1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7.scope: Deactivated successfully.
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53c81f52-1eb7-420b-9ec6-1a961223b8b4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795874809, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795881263, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724795, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c81f52-1eb7-420b-9ec6-1a961223b8b4", "db_session_id": "K9B5HJRG0MH6OVU3TJ44", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795888417, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724795, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c81f52-1eb7-420b-9ec6-1a961223b8b4", "db_session_id": "K9B5HJRG0MH6OVU3TJ44", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795893366, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724795, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c81f52-1eb7-420b-9ec6-1a961223b8b4", "db_session_id": "K9B5HJRG0MH6OVU3TJ44", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795895716, "job": 1, "event": "recovery_finished"}
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558b82255c00
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB pointer 0x558b83103a00
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 03 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 03 01:19:55 compute-0 ceph-osd[208731]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 03 01:19:55 compute-0 ceph-osd[208731]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 03 01:19:55 compute-0 ceph-osd[208731]: _get_class not permitted to load lua
Dec 03 01:19:55 compute-0 ceph-osd[208731]: _get_class not permitted to load sdk
Dec 03 01:19:55 compute-0 ceph-osd[208731]: _get_class not permitted to load test_remote_reads
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 load_pgs
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 load_pgs opened 0 pgs
Dec 03 01:19:55 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2[208727]: 2025-12-03T01:19:55.935+0000 7fa138e91740 -1 osd.2 0 log_to_monitors true
Dec 03 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 log_to_monitors true
Dec 03 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec 03 01:19:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 03 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.013428492 +0000 UTC m=+0.042548083 container create 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 01:19:56 compute-0 systemd[1]: Started libpod-conmon-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope.
Dec 03 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:55.996270241 +0000 UTC m=+0.025389852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:19:56 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.165855934 +0000 UTC m=+0.194975615 container init 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:19:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.198077561 +0000 UTC m=+0.227197192 container start 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.205816019 +0000 UTC m=+0.234935640 container attach 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 03 01:19:56 compute-0 ceph-mon[192821]: osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642] boot
Dec 03 01:19:56 compute-0 ceph-mon[192821]: osdmap e13: 3 total, 2 up, 3 in
Dec 03 01:19:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 03 01:19:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:56 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 03 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 03 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec 03 01:19:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec 03 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec 03 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 03 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 03 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:56 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:56 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:19:56 compute-0 ceph-mgr[193109]: [devicehealth INFO root] creating main.db for devicehealth
Dec 03 01:19:56 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 01:19:56 compute-0 ceph-mgr[193109]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec 03 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 03 01:19:56 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 03 01:19:56 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 03 01:19:56 compute-0 sudo[209375]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 03 01:19:56 compute-0 sudo[209375]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:19:56 compute-0 sudo[209375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 03 01:19:56 compute-0 sudo[209375]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 03 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 03 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]: {
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "osd_id": 2,
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "type": "bluestore"
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:     },
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "osd_id": 1,
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "type": "bluestore"
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:     },
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "osd_id": 0,
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:         "type": "bluestore"
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]:     }
Dec 03 01:19:57 compute-0 nostalgic_sanderson[209354]: }
Dec 03 01:19:57 compute-0 sshd-session[207473]: Connection closed by authenticating user root 193.32.162.157 port 57544 [preauth]
Dec 03 01:19:57 compute-0 systemd[1]: libpod-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope: Deactivated successfully.
Dec 03 01:19:57 compute-0 podman[209339]: 2025-12-03 01:19:57.380220241 +0000 UTC m=+1.409339862 container died 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:19:57 compute-0 systemd[1]: libpod-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope: Consumed 1.177s CPU time.
Dec 03 01:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6-merged.mount: Deactivated successfully.
Dec 03 01:19:57 compute-0 podman[209339]: 2025-12-03 01:19:57.488465219 +0000 UTC m=+1.517584830 container remove 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:19:57 compute-0 systemd[1]: libpod-conmon-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope: Deactivated successfully.
Dec 03 01:19:57 compute-0 sudo[208822]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 03 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 03 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec 03 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 done with init, starting boot process
Dec 03 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 start_boot
Dec 03 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 03 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 03 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 03 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 03 01:19:57 compute-0 ceph-osd[208731]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 03 01:19:57 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec 03 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:57 compute-0 ceph-mon[192821]: pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 03 01:19:57 compute-0 ceph-mon[192821]: osdmap e14: 3 total, 2 up, 3 in
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:19:57 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:57 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec 03 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:57 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:57 compute-0 sudo[209419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:57 compute-0 sudo[209419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:57 compute-0 sudo[209419]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:57 compute-0 sudo[209444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:19:57 compute-0 sudo[209444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:57 compute-0 sudo[209444]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:57 compute-0 sudo[209469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:58 compute-0 sudo[209469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:58 compute-0 sudo[209469]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:58 compute-0 sudo[209494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:19:58 compute-0 sudo[209494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:58 compute-0 sudo[209494]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:19:58 compute-0 sudo[209519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:19:58 compute-0 sudo[209519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:58 compute-0 sudo[209519]: pam_unix(sudo:session): session closed for user root
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:19:58 compute-0 sudo[209544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:19:58 compute-0 sudo[209544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec 03 01:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:58 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:58 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 03 01:19:58 compute-0 ceph-mon[192821]: osdmap e15: 3 total, 2 up, 3 in
Dec 03 01:19:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:58 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rysove(active, since 90s)
Dec 03 01:19:59 compute-0 podman[209631]: 2025-12-03 01:19:59.232636094 +0000 UTC m=+0.140244471 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 01:19:59 compute-0 podman[209631]: 2025-12-03 01:19:59.34936968 +0000 UTC m=+0.256977977 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:19:59 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec 03 01:19:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:19:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:59 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:19:59 compute-0 ceph-mon[192821]: purged_snaps scrub starts
Dec 03 01:19:59 compute-0 ceph-mon[192821]: purged_snaps scrub ok
Dec 03 01:19:59 compute-0 ceph-mon[192821]: pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:59 compute-0 ceph-mon[192821]: mgrmap e9: compute-0.rysove(active, since 90s)
Dec 03 01:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:19:59 compute-0 podman[158098]: time="2025-12-03T01:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29176 "" "Go-http-client/1.1"
Dec 03 01:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5806 "" "Go-http-client/1.1"
Dec 03 01:20:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:20:00 compute-0 sudo[209544]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:00 compute-0 sudo[209746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:00 compute-0 sudo[209746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:00 compute-0 sudo[209746]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:00 compute-0 sudo[209771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:00 compute-0 sudo[209771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:00 compute-0 sudo[209771]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:00 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec 03 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:20:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:00 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:20:00 compute-0 sudo[209796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:00 compute-0 sudo[209796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:00 compute-0 sudo[209796]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:00 compute-0 sudo[209821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:20:00 compute-0 sudo[209821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:01 compute-0 ceph-mon[192821]: pgmap v52: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:20:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:20:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:20:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:20:01 compute-0 sudo[209821]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:01 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec 03 01:20:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:20:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:01 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:20:01 compute-0 sudo[209875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:01 compute-0 sudo[209875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:01 compute-0 sudo[209875]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:01 compute-0 sudo[209926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:01 compute-0 sudo[209926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:01 compute-0 sudo[209926]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:01 compute-0 podman[209900]: 2025-12-03 01:20:01.843149794 +0000 UTC m=+0.114643223 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec 03 01:20:01 compute-0 podman[209899]: 2025-12-03 01:20:01.864257376 +0000 UTC m=+0.138016694 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:20:01 compute-0 podman[209901]: 2025-12-03 01:20:01.880376129 +0000 UTC m=+0.150684688 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:20:01 compute-0 podman[209902]: 2025-12-03 01:20:01.889413171 +0000 UTC m=+0.167457939 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:20:01 compute-0 sudo[210004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:01 compute-0 sudo[210004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:01 compute-0 sudo[210004]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:01 compute-0 sudo[210037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- inventory --format=json-pretty --filter-for-batch
Dec 03 01:20:01 compute-0 sudo[210037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.393 iops: 4708.491 elapsed_sec: 0.637
Dec 03 01:20:02 compute-0 ceph-osd[208731]: log_channel(cluster) log [WRN] : OSD bench result of 4708.491241 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 0 waiting for initial osdmap
Dec 03 01:20:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2[208727]: 2025-12-03T01:20:02.225+0000 7fa134e11640 -1 osd.2 0 waiting for initial osdmap
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Dec 03 01:20:02 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 set_numa_affinity not setting numa affinity
Dec 03 01:20:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2[208727]: 2025-12-03T01:20:02.262+0000 7fa130439640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 03 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Dec 03 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.559897549 +0000 UTC m=+0.098070917 container create f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.52486691 +0000 UTC m=+0.063040348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:02 compute-0 systemd[1]: Started libpod-conmon-f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7.scope.
Dec 03 01:20:02 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec 03 01:20:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:20:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:02 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 03 01:20:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.711237503 +0000 UTC m=+0.249410911 container init f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.730487147 +0000 UTC m=+0.268660485 container start f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.736145852 +0000 UTC m=+0.274319210 container attach f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:02 compute-0 reverent_lehmann[210119]: 167 167
Dec 03 01:20:02 compute-0 systemd[1]: libpod-f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7.scope: Deactivated successfully.
Dec 03 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.746616991 +0000 UTC m=+0.284790349 container died f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-018c788e321842f0d1bb78befa9e9e03a2e157fb724599a7c9a0a7bd3ed70092-merged.mount: Deactivated successfully.
Dec 03 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.82373393 +0000 UTC m=+0.361907268 container remove f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:20:02 compute-0 systemd[1]: libpod-conmon-f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7.scope: Deactivated successfully.
Dec 03 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.112869781 +0000 UTC m=+0.095416110 container create 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.077290688 +0000 UTC m=+0.059837027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:03 compute-0 systemd[1]: Started libpod-conmon-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope.
Dec 03 01:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:03 compute-0 ceph-osd[208731]: osd.2 15 tick checking mon for new map
Dec 03 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 03 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:03 compute-0 ceph-mon[192821]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 03 01:20:03 compute-0 ceph-mon[192821]: OSD bench result of 4708.491241 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 03 01:20:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec 03 01:20:03 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855] boot
Dec 03 01:20:03 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec 03 01:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 03 01:20:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.30648825 +0000 UTC m=+0.289034609 container init 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:03 compute-0 ceph-osd[208731]: osd.2 16 state: booting -> active
Dec 03 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.330666261 +0000 UTC m=+0.313212570 container start 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.336251294 +0000 UTC m=+0.318797643 container attach 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 01:20:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 03 01:20:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 03 01:20:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec 03 01:20:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec 03 01:20:04 compute-0 ceph-mon[192821]: osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855] boot
Dec 03 01:20:04 compute-0 ceph-mon[192821]: osdmap e16: 3 total, 3 up, 3 in
Dec 03 01:20:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 03 01:20:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:05 compute-0 ceph-mon[192821]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 03 01:20:05 compute-0 ceph-mon[192821]: osdmap e17: 3 total, 3 up, 3 in
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]: [
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:     {
Dec 03 01:20:05 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "available": false,
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "ceph_device": false,
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "lsm_data": {},
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "lvs": [],
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "path": "/dev/sr0",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "rejected_reasons": [
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "Insufficient space (<5GB)",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "Has a FileSystem"
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         ],
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         "sys_api": {
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "actuators": null,
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "device_nodes": "sr0",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "devname": "sr0",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "human_readable_size": "482.00 KB",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "id_bus": "ata",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "model": "QEMU DVD-ROM",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "nr_requests": "2",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "parent": "/dev/sr0",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "partitions": {},
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "path": "/dev/sr0",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "removable": "1",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "rev": "2.5+",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "ro": "0",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "rotational": "1",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "sas_address": "",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "sas_device_handle": "",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "scheduler_mode": "mq-deadline",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "sectors": 0,
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "sectorsize": "2048",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "size": 493568.0,
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "support_discard": "2048",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "type": "disk",
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:             "vendor": "QEMU"
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:         }
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]:     }
Dec 03 01:20:05 compute-0 admiring_lehmann[210157]: ]
Dec 03 01:20:05 compute-0 systemd[1]: libpod-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope: Deactivated successfully.
Dec 03 01:20:05 compute-0 systemd[1]: libpod-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope: Consumed 2.784s CPU time.
Dec 03 01:20:05 compute-0 podman[210141]: 2025-12-03 01:20:05.949141426 +0000 UTC m=+2.931687755 container died 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c-merged.mount: Deactivated successfully.
Dec 03 01:20:06 compute-0 podman[210141]: 2025-12-03 01:20:06.050845196 +0000 UTC m=+3.033391495 container remove 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:20:06 compute-0 systemd[1]: libpod-conmon-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope: Deactivated successfully.
Dec 03 01:20:06 compute-0 sudo[210037]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1e4e1413-00da-47c0-9b21-f54446e7d26e does not exist
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d4e92ed5-a8c5-404c-8983-429b6c6b8ca7 does not exist
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d1fce79c-9b8c-48ed-ba2d-93eda69b7fc5 does not exist
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:06 compute-0 sudo[212419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:06 compute-0 sudo[212419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:06 compute-0 sudo[212419]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:06 compute-0 sudo[212444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:06 compute-0 sudo[212444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:06 compute-0 sudo[212444]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:06 compute-0 sudo[212469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:06 compute-0 sudo[212469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:06 compute-0 sudo[212469]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:06 compute-0 sudo[212494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:20:06 compute-0 sudo[212494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:06 compute-0 podman[212518]: 2025-12-03 01:20:06.840976464 +0000 UTC m=+0.124026424 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:20:07 compute-0 sshd-session[209418]: Invalid user jenkins from 193.32.162.157 port 50802
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 03 01:20:07 compute-0 ceph-mon[192821]: Adjusting osd_memory_target on compute-0 to 43688k
Dec 03 01:20:07 compute-0 ceph-mon[192821]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:07 compute-0 ceph-mon[192821]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.287807782 +0000 UTC m=+0.086751487 container create 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.260680376 +0000 UTC m=+0.059624161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:07 compute-0 systemd[1]: Started libpod-conmon-528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1.scope.
Dec 03 01:20:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.426777169 +0000 UTC m=+0.225720904 container init 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.445031197 +0000 UTC m=+0.243974922 container start 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.452690544 +0000 UTC m=+0.251634329 container attach 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:20:07 compute-0 quirky_bassi[212596]: 167 167
Dec 03 01:20:07 compute-0 systemd[1]: libpod-528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1.scope: Deactivated successfully.
Dec 03 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.458357749 +0000 UTC m=+0.257301474 container died 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc3ef435dc3f689e2862609141e295dc2d2ef7481c3bd13bb4694af95f96f05a-merged.mount: Deactivated successfully.
Dec 03 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.545013254 +0000 UTC m=+0.343956979 container remove 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:07 compute-0 systemd[1]: libpod-conmon-528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1.scope: Deactivated successfully.
Dec 03 01:20:07 compute-0 podman[212618]: 2025-12-03 01:20:07.831198489 +0000 UTC m=+0.084779887 container create c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:20:07 compute-0 podman[212618]: 2025-12-03 01:20:07.800824779 +0000 UTC m=+0.054406187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:07 compute-0 systemd[1]: Started libpod-conmon-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope.
Dec 03 01:20:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
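The five xfs messages above (and their repeats later) are the kernel noting, for each bind mount set up inside the container rootfs, that the underlying filesystem lacks the XFS bigtime feature, so its inode timestamps saturate at the 32-bit epoch limit, 0x7fffffff seconds. The cutoff itself is just arithmetic:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit count of seconds since 1970.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # -> 2038-01-19T03:14:07+00:00

Filesystems created with bigtime enabled (the default on newer mkfs.xfs) extend the range well beyond 2038 and the warning disappears.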
Dec 03 01:20:08 compute-0 podman[212618]: 2025-12-03 01:20:08.063006048 +0000 UTC m=+0.316587436 container init c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:08 compute-0 podman[212618]: 2025-12-03 01:20:08.081742159 +0000 UTC m=+0.335323547 container start c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:20:08 compute-0 podman[212618]: 2025-12-03 01:20:08.088445661 +0000 UTC m=+0.342027099 container attach c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:20:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:09 compute-0 sshd-session[209418]: Connection closed by invalid user jenkins 193.32.162.157 port 50802 [preauth]
Dec 03 01:20:09 compute-0 ceph-mon[192821]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:09 compute-0 exciting_wozniak[212632]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:20:09 compute-0 exciting_wozniak[212632]: --> relative data size: 1.0
Dec 03 01:20:09 compute-0 exciting_wozniak[212632]: --> All data devices are unavailable
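This is the drive-group evaluation cephadm runs through ceph-volume: the spec matched 0 physical disks and 3 LVM volumes, and "All data devices are unavailable" means every candidate LV is already consumed by an existing OSD (the lvm list output further down confirms all three carry osd_id tags), so there is nothing left to create. A toy version of that availability filter; the record layout is invented for illustration, not ceph-volume's real schema:

    # Hypothetical inventory records standing in for ceph-volume's view.
    devices = [
        {"path": "/dev/ceph_vg0/ceph_lv0", "lvm": True, "used_by_osd": 0},
        {"path": "/dev/ceph_vg1/ceph_lv1", "lvm": True, "used_by_osd": 1},
        {"path": "/dev/ceph_vg2/ceph_lv2", "lvm": True, "used_by_osd": 2},
    ]

    physical = sum(1 for d in devices if not d["lvm"])
    lvm = sum(1 for d in devices if d["lvm"])
    available = [d for d in devices if d["used_by_osd"] is None]

    print(f"--> passed data devices: {physical} physical, {lvm} LVM")
    print("--> relative data size: 1.0")
    if not available:
        print("--> All data devices are unavailable")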
Dec 03 01:20:09 compute-0 systemd[1]: libpod-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope: Deactivated successfully.
Dec 03 01:20:09 compute-0 systemd[1]: libpod-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope: Consumed 1.248s CPU time.
Dec 03 01:20:09 compute-0 podman[212661]: 2025-12-03 01:20:09.473452991 +0000 UTC m=+0.065122360 container died c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908-merged.mount: Deactivated successfully.
Dec 03 01:20:09 compute-0 podman[212661]: 2025-12-03 01:20:09.585002345 +0000 UTC m=+0.176671664 container remove c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:09 compute-0 systemd[1]: libpod-conmon-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope: Deactivated successfully.
Dec 03 01:20:09 compute-0 sudo[212494]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:09 compute-0 sudo[212678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:09 compute-0 sudo[212678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:09 compute-0 sudo[212678]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:09 compute-0 sudo[212703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:09 compute-0 sudo[212703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:09 compute-0 sudo[212703]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:10 compute-0 sshd-session[212676]: Invalid user redmine from 146.190.144.138 port 34160
Dec 03 01:20:10 compute-0 sudo[212728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:10 compute-0 sudo[212728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:10 compute-0 sudo[212728]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:10 compute-0 sshd-session[212676]: Received disconnect from 146.190.144.138 port 34160:11: Bye Bye [preauth]
Dec 03 01:20:10 compute-0 sshd-session[212676]: Disconnected from invalid user redmine 146.190.144.138 port 34160 [preauth]
Dec 03 01:20:10 compute-0 sudo[212753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:20:10 compute-0 sudo[212753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
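The sudo trail above is cephadm's standard pattern when the orchestrator drives a host: a /bin/true reachability probe, a `which python3` lookup, then the copied cephadm binary run under python3 with the image pinned by digest and an explicit timeout, here wrapping `ceph-volume lvm list --format json`. A sketch of issuing the same call and decoding the result, assuming stdout carries only the JSON document:

    import json
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Mirrors the sudo COMMAND logged above.
    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(sorted(json.loads(out)))  # -> ['0', '1', '2'] on this host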
Dec 03 01:20:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.67384445 +0000 UTC m=+0.093664360 container create 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.639884361 +0000 UTC m=+0.059704321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:10 compute-0 systemd[1]: Started libpod-conmon-3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94.scope.
Dec 03 01:20:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.831418655 +0000 UTC m=+0.251238555 container init 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.840695732 +0000 UTC m=+0.260515612 container start 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.846371319 +0000 UTC m=+0.266191199 container attach 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:10 compute-0 competent_lamarr[212831]: 167 167
Dec 03 01:20:10 compute-0 systemd[1]: libpod-3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94.scope: Deactivated successfully.
Dec 03 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.848798366 +0000 UTC m=+0.268618256 container died 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec 03 01:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f2781f6e14009bb92163540a7af863f660d57f7d7531a00ee8d09f6e429645-merged.mount: Deactivated successfully.
Dec 03 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.891938588 +0000 UTC m=+0.311758458 container remove 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:20:10 compute-0 podman[212832]: 2025-12-03 01:20:10.89634142 +0000 UTC m=+0.103547473 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, version=9.4, io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
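Interleaved with the Ceph work, podman logs a periodic health_status=healthy event for the long-running kepler exporter. Its embedded config_data shows the EDPM healthcheck pattern: a host directory bind-mounted into the container and a `test` command (`/openstack/healthcheck kepler`) run inside it. The blob is a Python-literal dict, so it parses with ast.literal_eval; abridged here to the relevant keys:

    import ast

    config_data = ast.literal_eval(
        "{'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12',"
        " 'healthcheck': {'test': '/openstack/healthcheck kepler',"
        " 'mount': '/var/lib/openstack/healthchecks/kepler'}}"
    )
    print(config_data["healthcheck"]["test"])  # -> /openstack/healthcheck kepler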
Dec 03 01:20:10 compute-0 systemd[1]: libpod-conmon-3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94.scope: Deactivated successfully.
Dec 03 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.165176201 +0000 UTC m=+0.097728723 container create 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:20:11 compute-0 systemd[1]: Started libpod-conmon-57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b.scope.
Dec 03 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.131356146 +0000 UTC m=+0.063908748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:11 compute-0 sudo[212910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sksptlhxeggmrkptjxlquadoxuhumccg ; /usr/bin/python3'
Dec 03 01:20:11 compute-0 sudo[212910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:11 compute-0 ceph-mon[192821]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.294348541 +0000 UTC m=+0.226901123 container init 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.310473227 +0000 UTC m=+0.243025779 container start 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.317295045 +0000 UTC m=+0.249847657 container attach 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:11 compute-0 python3[212916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
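This Ansible task shells out to podman only to run the ceph CLI and pipe its JSON through `jq .osdmap.num_up_osds`; the playbook is polling for all OSDs to report up before continuing. The jq step has a one-line Python equivalent, shown here against the status document that appears a few lines below:

    import json
    import sys

    # Usage: ceph status --format json | python3 num_up_osds.py
    status = json.load(sys.stdin)
    print(status["osdmap"]["num_up_osds"])  # -> 3 in the capture below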
Dec 03 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.519054622 +0000 UTC m=+0.107765190 container create e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.486543263 +0000 UTC m=+0.075253851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:11 compute-0 systemd[1]: Started libpod-conmon-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope.
Dec 03 01:20:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.692176807 +0000 UTC m=+0.280887355 container init e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.709396203 +0000 UTC m=+0.298106741 container start e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.713548948 +0000 UTC m=+0.302259486 container attach e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:12 compute-0 interesting_feynman[212914]: {
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:     "0": [
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:         {
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "devices": [
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "/dev/loop3"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             ],
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_name": "ceph_lv0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_size": "21470642176",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "name": "ceph_lv0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "tags": {
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.crush_device_class": "",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.encrypted": "0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osd_id": "0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.type": "block",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.vdo": "0"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             },
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "type": "block",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "vg_name": "ceph_vg0"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:         }
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:     ],
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:     "1": [
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:         {
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "devices": [
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "/dev/loop4"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             ],
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_name": "ceph_lv1",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_size": "21470642176",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "name": "ceph_lv1",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "tags": {
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.crush_device_class": "",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.encrypted": "0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osd_id": "1",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.type": "block",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.vdo": "0"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             },
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "type": "block",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "vg_name": "ceph_vg1"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:         }
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:     ],
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:     "2": [
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:         {
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "devices": [
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "/dev/loop5"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             ],
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_name": "ceph_lv2",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_size": "21470642176",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "name": "ceph_lv2",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "tags": {
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.crush_device_class": "",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.encrypted": "0",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osd_id": "2",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.type": "block",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:                 "ceph.vdo": "0"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             },
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "type": "block",
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:             "vg_name": "ceph_vg2"
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:         }
Dec 03 01:20:12 compute-0 interesting_feynman[212914]:     ]
Dec 03 01:20:12 compute-0 interesting_feynman[212914]: }
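The interesting_feynman payload is the `ceph-volume lvm list --format json` result requested above: OSDs 0-2, one per logical volume ceph_vg{0,1,2}/ceph_lv{0,1,2}, each backed by a loop device (/dev/loop3-5) and sized 21470642176 bytes, a little under 20 GiB apiece, which is where the cluster's "60 GiB / 60 GiB avail" comes from. A small reducer over that document:

    import json
    import sys

    # Usage: feed the lvm-list JSON shown above on stdin.
    lvs = json.load(sys.stdin)
    for osd_id, entries in sorted(lvs.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {lv['devices'][0]} ({gib:.1f} GiB)")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20.0 GiB), etc.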
Dec 03 01:20:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:12 compute-0 systemd[1]: libpod-57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b.scope: Deactivated successfully.
Dec 03 01:20:12 compute-0 podman[212965]: 2025-12-03 01:20:12.283542473 +0000 UTC m=+0.063516877 container died 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf-merged.mount: Deactivated successfully.
Dec 03 01:20:12 compute-0 podman[212965]: 2025-12-03 01:20:12.382905949 +0000 UTC m=+0.162880323 container remove 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:20:12 compute-0 systemd[1]: libpod-conmon-57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b.scope: Deactivated successfully.
Dec 03 01:20:12 compute-0 sudo[212753]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 03 01:20:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019109794' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:20:12 compute-0 charming_elion[212938]: 
Dec 03 01:20:12 compute-0 charming_elion[212938]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":152,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1764724803,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502763520,"bytes_avail":63909163008,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T01:19:30.172459+0000","services":{}},"progress_events":{}}
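This is the `ceph status --format json` document the jq pipeline above consumes: HEALTH_OK, a single mon and mgr on compute-0, and osdmap.num_up_osds = 3, so the poll succeeds. The capacity figures also line up with the three ~20 GiB OSDs just listed:

    bytes_total = 64_411_926_528  # from the status document above
    bytes_used = 502_763_520

    print(f"{bytes_total / 2**30:.1f} GiB total, "
          f"{bytes_used / 2**20:.0f} MiB used")
    # -> 60.0 GiB total, 479 MiB used, matching the pgmap lines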
Dec 03 01:20:12 compute-0 systemd[1]: libpod-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope: Deactivated successfully.
Dec 03 01:20:12 compute-0 conmon[212938]: conmon e71d81812b6e7bbe8be8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope/container/memory.events
Dec 03 01:20:12 compute-0 sudo[212979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:12 compute-0 sudo[212979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:12 compute-0 podman[213001]: 2025-12-03 01:20:12.553441943 +0000 UTC m=+0.030948856 container died e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:20:12 compute-0 sudo[212979]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df-merged.mount: Deactivated successfully.
Dec 03 01:20:12 compute-0 podman[213001]: 2025-12-03 01:20:12.62024935 +0000 UTC m=+0.097756223 container remove e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:20:12 compute-0 systemd[1]: libpod-conmon-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope: Deactivated successfully.
Dec 03 01:20:12 compute-0 sudo[213018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:12 compute-0 sudo[213018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:12 compute-0 sudo[213018]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:12 compute-0 sudo[212910]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:12 compute-0 sudo[213045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:12 compute-0 sudo[213045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:12 compute-0 sudo[213045]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:12 compute-0 sudo[213070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:20:12 compute-0 sudo[213070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
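Having inventoried the LVM-backed OSDs, cephadm repeats the sweep with `ceph-volume raw list --format json`, which covers OSDs prepared directly on raw block devices; on this host everything lives on LVs, so the raw pass is a completeness check. A hedged helper that runs both listings against a locally installed ceph-volume, rather than through the cephadm wrapper shown earlier:

    import json
    import subprocess

    def ceph_volume(*args):
        # Assumes ceph-volume is on PATH and runs with enough privilege;
        # the log routes the same subcommands through cephadm instead.
        out = subprocess.run(
            ["ceph-volume", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    for sub in (("lvm", "list"), ("raw", "list")):
        print(" ".join(sub), "->", sorted(ceph_volume(*sub)))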
Dec 03 01:20:13 compute-0 sudo[213139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiucnldmdzmeyxypesmrvdmzrcgxqfwc ; /usr/bin/python3'
Dec 03 01:20:13 compute-0 sudo[213139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:13 compute-0 python3[213143]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
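With all three OSDs confirmed up, the job starts creating pools: `ceph osd pool create vms replicated_rule --autoscale-mode on`, the Nova ephemeral pool in the usual OpenStack-on-Ceph layout (volumes and images pools typically follow, though only vms appears in this window). A sketch of scripting that step directly against the CLI, mirroring the flags from the log:

    import subprocess

    # Only "vms" is taken from this log; the other usual OpenStack pool
    # names would be assumptions, so extend the list as needed.
    for pool in ["vms"]:
        subprocess.run(
            ["ceph", "osd", "pool", "create", pool,
             "replicated_rule", "--autoscale-mode", "on"],
            check=True,
        )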
Dec 03 01:20:13 compute-0 ceph-mon[192821]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4019109794' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.2968444 +0000 UTC m=+0.084612209 container create 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.260631299 +0000 UTC m=+0.048399158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.35834358 +0000 UTC m=+0.105297331 container create ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:20:13 compute-0 systemd[1]: Started libpod-conmon-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope.
Dec 03 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.321112391 +0000 UTC m=+0.068066212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:13 compute-0 systemd[1]: Started libpod-conmon-ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7.scope.
Dec 03 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.455212847 +0000 UTC m=+0.242980676 container init 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:20:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.466934031 +0000 UTC m=+0.254701820 container start 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7dd8be0fd2fb49e4280012725222dc15bd05df388591e567811b5b38c1a985/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7dd8be0fd2fb49e4280012725222dc15bd05df388591e567811b5b38c1a985/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.472758202 +0000 UTC m=+0.260526071 container attach 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:13 compute-0 peaceful_ellis[213184]: 167 167
Dec 03 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.498142934 +0000 UTC m=+0.245096705 container init ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:20:13 compute-0 systemd[1]: libpod-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope: Deactivated successfully.
Dec 03 01:20:13 compute-0 conmon[213184]: conmon 85f16529f36546b57690 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope/container/memory.events
Dec 03 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.509916179 +0000 UTC m=+0.297683998 container died 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.515814702 +0000 UTC m=+0.262768453 container start ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.532069042 +0000 UTC m=+0.279022873 container attach ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-10cdad83084b65ea5d8d3a4d31a0823c6e803f73a363595423f7732fdfd6d105-merged.mount: Deactivated successfully.
Dec 03 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.572961842 +0000 UTC m=+0.360729641 container remove 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:13 compute-0 systemd[1]: libpod-conmon-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope: Deactivated successfully.
Dec 03 01:20:13 compute-0 podman[213214]: 2025-12-03 01:20:13.876933414 +0000 UTC m=+0.100857509 container create 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:13 compute-0 podman[213214]: 2025-12-03 01:20:13.836678521 +0000 UTC m=+0.060602696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:13 compute-0 systemd[1]: Started libpod-conmon-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope.
Dec 03 01:20:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:14 compute-0 podman[213214]: 2025-12-03 01:20:14.074417633 +0000 UTC m=+0.298341758 container init 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:14 compute-0 podman[213214]: 2025-12-03 01:20:14.086969149 +0000 UTC m=+0.310893254 container start 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:20:14 compute-0 podman[213214]: 2025-12-03 01:20:14.092704458 +0000 UTC m=+0.316628553 container attach 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:20:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 03 01:20:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 03 01:20:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 03 01:20:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 03 01:20:14 compute-0 gifted_feistel[213189]: pool 'vms' created
Dec 03 01:20:14 compute-0 systemd[1]: libpod-ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7.scope: Deactivated successfully.
Dec 03 01:20:14 compute-0 podman[213164]: 2025-12-03 01:20:14.363417441 +0000 UTC m=+1.110371212 container died ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-af7dd8be0fd2fb49e4280012725222dc15bd05df388591e567811b5b38c1a985-merged.mount: Deactivated successfully.
Dec 03 01:20:14 compute-0 podman[213164]: 2025-12-03 01:20:14.455194027 +0000 UTC m=+1.202147778 container remove ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:14 compute-0 systemd[1]: libpod-conmon-ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7.scope: Deactivated successfully.
Dec 03 01:20:14 compute-0 sudo[213139]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:14 compute-0 sudo[213290]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzftzncnkqahbjfeywlmgnpoxblgjqx ; /usr/bin/python3'
Dec 03 01:20:14 compute-0 sudo[213290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:14 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:14 compute-0 python3[213292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:14 compute-0 podman[213294]: 2025-12-03 01:20:14.96469063 +0000 UTC m=+0.097939908 container create 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:14.920755566 +0000 UTC m=+0.054004884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:15 compute-0 systemd[1]: Started libpod-conmon-97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde.scope.
Dec 03 01:20:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336f9fbdf7ba402aa31e58ac54c1c0b9e55f4e4353cdb1981d4a1220b3542170/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336f9fbdf7ba402aa31e58ac54c1c0b9e55f4e4353cdb1981d4a1220b3542170/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:15.125810563 +0000 UTC m=+0.259059851 container init 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:15.167862286 +0000 UTC m=+0.301111534 container start 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:15.172985648 +0000 UTC m=+0.306234896 container attach 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:15 compute-0 eager_gould[213248]: {
Dec 03 01:20:15 compute-0 eager_gould[213248]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "osd_id": 2,
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "type": "bluestore"
Dec 03 01:20:15 compute-0 eager_gould[213248]:     },
Dec 03 01:20:15 compute-0 eager_gould[213248]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "osd_id": 1,
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "type": "bluestore"
Dec 03 01:20:15 compute-0 eager_gould[213248]:     },
Dec 03 01:20:15 compute-0 eager_gould[213248]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "osd_id": 0,
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:20:15 compute-0 eager_gould[213248]:         "type": "bluestore"
Dec 03 01:20:15 compute-0 eager_gould[213248]:     }
Dec 03 01:20:15 compute-0 eager_gould[213248]: }
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 03 01:20:15 compute-0 systemd[1]: libpod-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope: Deactivated successfully.
Dec 03 01:20:15 compute-0 ceph-mon[192821]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:15 compute-0 systemd[1]: libpod-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope: Consumed 1.232s CPU time.
Dec 03 01:20:15 compute-0 podman[213214]: 2025-12-03 01:20:15.324982409 +0000 UTC m=+1.548906534 container died 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:15 compute-0 ceph-mon[192821]: osdmap e18: 3 total, 3 up, 3 in
Dec 03 01:20:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54-merged.mount: Deactivated successfully.
Dec 03 01:20:15 compute-0 podman[213214]: 2025-12-03 01:20:15.418162984 +0000 UTC m=+1.642087079 container remove 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:15 compute-0 systemd[1]: libpod-conmon-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope: Deactivated successfully.
Dec 03 01:20:15 compute-0 sudo[213070]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:15 compute-0 sudo[213353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:15 compute-0 sudo[213353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:15 compute-0 sudo[213353]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:15 compute-0 sudo[213395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:20:15 compute-0 sudo[213395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:15 compute-0 sudo[213395]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:15 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 03 01:20:15 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 03 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:15 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 03 01:20:15 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 03 01:20:16 compute-0 sudo[213423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:16 compute-0 sudo[213423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:16 compute-0 sudo[213423]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:16 compute-0 podman[213447]: 2025-12-03 01:20:16.141423906 +0000 UTC m=+0.118277621 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:20:16 compute-0 sudo[213454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:16 compute-0 sudo[213454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:16 compute-0 sudo[213454]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:16 compute-0 sudo[213494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:16 compute-0 sudo[213494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:16 compute-0 sudo[213494]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 03 01:20:16 compute-0 ceph-mon[192821]: osdmap e19: 3 total, 3 up, 3 in
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:16 compute-0 ceph-mon[192821]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 03 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:16 compute-0 ceph-mon[192821]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 03 01:20:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 03 01:20:16 compute-0 keen_turing[213317]: pool 'volumes' created
Dec 03 01:20:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 03 01:20:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:16 compute-0 systemd[1]: libpod-97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde.scope: Deactivated successfully.
Dec 03 01:20:16 compute-0 podman[213294]: 2025-12-03 01:20:16.385732478 +0000 UTC m=+1.518981746 container died 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:16 compute-0 sudo[213519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:20:16 compute-0 sudo[213519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-336f9fbdf7ba402aa31e58ac54c1c0b9e55f4e4353cdb1981d4a1220b3542170-merged.mount: Deactivated successfully.
Dec 03 01:20:16 compute-0 podman[213294]: 2025-12-03 01:20:16.47623864 +0000 UTC m=+1.609487888 container remove 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:20:16 compute-0 systemd[1]: libpod-conmon-97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde.scope: Deactivated successfully.
Dec 03 01:20:16 compute-0 sudo[213290]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:16 compute-0 sudo[213602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxdbpmhqurztuvzcxvjdyxyxkastwbev ; /usr/bin/python3'
Dec 03 01:20:16 compute-0 sudo[213602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.804891204 +0000 UTC m=+0.078883371 container create 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.771068539 +0000 UTC m=+0.045060756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:16 compute-0 systemd[1]: Started libpod-conmon-5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028.scope.
Dec 03 01:20:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:16 compute-0 python3[213608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.962725946 +0000 UTC m=+0.236718173 container init 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.981120724 +0000 UTC m=+0.255112881 container start 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.987918612 +0000 UTC m=+0.261910779 container attach 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:16 compute-0 peaceful_liskov[213614]: 167 167
Dec 03 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.996298914 +0000 UTC m=+0.270291081 container died 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 01:20:16 compute-0 systemd[1]: libpod-5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028.scope: Deactivated successfully.
Dec 03 01:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0dcfa42131ede9a60c2df56d0b692e65bd1abab8f5d7013bbbe28fc38aaba5-merged.mount: Deactivated successfully.
Dec 03 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.078578578 +0000 UTC m=+0.112114780 container create b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:17 compute-0 podman[213593]: 2025-12-03 01:20:17.088303417 +0000 UTC m=+0.362295574 container remove 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:17 compute-0 systemd[1]: libpod-conmon-5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028.scope: Deactivated successfully.
Dec 03 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.037651527 +0000 UTC m=+0.071187869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:17 compute-0 systemd[1]: Started libpod-conmon-b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89.scope.
Dec 03 01:20:17 compute-0 sudo[213519]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a667f94fea81cd0c0836a2e9324becad684bb3f33879fe74a5630bdc5273c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a667f94fea81cd0c0836a2e9324becad684bb3f33879fe74a5630bdc5273c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:17 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rysove (unknown last config time)...
Dec 03 01:20:17 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rysove (unknown last config time)...
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rysove", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 03 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rysove", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 03 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:17 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rysove on compute-0
Dec 03 01:20:17 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rysove on compute-0
Dec 03 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.201062253 +0000 UTC m=+0.234598495 container init b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.211955344 +0000 UTC m=+0.245491536 container start b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.21614675 +0000 UTC m=+0.249682992 container attach b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:17 compute-0 sudo[213651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:17 compute-0 sudo[213651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:17 compute-0 sudo[213651]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 03 01:20:17 compute-0 ceph-mon[192821]: pgmap v64: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:17 compute-0 ceph-mon[192821]: osdmap e20: 3 total, 3 up, 3 in
Dec 03 01:20:17 compute-0 ceph-mon[192821]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rysove", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 03 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 03 01:20:17 compute-0 sudo[213676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:17 compute-0 sudo[213676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:17 compute-0 sudo[213676]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 03 01:20:17 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:17 compute-0 sudo[213701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:17 compute-0 sudo[213701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:17 compute-0 sudo[213701]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:17 compute-0 sudo[213726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:20:17 compute-0 sudo[213726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 03 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:17 compute-0 podman[213789]: 2025-12-03 01:20:17.951176687 +0000 UTC m=+0.088386194 container create 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:18 compute-0 systemd[1]: Started libpod-conmon-8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2.scope.
Dec 03 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:17.923288306 +0000 UTC m=+0.060497833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:18 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.078168307 +0000 UTC m=+0.215377874 container init 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.093199823 +0000 UTC m=+0.230409340 container start 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.099706392 +0000 UTC m=+0.236915949 container attach 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:20:18 compute-0 wonderful_heyrovsky[213805]: 167 167
Dec 03 01:20:18 compute-0 systemd[1]: libpod-8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2.scope: Deactivated successfully.
Dec 03 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.105258846 +0000 UTC m=+0.242468363 container died 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-763bd738ce2462d984813cdbe3306af05f5eaac7499ea093a95e5079e6fe0b45-merged.mount: Deactivated successfully.
Dec 03 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.177163923 +0000 UTC m=+0.314373430 container remove 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
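The create / init / start / attach / died / remove sequence above is the full journald trace of one short-lived `podman run --rm` helper container that cephadm launches, uses, and discards within a second. A minimal sketch of the same one-shot pattern, assuming podman is available; the `--net`/`--ipc` flags mirror the ansible invocations recorded later in this section, while the echo payload is a placeholder (cephadm runs its own entrypoints):

    # One-shot container: journald logs create/init/start/attach on the way up
    # and died/remove on the way down, exactly as in the lines above.
    import subprocess

    result = subprocess.run(
        ["podman", "run", "--rm",        # "--rm" produces the "container remove" event
         "--net=host", "--ipc=host",     # flags copied from the ansible podman commands below
         "--entrypoint", "echo",         # placeholder entrypoint, not from this log
         "quay.io/ceph/ceph:v18", "ok"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())         # -> "ok"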
Dec 03 01:20:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:18 compute-0 systemd[1]: libpod-conmon-8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2.scope: Deactivated successfully.
Dec 03 01:20:18 compute-0 sudo[213726]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 03 01:20:18 compute-0 ceph-mon[192821]: Reconfiguring mgr.compute-0.rysove (unknown last config time)...
Dec 03 01:20:18 compute-0 ceph-mon[192821]: Reconfiguring daemon mgr.compute-0.rysove on compute-0
Dec 03 01:20:18 compute-0 ceph-mon[192821]: osdmap e21: 3 total, 3 up, 3 in
Dec 03 01:20:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 03 01:20:18 compute-0 festive_curran[213647]: pool 'backups' created
Dec 03 01:20:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 03 01:20:18 compute-0 sudo[213822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:18 compute-0 sudo[213822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:18 compute-0 sudo[213822]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:18 compute-0 systemd[1]: libpod-b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89.scope: Deactivated successfully.
Dec 03 01:20:18 compute-0 podman[213617]: 2025-12-03 01:20:18.463737994 +0000 UTC m=+1.497274226 container died b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-15a667f94fea81cd0c0836a2e9324becad684bb3f33879fe74a5630bdc5273c1-merged.mount: Deactivated successfully.
Dec 03 01:20:18 compute-0 podman[213617]: 2025-12-03 01:20:18.567243315 +0000 UTC m=+1.600779547 container remove b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:18 compute-0 systemd[1]: libpod-conmon-b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89.scope: Deactivated successfully.
Dec 03 01:20:18 compute-0 sudo[213602]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:18 compute-0 sudo[213848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:18 compute-0 sudo[213848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:18 compute-0 sudo[213848]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:18 compute-0 sudo[213883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:18 compute-0 sudo[213883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:18 compute-0 sudo[213883]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:18 compute-0 sudo[213943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljeydbejnljauutbzcfdugjotdvyyrwr ; /usr/bin/python3'
Dec 03 01:20:18 compute-0 sudo[213943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:18 compute-0 sudo[213921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:20:18 compute-0 sudo[213921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:19 compute-0 python3[213956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
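The ansible-ansible.legacy.command record above preserves the exact `podman run` line used for each pool. Note that `replicated_rule` is passed as a bare positional after the pool name, which the mon parses into the `erasure_code_profile` slot of the resulting mon_command (visible in the handle_command lines). A sketch replaying the logged call with subprocess; the argument list is verbatim from the log, only the wrapper is mine:

    # Replay of the pool-create command logged above; CMD is taken verbatim
    # from the ansible _raw_params.
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    CMD = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", FSID,
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "osd", "pool", "create", "images", "replicated_rule",
        "--autoscale-mode", "on",
    ]
    subprocess.run(CMD, check=True)   # prints: pool 'images' created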
Dec 03 01:20:19 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.116804065 +0000 UTC m=+0.088302071 container create ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.084957825 +0000 UTC m=+0.056455851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:19 compute-0 systemd[1]: Started libpod-conmon-ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba.scope.
Dec 03 01:20:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7288ef81c696e57eb619971cc21707f2074b82193ebf502a7ae0ba8302477478/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7288ef81c696e57eb619971cc21707f2074b82193ebf502a7ae0ba8302477478/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.296040019 +0000 UTC m=+0.267538095 container init ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.310696395 +0000 UTC m=+0.282194381 container start ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.315674482 +0000 UTC m=+0.287172498 container attach ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 03 01:20:19 compute-0 ceph-mon[192821]: pgmap v67: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:19 compute-0 ceph-mon[192821]: osdmap e22: 3 total, 3 up, 3 in
Dec 03 01:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 03 01:20:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 03 01:20:19 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
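The ceph-osd lines trace the new pool's placement group through peering: pg 4.0 is created in epoch 22, its primary transitions from Start to Primary, and on AllReplicasActivated it goes active (then active+clean in the next pgmap). A deployment script can wait on exactly that condition; a sketch, assuming the standard `ceph status --format json` output shape (the timeout and interval values are arbitrary choices, not from this log):

    # Poll until every PG is active+clean, the state the pgmap lines
    # above converge to after each pool is created.
    import json, subprocess, time

    def all_pgs_active_clean(timeout=300, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = json.loads(subprocess.check_output(
                ["ceph", "status", "--format", "json"]))
            pgmap = status["pgmap"]
            states = {s["state_name"]: s["count"]
                      for s in pgmap.get("pgs_by_state", [])}
            if states.get("active+clean", 0) == pgmap["num_pgs"]:
                return True
            time.sleep(interval)
        return False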
Dec 03 01:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 03 01:20:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:19 compute-0 podman[214058]: 2025-12-03 01:20:19.914515854 +0000 UTC m=+0.153877444 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:20:20 compute-0 podman[214058]: 2025-12-03 01:20:20.057916928 +0000 UTC m=+0.297278458 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
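Unlike the one-shot `podman run` traces elsewhere in this section, the container exec / exec_died pair above shows cephadm running a command inside the already-running mon container (ceph-3765feb2-...-mon-compute-0). The equivalent by hand, with the container name taken from the log; the payload command is an assumption, since journald does not record what was exec'd:

    # Exec inside the long-running mon container; journald records this
    # as a "container exec" / "container exec_died" pair.
    import subprocess

    MON = "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0"
    out = subprocess.check_output(
        ["podman", "exec", MON, "ceph", "--version"],  # placeholder payload
        text=True)
    print(out.strip())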
Dec 03 01:20:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v70: 4 pgs: 3 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 03 01:20:20 compute-0 loving_gates[213995]: pool 'images' created
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 03 01:20:20 compute-0 ceph-mon[192821]: osdmap e23: 3 total, 3 up, 3 in
Dec 03 01:20:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:20 compute-0 systemd[1]: libpod-ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba.scope: Deactivated successfully.
Dec 03 01:20:20 compute-0 podman[213959]: 2025-12-03 01:20:20.486079182 +0000 UTC m=+1.457577198 container died ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec 03 01:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7288ef81c696e57eb619971cc21707f2074b82193ebf502a7ae0ba8302477478-merged.mount: Deactivated successfully.
Dec 03 01:20:20 compute-0 podman[213959]: 2025-12-03 01:20:20.579277258 +0000 UTC m=+1.550775244 container remove ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:20 compute-0 systemd[1]: libpod-conmon-ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba.scope: Deactivated successfully.
Dec 03 01:20:20 compute-0 sudo[213943]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:20 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:20 compute-0 sudo[214213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvvgdnksiyjjujskvgjeybmrcrzvlzya ; /usr/bin/python3'
Dec 03 01:20:20 compute-0 sudo[214213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:20 compute-0 sudo[213921]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 956a7060-fb19-4748-b8fa-bc36867a42d8 does not exist
Dec 03 01:20:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c9aec575-cde9-47de-a8eb-c6fae78cdcae does not exist
Dec 03 01:20:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1f5bbf98-9013-4c7a-abb8-2c93e75050ee does not exist
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
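The burst of mgr-originated mon_commands above is the cephadm module's reconfigure pass for mgr.compute-0.rysove: it stores refreshed host and device metadata under mgr/cephadm/* config-keys, then regenerates the minimal ceph.conf and fetches the client.admin keyring it distributes to managed hosts. The two read-only queries can be reproduced directly with the standard ceph CLI; a sketch:

    # The two reads the mgr dispatches above, issued by hand.
    import subprocess

    # Minimal ceph.conf (fsid + mon_host) that cephadm pushes out to hosts.
    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True)

    # Admin keyring, as fetched by "auth get client.admin".
    admin_keyring = subprocess.check_output(
        ["ceph", "auth", "get", "client.admin"], text=True)

    print(minimal_conf)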
Dec 03 01:20:21 compute-0 python3[214218]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:21 compute-0 sudo[214219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:21 compute-0 sudo[214219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:21 compute-0 sudo[214219]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:21 compute-0 sshd-session[212667]: Connection closed by authenticating user root 193.32.162.157 port 45562 [preauth]
Dec 03 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.16363388 +0000 UTC m=+0.091429959 container create beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.121340471 +0000 UTC m=+0.049136550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:21 compute-0 systemd[1]: Started libpod-conmon-beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c.scope.
Dec 03 01:20:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:21 compute-0 sudo[214257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe81900b3b7686f602c366506a0a168ff3208ff96d5b76ab91178c9b6a38a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe81900b3b7686f602c366506a0a168ff3208ff96d5b76ab91178c9b6a38a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:21 compute-0 sudo[214257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:21 compute-0 sudo[214257]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.329372331 +0000 UTC m=+0.257168470 container init beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.352159811 +0000 UTC m=+0.279955910 container start beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.359404571 +0000 UTC m=+0.287200740 container attach beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:20:21 compute-0 sudo[214287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:21 compute-0 sudo[214287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:21 compute-0 sudo[214287]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 03 01:20:21 compute-0 ceph-mon[192821]: pgmap v70: 4 pgs: 3 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:21 compute-0 ceph-mon[192821]: osdmap e24: 3 total, 3 up, 3 in
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 03 01:20:21 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 03 01:20:21 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:21 compute-0 sudo[214314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:20:21 compute-0 sudo[214314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
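This sudo line is the OSD-preparation step: cephadm is re-invoked with CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group and drives `ceph-volume lvm batch` over three pre-created LVs, reading its config JSON from stdin (`--config-json -`; the payload is not recorded in the journal). A sketch replaying the call; the argument list is verbatim from the log, while CONFIG is a placeholder for the unrecorded stdin document:

    # Replay of the logged cephadm ceph-volume call.  CONFIG stands in for
    # the real JSON, which the log pipes via stdin and never shows.
    import json, subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    CONFIG = {"config": "# placeholder ceph.conf", "keyring": "# placeholder keyring"}

    subprocess.run(
        ["sudo", "/bin/python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", "quay.io/ceph/ceph@sha256:"
                    "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-",
         "--", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
         "--yes", "--no-systemd"],
        input=json.dumps(CONFIG), text=True, check=True)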
Dec 03 01:20:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 03 01:20:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.114672067 +0000 UTC m=+0.077210225 container create 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.079738011 +0000 UTC m=+0.042276199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:22 compute-0 systemd[1]: Started libpod-conmon-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope.
Dec 03 01:20:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v73: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.262778321 +0000 UTC m=+0.225316529 container init 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.281900239 +0000 UTC m=+0.244438387 container start 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.289283083 +0000 UTC m=+0.251821281 container attach 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:22 compute-0 great_gates[214413]: 167 167
Dec 03 01:20:22 compute-0 systemd[1]: libpod-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope: Deactivated successfully.
Dec 03 01:20:22 compute-0 conmon[214413]: conmon 65628ad404aedd3ced05 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope/container/memory.events
Dec 03 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.294416875 +0000 UTC m=+0.256955033 container died 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a83e669ed51f9890cacebd3a2d0623002fcb0fd2f3b6dd0f77fb5a5cd540e59b-merged.mount: Deactivated successfully.
Dec 03 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.383850167 +0000 UTC m=+0.346388315 container remove 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:20:22 compute-0 systemd[1]: libpod-conmon-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope: Deactivated successfully.
Dec 03 01:20:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 03 01:20:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 03 01:20:22 compute-0 nice_galileo[214280]: pool 'cephfs.cephfs.meta' created
Dec 03 01:20:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 03 01:20:22 compute-0 ceph-mon[192821]: osdmap e25: 3 total, 3 up, 3 in
Dec 03 01:20:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:22 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:22 compute-0 systemd[1]: libpod-beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c.scope: Deactivated successfully.
Dec 03 01:20:22 compute-0 podman[214237]: 2025-12-03 01:20:22.553114106 +0000 UTC m=+1.480910215 container died beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-25fe81900b3b7686f602c366506a0a168ff3208ff96d5b76ab91178c9b6a38a7-merged.mount: Deactivated successfully.
Dec 03 01:20:22 compute-0 sshd-session[214381]: Invalid user kapsch from 80.253.31.232 port 42226
Dec 03 01:20:22 compute-0 podman[214237]: 2025-12-03 01:20:22.628729426 +0000 UTC m=+1.556525495 container remove beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:22 compute-0 systemd[1]: libpod-conmon-beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c.scope: Deactivated successfully.
Dec 03 01:20:22 compute-0 sudo[214213]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.700491209 +0000 UTC m=+0.074522341 container create c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:20:22 compute-0 sshd-session[214381]: Received disconnect from 80.253.31.232 port 42226:11: Bye Bye [preauth]
Dec 03 01:20:22 compute-0 sshd-session[214381]: Disconnected from invalid user kapsch 80.253.31.232 port 42226 [preauth]
Dec 03 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.673262837 +0000 UTC m=+0.047293979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:22 compute-0 systemd[1]: Started libpod-conmon-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope.
Dec 03 01:20:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.868328668 +0000 UTC m=+0.242359820 container init c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.887074496 +0000 UTC m=+0.261105618 container start c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.89262535 +0000 UTC m=+0.266656462 container attach c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:20:22 compute-0 sudo[214490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iubcjmknqcwwslvmehnbwmtvyfqkmhup ; /usr/bin/python3'
Dec 03 01:20:22 compute-0 sudo[214490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:23 compute-0 python3[214493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.211629497 +0000 UTC m=+0.086926864 container create cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.178416649 +0000 UTC m=+0.053714076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:23 compute-0 systemd[1]: Started libpod-conmon-cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0.scope.
Dec 03 01:20:23 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44867f99faadb6b42b9ac795cbf16a9642e4481ec3ee1683e5d8858c46267282/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44867f99faadb6b42b9ac795cbf16a9642e4481ec3ee1683e5d8858c46267282/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.372499124 +0000 UTC m=+0.247796471 container init cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.388036073 +0000 UTC m=+0.263333430 container start cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.396153608 +0000 UTC m=+0.271450965 container attach cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 03 01:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 03 01:20:23 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 03 01:20:23 compute-0 ceph-mon[192821]: pgmap v73: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:23 compute-0 ceph-mon[192821]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:23 compute-0 ceph-mon[192821]: osdmap e26: 3 total, 3 up, 3 in
Dec 03 01:20:23 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 03 01:20:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:24 compute-0 musing_bell[214467]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:20:24 compute-0 musing_bell[214467]: --> relative data size: 1.0
Dec 03 01:20:24 compute-0 musing_bell[214467]: --> All data devices are unavailable
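The three musing_bell lines are ceph-volume batch-report output from a cephadm run: it was handed three LVM data devices and found all of them "unavailable", i.e. already consumed (the lvm list output later in the log shows each LV tagged with ceph.osd_id 0 through 2), so there is nothing new to deploy. One way to confirm this from the host, sketched with LVM's JSON reporting (standard lvs flags; the ceph.* tags are the same ones shown in the listing below):

import json, subprocess

# List logical volumes with their tags; LVs already tagged by ceph-volume
# are the ones the batch report calls "unavailable".
out = subprocess.run(
    ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
    check=True, capture_output=True, text=True,
).stdout
for lv in json.loads(out)["report"][0]["lv"]:
    if "ceph.osd_id" in lv["lv_tags"]:
        print(f"{lv['vg_name']}/{lv['lv_name']} already holds an OSD")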
Dec 03 01:20:24 compute-0 systemd[1]: libpod-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope: Deactivated successfully.
Dec 03 01:20:24 compute-0 systemd[1]: libpod-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope: Consumed 1.215s CPU time.
Dec 03 01:20:24 compute-0 podman[214444]: 2025-12-03 01:20:24.170385937 +0000 UTC m=+1.544417069 container died c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v76: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964-merged.mount: Deactivated successfully.
Dec 03 01:20:24 compute-0 podman[214444]: 2025-12-03 01:20:24.287835093 +0000 UTC m=+1.661866225 container remove c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:20:24 compute-0 systemd[1]: libpod-conmon-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope: Deactivated successfully.
Dec 03 01:20:24 compute-0 sudo[214314]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:24 compute-0 sudo[214573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:24 compute-0 sudo[214573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:24 compute-0 sudo[214573]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 03 01:20:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 03 01:20:24 compute-0 interesting_lalande[214509]: pool 'cephfs.cephfs.data' created
Dec 03 01:20:24 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 03 01:20:24 compute-0 ceph-mon[192821]: osdmap e27: 3 total, 3 up, 3 in
Dec 03 01:20:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 03 01:20:24 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:24 compute-0 systemd[1]: libpod-cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0.scope: Deactivated successfully.
Dec 03 01:20:24 compute-0 podman[214494]: 2025-12-03 01:20:24.590933761 +0000 UTC m=+1.466231158 container died cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 01:20:24 compute-0 sudo[214598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:24 compute-0 sudo[214598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-44867f99faadb6b42b9ac795cbf16a9642e4481ec3ee1683e5d8858c46267282-merged.mount: Deactivated successfully.
Dec 03 01:20:24 compute-0 sudo[214598]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:24 compute-0 podman[214494]: 2025-12-03 01:20:24.707647837 +0000 UTC m=+1.582945204 container remove cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:24 compute-0 systemd[1]: libpod-conmon-cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0.scope: Deactivated successfully.
Dec 03 01:20:24 compute-0 sudo[214490]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:24 compute-0 sudo[214635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:24 compute-0 sudo[214635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:24 compute-0 sudo[214635]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:24 compute-0 sudo[214660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:20:24 compute-0 sudo[214660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:24 compute-0 sudo[214708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tegjqvsmvzljgqbcxkwyvjyyktvgkvqh ; /usr/bin/python3'
Dec 03 01:20:25 compute-0 sudo[214708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:25 compute-0 python3[214710]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
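This task tags the vms pool with the rbd application; the same is done for volumes at 01:20:27 below. These tags are what clear the recurring POOL_APP_NOT_ENABLED health warning (4 pools at 01:20:23, then 5 at 01:20:28 once the CephFS pools exist). A sketch covering all four RBD pools named in the pg_autoscaler output further down, reusing the hypothetical ceph_in_container helper from the earlier sketch:

# Enable the 'rbd' application on each OpenStack RBD pool, mirroring the
# one-pool-per-task pattern visible in the log.
for pool in ("vms", "volumes", "backups", "images"):
    ceph_in_container("osd", "pool", "application", "enable", pool, "rbd")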
Dec 03 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.262904334 +0000 UTC m=+0.070898670 container create 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:20:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:25 compute-0 systemd[1]: Started libpod-conmon-88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784.scope.
Dec 03 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.240942707 +0000 UTC m=+0.048937093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8caf3d13fb4dbaabd1d462d16482e92982f430aa912cf09951e1d3baf01c219b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8caf3d13fb4dbaabd1d462d16482e92982f430aa912cf09951e1d3baf01c219b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.421888439 +0000 UTC m=+0.229882815 container init 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.44110795 +0000 UTC m=+0.249102346 container start 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.888360862 +0000 UTC m=+0.696355218 container attach 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 03 01:20:25 compute-0 ceph-mon[192821]: pgmap v76: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:25 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 03 01:20:25 compute-0 ceph-mon[192821]: osdmap e28: 3 total, 3 up, 3 in
Dec 03 01:20:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 03 01:20:25 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 03 01:20:25 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:25 compute-0 podman[214783]: 2025-12-03 01:20:25.998763874 +0000 UTC m=+0.062950711 container create 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:20:26 compute-0 systemd[1]: Started libpod-conmon-7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7.scope.
Dec 03 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:25.980739026 +0000 UTC m=+0.044925903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.144814621 +0000 UTC m=+0.209001498 container init 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.154733285 +0000 UTC m=+0.218920172 container start 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.160459743 +0000 UTC m=+0.224646590 container attach 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:20:26 compute-0 priceless_satoshi[214802]: 167 167
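The bare "167 167" from priceless_satoshi looks like cephadm probing the ceph uid/gid inside the image (167:167 is the ceph user and group in the container). That reading is an assumption; a sketch of such a probe:

import subprocess

# Assumption: a cephadm-style uid/gid probe. stat runs inside the image and
# prints the owner of /var/lib/ceph, matching the "167 167" above.
print(subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat",
     "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True).stdout)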
Dec 03 01:20:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Dec 03 01:20:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 03 01:20:26 compute-0 systemd[1]: libpod-7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7.scope: Deactivated successfully.
Dec 03 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.167795976 +0000 UTC m=+0.231982893 container died 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:20:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a346cc584e143eee2b78bb4ac669d0ccfdfa5afdb9d4258fabd06ec7519054-merged.mount: Deactivated successfully.
Dec 03 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.227879407 +0000 UTC m=+0.292066234 container remove 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:20:26 compute-0 systemd[1]: libpod-conmon-7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7.scope: Deactivated successfully.
Dec 03 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.514859019 +0000 UTC m=+0.090119462 container create 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.476208261 +0000 UTC m=+0.051468754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:26 compute-0 systemd[1]: Started libpod-conmon-52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225.scope.
Dec 03 01:20:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.656330309 +0000 UTC m=+0.231590762 container init 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.675215451 +0000 UTC m=+0.250475864 container start 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.681804943 +0000 UTC m=+0.257065446 container attach 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 03 01:20:26 compute-0 ceph-mon[192821]: osdmap e29: 3 total, 3 up, 3 in
Dec 03 01:20:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 03 01:20:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 03 01:20:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 03 01:20:26 compute-0 priceless_black[214762]: enabled application 'rbd' on pool 'vms'
Dec 03 01:20:26 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 03 01:20:26 compute-0 systemd[1]: libpod-88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784.scope: Deactivated successfully.
Dec 03 01:20:26 compute-0 podman[214729]: 2025-12-03 01:20:26.961369911 +0000 UTC m=+1.769364247 container died 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8caf3d13fb4dbaabd1d462d16482e92982f430aa912cf09951e1d3baf01c219b-merged.mount: Deactivated successfully.
Dec 03 01:20:27 compute-0 podman[214729]: 2025-12-03 01:20:27.022868341 +0000 UTC m=+1.830862677 container remove 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:20:27 compute-0 systemd[1]: libpod-conmon-88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784.scope: Deactivated successfully.
Dec 03 01:20:27 compute-0 sudo[214708]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:27 compute-0 sudo[214881]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlsfqhltdrdxxtajfnjxrkuiammbdvci ; /usr/bin/python3'
Dec 03 01:20:27 compute-0 sudo[214881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:27 compute-0 python3[214883]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]: {
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:     "0": [
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:         {
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "devices": [
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "/dev/loop3"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             ],
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_name": "ceph_lv0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_size": "21470642176",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "name": "ceph_lv0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "tags": {
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.crush_device_class": "",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.encrypted": "0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osd_id": "0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.type": "block",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.vdo": "0"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             },
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "type": "block",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "vg_name": "ceph_vg0"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:         }
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:     ],
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:     "1": [
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:         {
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "devices": [
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "/dev/loop4"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             ],
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_name": "ceph_lv1",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_size": "21470642176",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "name": "ceph_lv1",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "tags": {
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.crush_device_class": "",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.encrypted": "0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osd_id": "1",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.type": "block",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.vdo": "0"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             },
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "type": "block",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "vg_name": "ceph_vg1"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:         }
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:     ],
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:     "2": [
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:         {
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "devices": [
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "/dev/loop5"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             ],
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_name": "ceph_lv2",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_size": "21470642176",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "name": "ceph_lv2",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "tags": {
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.crush_device_class": "",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.encrypted": "0",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osd_id": "2",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.type": "block",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:                 "ceph.vdo": "0"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             },
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "type": "block",
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:             "vg_name": "ceph_vg2"
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:         }
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]:     ]
Dec 03 01:20:27 compute-0 nifty_visvesvaraya[214842]: }
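The JSON block from nifty_visvesvaraya is the output of the `ceph-volume ... lvm list --format json` command sudo'd at 01:20:24: a map of OSD id to logical volumes, with the authoritative metadata carried in the ceph.* LV tags (cluster fsid, OSD fsid, block device, encryption flag). A short sketch that condenses it to one line per OSD (reading from a saved file is an assumption; the key names are exactly those in the log):

import json

with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: fsid={tags['ceph.osd_fsid']} "
              f"lv={lv['lv_path']} devices={','.join(lv['devices'])}")

# osd.0: fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3
# ...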
Dec 03 01:20:27 compute-0 systemd[1]: libpod-52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225.scope: Deactivated successfully.
Dec 03 01:20:27 compute-0 podman[214826]: 2025-12-03 01:20:27.565365585 +0000 UTC m=+1.140625988 container died 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.59666839 +0000 UTC m=+0.084963329 container create 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187-merged.mount: Deactivated successfully.
Dec 03 01:20:27 compute-0 systemd[1]: Started libpod-conmon-428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9.scope.
Dec 03 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.569618553 +0000 UTC m=+0.057913512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:27 compute-0 podman[214826]: 2025-12-03 01:20:27.666186781 +0000 UTC m=+1.241447224 container remove 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:27 compute-0 systemd[1]: libpod-conmon-52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225.scope: Deactivated successfully.
Dec 03 01:20:27 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a43e13a2560c79c53fd0b8f93079cad9dd594fed5248371f9d542eded72656/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a43e13a2560c79c53fd0b8f93079cad9dd594fed5248371f9d542eded72656/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:27 compute-0 sudo[214660]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.742893541 +0000 UTC m=+0.231188510 container init 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.755115909 +0000 UTC m=+0.243410848 container start 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.759893901 +0000 UTC m=+0.248188870 container attach 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:27 compute-0 sudo[214917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:27 compute-0 sudo[214917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:27 compute-0 sudo[214917]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:27 compute-0 ceph-mon[192821]: pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 03 01:20:27 compute-0 ceph-mon[192821]: osdmap e30: 3 total, 3 up, 3 in
Dec 03 01:20:27 compute-0 sudo[214943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:27 compute-0 sudo[214943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:27 compute-0 sudo[214943]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:28 compute-0 sudo[214968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:28 compute-0 sudo[214968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:28 compute-0 sudo[214968]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:20:28
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Some PGs (0.142857) are inactive; try again later
Dec 03 01:20:28 compute-0 sshd-session[214069]: Received disconnect from 45.78.219.140 port 51482:11: Bye Bye [preauth]
Dec 03 01:20:28 compute-0 sshd-session[214069]: Disconnected from 45.78.219.140 port 51482 [preauth]
Dec 03 01:20:28 compute-0 sudo[214994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:20:28 compute-0 sudo[214994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
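Each pg_autoscaler pair of lines computes a pool's share of raw capacity, multiplies in the bias, scales it to a raw PG target, and then quantizes: the empty RBD and CephFS pools round up to 32, while .mgr keeps 1. (For .mgr the multiplier works out to roughly 300, consistent with three OSDs at the default mon_target_pg_per_osd of 100.) A simplified sketch of just the quantize step (the real heuristics, including the don't-change-unless-off-by-3x rule, live in the mgr's pg_autoscaler module; the per-pool floor values here are inferred from these log lines):

import math

def quantize_pg_target(raw: float, floor: int = 32) -> int:
    # Round a raw PG target to the nearest power of two, never below the
    # pool's floor (32 for ordinary pools here, 1 for '.mgr').
    if raw < 1:
        return floor
    return max(floor, 2 ** round(math.log2(raw)))

assert quantize_pg_target(0.0) == 32                # 'vms', 'volumes', ...
assert quantize_pg_target(0.0021557, floor=1) == 1  # '.mgr'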
Dec 03 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Dec 03 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 03 01:20:28 compute-0 podman[215074]: 2025-12-03 01:20:28.9023707 +0000 UTC m=+0.080758683 container create 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 03 01:20:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 03 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 03 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 03 01:20:28 compute-0 nervous_allen[214913]: enabled application 'rbd' on pool 'volumes'
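[annotation] The nervous_allen line is the client's stdout for the 'osd pool application enable' command whose dispatch and finish are audited above. The same call can be scripted directly; a hedged sketch, assuming a reachable cluster and the usual admin keyring, neither of which this log guarantees:

    import subprocess

    def enable_rbd(pool: str) -> None:
        """Enable the 'rbd' application on a pool, as client.admin did above."""
        subprocess.run(
            ["ceph", "osd", "pool", "application", "enable", pool, "rbd"],
            check=True,
        )

    # enable_rbd("volumes")  # prints: enabled application 'rbd' on pool 'volumes'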
Dec 03 01:20:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 03 01:20:28 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev e6b4f978-1441-4303-b9a2-cf3500d39b60 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 03 01:20:28 compute-0 podman[215074]: 2025-12-03 01:20:28.874462939 +0000 UTC m=+0.052850932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:28 compute-0 systemd[1]: Started libpod-conmon-148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff.scope.
Dec 03 01:20:29 compute-0 podman[214888]: 2025-12-03 01:20:29.005051558 +0000 UTC m=+1.493346517 container died 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:29 compute-0 systemd[1]: libpod-428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9.scope: Deactivated successfully.
Dec 03 01:20:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a43e13a2560c79c53fd0b8f93079cad9dd594fed5248371f9d542eded72656-merged.mount: Deactivated successfully.
Dec 03 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.059321578 +0000 UTC m=+0.237709561 container init 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.070627301 +0000 UTC m=+0.249015264 container start 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:29 compute-0 agitated_kowalevski[215091]: 167 167
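[annotation] The bare '167 167' printed by agitated_kowalevski is a uid/gid probe: 167 is the ceph user and group inside the image, which deployment tooling reads back before chowning data directories. A host-side equivalent of the lookup, assuming a local 'ceph' account exists (an assumption, not something this log shows):

    import grp
    import pwd

    # Resolve the ceph uid/gid the way the probe's "167 167" output is consumed.
    ceph_uid = pwd.getpwnam("ceph").pw_uid   # 167 on Ceph container images
    ceph_gid = grp.getgrnam("ceph").gr_gid   # 167 likewise
    print(ceph_uid, ceph_gid)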
Dec 03 01:20:29 compute-0 systemd[1]: libpod-148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff.scope: Deactivated successfully.
Dec 03 01:20:29 compute-0 podman[214888]: 2025-12-03 01:20:29.097221736 +0000 UTC m=+1.585516705 container remove 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.109104774 +0000 UTC m=+0.287492767 container attach 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.109458374 +0000 UTC m=+0.287846337 container died 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:29 compute-0 systemd[1]: libpod-conmon-428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9.scope: Deactivated successfully.
Dec 03 01:20:29 compute-0 sudo[214881]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0863024cf891f2812956fcf4c6309ad0f99b1f32c160d6bffb0b5b67ee9518a1-merged.mount: Deactivated successfully.
Dec 03 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.175847819 +0000 UTC m=+0.354235812 container remove 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:20:29 compute-0 systemd[1]: libpod-conmon-148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff.scope: Deactivated successfully.
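[annotation] agitated_kowalevski above shows the full trail a short-lived helper container leaves in the journal: create, init, start, attach, died, remove. A sketch that recovers that sequence for one container name from podman lines of this shape (the regex is fitted to this log format, not a podman API):

    import re

    EVENT_RE = re.compile(
        r"container (?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]+) \(image=[^,]+, name=(?P<name>[^,)]+)"
    )

    def lifecycle(lines, name):
        """Yield podman lifecycle events for one container, in log order."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m and m.group("name") == name:
                yield m.group("event")

    sample = ("2025-12-03 01:20:28.9 +0000 UTC m=+0.08 container create 148f23d5531e "
              "(image=quay.io/ceph/ceph@sha256:1b91, name=agitated_kowalevski, CEPH_REF=reef)")
    print(list(lifecycle([sample], "agitated_kowalevski")))  # ['create']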
Dec 03 01:20:29 compute-0 sudo[215144]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sznkdgsfeywumurhwjlvrcjtvkltgcnt ; /usr/bin/python3'
Dec 03 01:20:29 compute-0 sudo[215144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.425666344 +0000 UTC m=+0.077023530 container create 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:20:29 compute-0 systemd[1]: Started libpod-conmon-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope.
Dec 03 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.392896608 +0000 UTC m=+0.044253834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.562483726 +0000 UTC m=+0.213840992 container init 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:29 compute-0 python3[215151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
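[annotation] Unpacked, the ansible-command _raw_params above is a throwaway 'podman run --rm' wrapper around the ceph CLI. The same invocation reconstructed as an argument list (values copied from the log line; running it requires podman and a live cluster):

    import subprocess

    # Reconstructed from _raw_params: run `ceph ... osd pool application enable
    # backups rbd` inside a disposable ceph:v18 container.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "osd", "pool", "application", "enable", "backups", "rbd",
    ]
    # subprocess.run(cmd, check=True)  # needs podman and a reachable cluster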
Dec 03 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.581681946 +0000 UTC m=+0.233039132 container start 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.586970212 +0000 UTC m=+0.238327478 container attach 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.695247905 +0000 UTC m=+0.096087887 container create 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.652318129 +0000 UTC m=+0.053158171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:29 compute-0 podman[158098]: time="2025-12-03T01:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:20:29 compute-0 systemd[1]: Started libpod-conmon-148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e.scope.
Dec 03 01:20:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6006de0be103f15edecc6d18b113826f1212d2a94d1fb578be69cb2e402451/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6006de0be103f15edecc6d18b113826f1212d2a94d1fb578be69cb2e402451/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.833584269 +0000 UTC m=+0.234424271 container init 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.851914725 +0000 UTC m=+0.252754707 container start 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.857439118 +0000 UTC m=+0.258279130 container attach 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32224 "" "Go-http-client/1.1"
Dec 03 01:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6666 "" "Go-http-client/1.1"
Dec 03 01:20:29 compute-0 ceph-mon[192821]: pgmap v81: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:29 compute-0 ceph-mon[192821]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 03 01:20:29 compute-0 ceph-mon[192821]: osdmap e31: 3 total, 3 up, 3 in
Dec 03 01:20:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 03 01:20:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 03 01:20:29 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 03 01:20:29 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 78db669c-15ce-4651-8c69-cd7c7014c693 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 03 01:20:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:20:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
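[annotation] Note the two-step resize visible here: the mgr first raises each pool's pg_num target, then separately steps pg_num_actual as the new PGs become ready. pg_num_actual is an internal stepping knob; from outside, a hedged way to poll a pool's pg_num while the autoscaler works (standard ceph CLI, invocation assumed):

    import subprocess

    def pool_pg_num(pool: str) -> str:
        """Ask the cluster for a pool's current pg_num."""
        out = subprocess.run(
            ["ceph", "osd", "pool", "get", pool, "pg_num"],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()  # e.g. "pg_num: 32"

    # print(pool_pg_num("vms"))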
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 03 01:20:30 compute-0 sharp_rubin[215168]: {
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "osd_id": 2,
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "type": "bluestore"
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:     },
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "osd_id": 1,
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "type": "bluestore"
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:     },
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "osd_id": 0,
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:         "type": "bluestore"
Dec 03 01:20:30 compute-0 sharp_rubin[215168]:     }
Dec 03 01:20:30 compute-0 sharp_rubin[215168]: }
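[annotation] The JSON emitted by sharp_rubin enumerates this node's three bluestore OSDs keyed by osd_uuid; its shape matches ceph-volume raw list output. A sketch that reduces it to an osd_id-to-device table (payload trimmed to one entry from the log):

    import json

    raw_list = json.loads("""
    {
      "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
        "osd_id": 2,
        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
        "type": "bluestore"
      }
    }
    """)

    for osd_uuid, meta in sorted(raw_list.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")
    # -> osd.2: /dev/mapper/ceph_vg2-ceph_lv2 (bluestore)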
Dec 03 01:20:30 compute-0 systemd[1]: libpod-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope: Deactivated successfully.
Dec 03 01:20:30 compute-0 systemd[1]: libpod-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope: Consumed 1.135s CPU time.
Dec 03 01:20:30 compute-0 podman[215152]: 2025-12-03 01:20:30.738502751 +0000 UTC m=+1.389859977 container died 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 01:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9-merged.mount: Deactivated successfully.
Dec 03 01:20:30 compute-0 podman[215152]: 2025-12-03 01:20:30.852192563 +0000 UTC m=+1.503549769 container remove 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:30 compute-0 systemd[1]: libpod-conmon-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope: Deactivated successfully.
Dec 03 01:20:30 compute-0 sudo[214994]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 03 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 03 01:20:30 compute-0 relaxed_nash[215188]: enabled application 'rbd' on pool 'backups'
Dec 03 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:30 compute-0 ceph-mon[192821]: osdmap e32: 3 total, 3 up, 3 in
Dec 03 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 03 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:31 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.348283768s) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active pruub 43.424728394s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:31 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 03 01:20:31 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.348283768s) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown pruub 43.424728394s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:31 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 45b2a057-9edd-4848-82cf-8f672677f39f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 03 01:20:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:20:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:31 compute-0 systemd[1]: libpod-148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e.scope: Deactivated successfully.
Dec 03 01:20:31 compute-0 podman[215173]: 2025-12-03 01:20:31.027430467 +0000 UTC m=+1.428270479 container died 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd6006de0be103f15edecc6d18b113826f1212d2a94d1fb578be69cb2e402451-merged.mount: Deactivated successfully.
Dec 03 01:20:31 compute-0 sudo[215251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:31 compute-0 podman[215173]: 2025-12-03 01:20:31.110259307 +0000 UTC m=+1.511099279 container remove 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 01:20:31 compute-0 sudo[215251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:31 compute-0 sudo[215251]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:31 compute-0 systemd[1]: libpod-conmon-148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e.scope: Deactivated successfully.
Dec 03 01:20:31 compute-0 sudo[215144]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:31 compute-0 sudo[215289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:20:31 compute-0 sudo[215289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:31 compute-0 sudo[215289]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:20:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:20:31 compute-0 openstack_network_exporter[160250]: 
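[annotation] The exporter's ERROR lines share one root cause: it drives ovs-appctl/ovn-appctl over control sockets that do not exist on this compute node. A hypothetical pre-flight check, assuming the conventional /var/run/openvswitch run directory (a path this log does not confirm):

    import glob

    # Hypothetical: ovsdb-server normally exposes a control socket named
    # ovsdb-server.<pid>.ctl under the (assumed) default run directory.
    sockets = glob.glob("/var/run/openvswitch/ovsdb-server.*.ctl")
    if not sockets:
        print("no control socket files found for the ovs db server")  # same failure as above
    else:
        print("ovsdb-server control socket:", sockets[0])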
Dec 03 01:20:31 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=9.891637802s) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active pruub 52.819717407s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:31 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=9.891637802s) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown pruub 52.819717407s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:31 compute-0 sudo[215337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkxvzwpxmsnvfpbdnplaetbehjucpfxt ; /usr/bin/python3'
Dec 03 01:20:31 compute-0 sudo[215337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 03 01:20:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 03 01:20:32 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 9c7b44ee-6243-4701-a66a-ed63a204737b (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 03 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
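[annotation] The osd.2 burst above is one line per placement group of pool 2 (2.1 through 2.1f) entering peering after pg_num jumped to 32 at epoch 33. A sketch that tallies such pg[...] entries per pool from journal lines of this shape:

    import re
    from collections import Counter

    PG_RE = re.compile(r"pg\[(?P<pool>\d+)\.(?P<shard>[0-9a-f]+)\(")

    def pgs_per_pool(lines):
        """Count pg[...] log entries per pool id."""
        counts = Counter()
        for line in lines:
            m = PG_RE.search(line)
            if m:
                counts[int(m.group("pool"))] += 1
        return counts

    sample = "osd.2 pg_epoch: 34 pg[2.1f( empty ... )] state<Start>: transitioning to Primary"
    print(pgs_per_pool([sample]))  # Counter({2: 1})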
Dec 03 01:20:32 compute-0 ceph-mon[192821]: pgmap v84: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 03 01:20:32 compute-0 ceph-mon[192821]: osdmap e33: 3 total, 3 up, 3 in
Dec 03 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=33/34 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:32 compute-0 podman[215339]: 2025-12-03 01:20:32.111079598 +0000 UTC m=+0.130752475 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:20:32 compute-0 podman[215340]: 2025-12-03 01:20:32.111248773 +0000 UTC m=+0.128078551 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm)
Dec 03 01:20:32 compute-0 podman[215341]: 2025-12-03 01:20:32.141279663 +0000 UTC m=+0.153848804 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 03 01:20:32 compute-0 python3[215353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:32 compute-0 podman[215342]: 2025-12-03 01:20:32.187036527 +0000 UTC m=+0.181673032 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec 03 01:20:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.217957852 +0000 UTC m=+0.059799124 container create f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:20:32 compute-0 systemd[1]: Started libpod-conmon-f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6.scope.
Dec 03 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.192640212 +0000 UTC m=+0.034481494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18aab917db20725ff4ff03ca3a616a91b4727386fad80b504d170195dd93ad1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18aab917db20725ff4ff03ca3a616a91b4727386fad80b504d170195dd93ad1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.375769134 +0000 UTC m=+0.217610436 container init f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.393946936 +0000 UTC m=+0.235788208 container start f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.398941594 +0000 UTC m=+0.240782936 container attach f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:32 compute-0 sshd-session[214288]: Connection closed by authenticating user root 193.32.162.157 port 36064 [preauth]
Dec 03 01:20:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Dec 03 01:20:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Dec 03 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Dec 03 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 03 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 03 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 03 01:20:33 compute-0 loving_ellis[215437]: enabled application 'rbd' on pool 'images'
Dec 03 01:20:33 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 03 01:20:33 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 38621d60-c54c-4259-aa9c-3be614ce8469 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 03 01:20:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:33 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=12.498203278s) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active pruub 49.587493896s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:33 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=12.498203278s) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown pruub 49.587493896s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: osdmap e34: 3 total, 3 up, 3 in
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:33 compute-0 ceph-mon[192821]: 2.1 deep-scrub starts
Dec 03 01:20:33 compute-0 ceph-mon[192821]: 2.1 deep-scrub ok
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 03 01:20:33 compute-0 ceph-mon[192821]: osdmap e35: 3 total, 3 up, 3 in
Dec 03 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:20:33 compute-0 systemd[1]: libpod-f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6.scope: Deactivated successfully.
Dec 03 01:20:33 compute-0 podman[215418]: 2025-12-03 01:20:33.050151424 +0000 UTC m=+0.891992726 container died f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f18aab917db20725ff4ff03ca3a616a91b4727386fad80b504d170195dd93ad1-merged.mount: Deactivated successfully.
Dec 03 01:20:33 compute-0 podman[215418]: 2025-12-03 01:20:33.137888839 +0000 UTC m=+0.979730111 container remove f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:20:33 compute-0 systemd[1]: libpod-conmon-f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6.scope: Deactivated successfully.
Dec 03 01:20:33 compute-0 sudo[215337]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:33 compute-0 sudo[215499]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkyloywifriwrfateqgaboewmzvuypv ; /usr/bin/python3'
Dec 03 01:20:33 compute-0 sudo[215499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:33 compute-0 ceph-mgr[193109]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec 03 01:20:33 compute-0 python3[215501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.613158886 +0000 UTC m=+0.065704787 container create df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.58761553 +0000 UTC m=+0.040161411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:33 compute-0 systemd[1]: Started libpod-conmon-df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf.scope.
Dec 03 01:20:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 03 01:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05140ca0b8293fcb4c45a6c5cc9980c8bb6e17c98dad3dbee40b836efb9b3863/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05140ca0b8293fcb4c45a6c5cc9980c8bb6e17c98dad3dbee40b836efb9b3863/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 03 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.781497039 +0000 UTC m=+0.234042940 container init df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.801099431 +0000 UTC m=+0.253645322 container start df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.807818046 +0000 UTC m=+0.260363987 container attach df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 03 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 03 01:20:34 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 26e960c2-8609-46ff-82f5-0728fd2a751c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev e6b4f978-1441-4303-b9a2-cf3500d39b60 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event e6b4f978-1441-4303-b9a2-cf3500d39b60 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 78db669c-15ce-4651-8c69-cd7c7014c693 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 78db669c-15ce-4651-8c69-cd7c7014c693 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 45b2a057-9edd-4848-82cf-8f672677f39f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 45b2a057-9edd-4848-82cf-8f672677f39f (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 9c7b44ee-6243-4701-a66a-ed63a204737b (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 9c7b44ee-6243-4701-a66a-ed63a204737b (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 38621d60-c54c-4259-aa9c-3be614ce8469 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 38621d60-c54c-4259-aa9c-3be614ce8469 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 26e960c2-8609-46ff-82f5-0728fd2a751c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 26e960c2-8609-46ff-82f5-0728fd2a751c (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-mon[192821]: pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:34 compute-0 ceph-mon[192821]: 2.2 scrub starts
Dec 03 01:20:34 compute-0 ceph-mon[192821]: 2.2 scrub ok
Dec 03 01:20:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:20:34 compute-0 ceph-mon[192821]: osdmap e36: 3 total, 3 up, 3 in
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 1 peering, 93 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Dec 03 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 03 01:20:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 03 01:20:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:35 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 03 01:20:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 03 01:20:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 03 01:20:35 compute-0 naughty_satoshi[215517]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 03 01:20:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 03 01:20:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37 pruub=14.834046364s) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active pruub 61.296497345s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37 pruub=14.834046364s) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown pruub 61.296497345s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 systemd[1]: libpod-df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf.scope: Deactivated successfully.
Dec 03 01:20:35 compute-0 podman[215545]: 2025-12-03 01:20:35.210033044 +0000 UTC m=+0.052710858 container died df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-05140ca0b8293fcb4c45a6c5cc9980c8bb6e17c98dad3dbee40b836efb9b3863-merged.mount: Deactivated successfully.
Dec 03 01:20:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:35 compute-0 podman[215545]: 2025-12-03 01:20:35.299770594 +0000 UTC m=+0.142448368 container remove df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 03 01:20:35 compute-0 systemd[1]: libpod-conmon-df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf.scope: Deactivated successfully.
Dec 03 01:20:35 compute-0 sudo[215499]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:35 compute-0 sudo[215584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfyosulelftegcgtskzbbebfwnfqwzr ; /usr/bin/python3'
Dec 03 01:20:35 compute-0 sudo[215584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35 pruub=15.796587944s) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active pruub 69.264137268s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35 pruub=15.796587944s) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown pruub 69.264137268s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:35 compute-0 python3[215586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:35 compute-0 podman[215587]: 2025-12-03 01:20:35.836036787 +0000 UTC m=+0.090663767 container create d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:20:35 compute-0 podman[215587]: 2025-12-03 01:20:35.805429221 +0000 UTC m=+0.060056241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:35 compute-0 systemd[1]: Started libpod-conmon-d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee.scope.
Dec 03 01:20:35 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a39640ca3fd2ad93ebdbce36d98d33b225a9f773d8a189c4c85c93e4e07e35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a39640ca3fd2ad93ebdbce36d98d33b225a9f773d8a189c4c85c93e4e07e35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:35 compute-0 podman[215587]: 2025-12-03 01:20:35.99244145 +0000 UTC m=+0.247068480 container init d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:20:36 compute-0 podman[215587]: 2025-12-03 01:20:36.003513086 +0000 UTC m=+0.258140066 container start d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 01:20:36 compute-0 podman[215587]: 2025-12-03 01:20:36.010474188 +0000 UTC m=+0.265101248 container attach d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:20:36 compute-0 ceph-mon[192821]: pgmap v90: 131 pgs: 1 peering, 93 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:36 compute-0 ceph-mon[192821]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:20:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:20:36 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 03 01:20:36 compute-0 ceph-mon[192821]: osdmap e37: 3 total, 3 up, 3 in
Dec 03 01:20:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 03 01:20:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 03 01:20:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=11.445006371s) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 65.343589783s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=11.445006371s) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown pruub 65.343589783s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=35/38 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 2 peering, 124 unknown, 67 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:36 compute-0 sshd-session[215542]: Invalid user frontend from 103.146.202.174 port 45126
Dec 03 01:20:36 compute-0 sshd-session[215542]: Received disconnect from 103.146.202.174 port 45126:11: Bye Bye [preauth]
Dec 03 01:20:36 compute-0 sshd-session[215542]: Disconnected from invalid user frontend 103.146.202.174 port 45126 [preauth]
Dec 03 01:20:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Dec 03 01:20:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 03 01:20:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 03 01:20:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 03 01:20:36 compute-0 systemd[194622]: Starting Mark boot as successful...
Dec 03 01:20:36 compute-0 systemd[194622]: Finished Mark boot as successful.
Dec 03 01:20:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 03 01:20:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 03 01:20:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 03 01:20:37 compute-0 unruffled_hopper[215602]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 03 01:20:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 03 01:20:37 compute-0 ceph-mon[192821]: osdmap e38: 3 total, 3 up, 3 in
Dec 03 01:20:37 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 03 01:20:37 compute-0 ceph-mon[192821]: 2.3 scrub starts
Dec 03 01:20:37 compute-0 ceph-mon[192821]: 2.3 scrub ok
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1a( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.10( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.16( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.12( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=37/39 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.18( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.19( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:37 compute-0 systemd[1]: libpod-d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee.scope: Deactivated successfully.
Dec 03 01:20:37 compute-0 podman[215587]: 2025-12-03 01:20:37.144293887 +0000 UTC m=+1.398920867 container died d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a39640ca3fd2ad93ebdbce36d98d33b225a9f773d8a189c4c85c93e4e07e35-merged.mount: Deactivated successfully.
Dec 03 01:20:37 compute-0 podman[215587]: 2025-12-03 01:20:37.23699905 +0000 UTC m=+1.491626040 container remove d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:20:37 compute-0 systemd[1]: libpod-conmon-d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee.scope: Deactivated successfully.
Dec 03 01:20:37 compute-0 sudo[215584]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:37 compute-0 podman[215629]: 2025-12-03 01:20:37.355507305 +0000 UTC m=+0.165372162 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 01:20:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 03 01:20:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 03 01:20:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 03 01:20:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 03 01:20:38 compute-0 ceph-mon[192821]: pgmap v93: 193 pgs: 2 peering, 124 unknown, 67 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:38 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 03 01:20:38 compute-0 ceph-mon[192821]: osdmap e39: 3 total, 3 up, 3 in
Dec 03 01:20:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 2 peering, 124 unknown, 67 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:38 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 9 completed events
Dec 03 01:20:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 03 01:20:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:38 compute-0 python3[215733]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:20:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 03 01:20:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 03 01:20:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 03 01:20:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 03 01:20:39 compute-0 ceph-mon[192821]: 3.1 scrub starts
Dec 03 01:20:39 compute-0 ceph-mon[192821]: 3.1 scrub ok
Dec 03 01:20:39 compute-0 ceph-mon[192821]: 4.1 scrub starts
Dec 03 01:20:39 compute-0 ceph-mon[192821]: 4.1 scrub ok
Dec 03 01:20:39 compute-0 ceph-mon[192821]: pgmap v95: 193 pgs: 2 peering, 124 unknown, 67 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:39 compute-0 python3[215804]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724838.1950521-37122-278134840308959/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:20:39 compute-0 sudo[215904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvijeohemaahjxbhkgeykbahjpxukxdk ; /usr/bin/python3'
Dec 03 01:20:39 compute-0 sudo[215904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:40 compute-0 ceph-mon[192821]: 4.2 scrub starts
Dec 03 01:20:40 compute-0 ceph-mon[192821]: 4.2 scrub ok
Dec 03 01:20:40 compute-0 ceph-mon[192821]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 03 01:20:40 compute-0 ceph-mon[192821]: Cluster is now healthy
Dec 03 01:20:40 compute-0 python3[215906]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:20:40 compute-0 sudo[215904]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:40 compute-0 sudo[215979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzieullmlgrutlejfygukmclebhdqksd ; /usr/bin/python3'
Dec 03 01:20:40 compute-0 sudo[215979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:40 compute-0 python3[215981]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724839.6590133-37136-212816261421466/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=125cc056cc8761ce32a20a6ad2f9158e18b24cbb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:20:40 compute-0 sudo[215979]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 03 01:20:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 03 01:20:41 compute-0 ceph-mon[192821]: pgmap v96: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:41 compute-0 sudo[216029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithnozvkbrppzpemmuzdtrcjbcduymtw ; /usr/bin/python3'
Dec 03 01:20:41 compute-0 sudo[216029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:41 compute-0 python3[216032]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                            _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:41 compute-0 podman[216031]: 2025-12-03 01:20:41.395215623 +0000 UTC m=+0.160136007 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Dec 03 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.526090261 +0000 UTC m=+0.097121656 container create cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.487820933 +0000 UTC m=+0.058852378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:41 compute-0 systemd[1]: Started libpod-conmon-cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73.scope.
Dec 03 01:20:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.713010167 +0000 UTC m=+0.284041632 container init cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.738673947 +0000 UTC m=+0.309705352 container start cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 03 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.746079271 +0000 UTC m=+0.317110736 container attach cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:42 compute-0 ceph-mon[192821]: 4.3 scrub starts
Dec 03 01:20:42 compute-0 ceph-mon[192821]: 4.3 scrub ok
Dec 03 01:20:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 03 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 03 01:20:42 compute-0 fervent_khayyam[216067]: 
Dec 03 01:20:42 compute-0 fervent_khayyam[216067]: [global]
Dec 03 01:20:42 compute-0 fervent_khayyam[216067]:         fsid = 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:20:42 compute-0 fervent_khayyam[216067]:         mon_host = 192.168.122.100
Dec 03 01:20:42 compute-0 systemd[1]: libpod-cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73.scope: Deactivated successfully.
Dec 03 01:20:42 compute-0 podman[216052]: 2025-12-03 01:20:42.380336052 +0000 UTC m=+0.951367457 container died cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901-merged.mount: Deactivated successfully.
Dec 03 01:20:42 compute-0 podman[216052]: 2025-12-03 01:20:42.466876023 +0000 UTC m=+1.037907398 container remove cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:20:42 compute-0 systemd[1]: libpod-conmon-cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73.scope: Deactivated successfully.
Dec 03 01:20:42 compute-0 sudo[216029]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:42 compute-0 sudo[216092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:42 compute-0 sudo[216092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:42 compute-0 sudo[216092]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:42 compute-0 sudo[216128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:42 compute-0 sudo[216128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:42 compute-0 sudo[216128]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:42 compute-0 sudo[216177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhtyuchmwtcenvavziwvysmajgjcsfoe ; /usr/bin/python3'
Dec 03 01:20:42 compute-0 sudo[216177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:42 compute-0 sudo[216176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:42 compute-0 sudo[216176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:42 compute-0 sudo[216176]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:42 compute-0 python3[216185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                            _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:42 compute-0 sudo[216204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:20:42 compute-0 sudo[216204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:42 compute-0 podman[216227]: 2025-12-03 01:20:42.958204014 +0000 UTC m=+0.084368243 container create 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:42.928202135 +0000 UTC m=+0.054366454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:43 compute-0 systemd[1]: Started libpod-conmon-4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6.scope.
Dec 03 01:20:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.126964019 +0000 UTC m=+0.253128328 container init 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.149230384 +0000 UTC m=+0.275394603 container start 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.155066785 +0000 UTC m=+0.281231084 container attach 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 03 01:20:43 compute-0 ceph-mon[192821]: pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 03 01:20:43 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925077438s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928672791s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925030708s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928672791s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909792900s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913459778s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925196648s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928901672s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909742355s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913459778s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925130844s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928901672s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925121307s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928962708s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925107002s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928962708s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909637451s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913513184s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909616470s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913513184s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925292969s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929244995s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909537315s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913520813s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925277710s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929244995s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909441948s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913558960s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909496307s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913520813s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909509659s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913543701s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909420967s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913558960s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924942970s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929168701s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924926758s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929168701s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909379959s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913543701s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909258842s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913566589s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924689293s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929031372s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924675941s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929031372s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909224510s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913566589s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924533844s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928985596s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909150124s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913589478s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924518585s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928985596s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909098625s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913597107s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909090042s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913589478s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924591064s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929130554s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909068108s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913597107s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924578667s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929130554s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924288750s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929046631s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924226761s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929046631s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908742905s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913658142s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908735275s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913711548s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908703804s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913658142s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908716202s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913711548s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924112320s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929260254s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908642769s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913795471s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924101830s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929275513s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924075127s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929260254s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924107552s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929344177s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924056053s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929275513s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924077988s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929344177s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923935890s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929382324s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923906326s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929382324s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908274651s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913772583s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908241272s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913772583s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908620834s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913795471s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908068657s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913757324s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923716545s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929435730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908052444s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913780212s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923698425s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929435730s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908040047s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913757324s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908013344s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913780212s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908015251s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913810730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908002853s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913810730s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907896042s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913749695s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907869339s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913749695s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923476219s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929389954s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907832146s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913856506s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907732010s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913856506s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923254967s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929473877s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923239708s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929473877s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907852173s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913894653s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923446655s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929389954s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.931925774s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.938255310s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907569885s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913894653s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907341957s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913673401s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.931911469s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.938255310s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907565117s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913917542s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907526970s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913917542s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907274246s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913673401s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923031807s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929504395s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.922818184s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929519653s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.922801971s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929519653s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.922622681s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929504395s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809853554s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112083435s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809832573s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112083435s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826898575s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.129302979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826884270s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.129302979s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.812939644s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.115432739s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.812928200s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.115432739s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809509277s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112071991s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809497833s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112071991s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809473038s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112117767s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809461594s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112117767s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809336662s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112068176s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809015274s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112060547s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.808982849s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112060547s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825245857s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.128501892s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825227737s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.128501892s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827450752s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.130931854s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827425957s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.130931854s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.808449745s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112056732s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825565338s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.129222870s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.808361053s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112056732s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.802107811s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105953217s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825403214s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.129222870s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.802091599s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105953217s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826066017s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.130004883s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826048851s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.130004883s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825430870s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.129425049s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825416565s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.129425049s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801868439s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105918884s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801853180s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105918884s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827801704s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131893158s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827788353s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131893158s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801709175s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105876923s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801693916s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105876923s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825785637s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.130012512s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825771332s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.130012512s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801587105s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105865479s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801566124s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105865479s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827269554s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131668091s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827250481s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131668091s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801406860s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105854034s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801389694s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105854034s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800129890s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104690552s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800110817s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104690552s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827075005s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131679535s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827056885s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131679535s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799907684s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104595184s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826981544s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131687164s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799892426s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104595184s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826968193s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131687164s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826895714s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131694794s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826881409s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131694794s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799719810s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104568481s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799695015s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104568481s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826835632s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131740570s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826821327s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131740570s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800139427s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105125427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800117493s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105125427s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826739311s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131759644s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826723099s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131759644s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799365044s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104473114s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826889038s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.132007599s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799349785s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104473114s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826873779s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.132007599s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798804283s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104057312s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798768044s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104045868s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798783302s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104057312s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798748016s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104045868s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826456070s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131896973s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798575401s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104038239s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826438904s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131896973s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798559189s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104038239s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798506737s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104076385s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826265335s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131843567s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826251030s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131843567s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798459053s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104076385s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826229095s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131900787s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826214790s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131900787s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.796794891s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.102523804s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.796780586s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.102523804s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826092720s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131889343s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826077461s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131889343s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798119545s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104064941s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798089027s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104064941s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798554420s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104587555s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798313141s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104587555s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.797544479s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112068176s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873694420s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488307953s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873666763s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488307953s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795949936s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410713196s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795934677s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410713196s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.796019554s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410881042s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795972824s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410881042s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873249054s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488361359s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873228073s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488361359s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795376778s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410636902s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795359612s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410636902s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795269966s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410652161s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795253754s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410652161s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872805595s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488334656s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872785568s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488334656s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794851303s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410591125s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794834137s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410591125s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794728279s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410583496s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794713020s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410583496s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872503281s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488487244s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872488022s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488487244s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794487000s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410644531s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794467926s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410644531s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794275284s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410598755s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794255257s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410598755s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872102737s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488574982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872078896s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488574982s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871980667s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488578796s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871963501s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488578796s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.793778419s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410545349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.793760300s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410545349s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871706009s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488594055s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871688843s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488594055s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871526718s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488616943s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871507645s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488616943s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871363640s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488620758s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871346474s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488620758s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871274948s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488651276s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871257782s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488651276s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.792291641s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409851074s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.792263985s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409851074s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870978355s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488723755s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870955467s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488723755s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791929245s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409812927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791883469s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409812927s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870505333s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489326477s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870460510s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489326477s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791410446s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410537720s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791353226s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410537720s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869458199s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488822937s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790431023s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409797668s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869435310s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488822937s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790398598s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409797668s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790238380s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409790039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790216446s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409790039s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869569778s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489196777s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869544983s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489196777s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790086746s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409790039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790070534s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409790039s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868990898s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488792419s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790706635s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410545349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868967056s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488792419s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790687561s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410545349s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789342880s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409431458s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868741989s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488849640s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789314270s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409431458s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868718147s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488849640s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868660927s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488925934s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789170265s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409454346s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789145470s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409454346s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868627548s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488925934s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868504524s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488952637s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868486404s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488952637s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788828850s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409408569s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788810730s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409408569s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788667679s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409393311s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868463516s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489189148s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788648605s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409393311s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868432999s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489189148s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868255615s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489154816s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868234634s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489154816s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.781010628s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.408958435s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.780977249s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.408958435s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:20:43 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event c6336473-620e-4396-9742-5356787fe4c2 (Global Recovery Event) in 10 seconds
Dec 03 01:20:43 compute-0 podman[216313]: 2025-12-03 01:20:43.660676681 +0000 UTC m=+0.148926868 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:20:43 compute-0 podman[216313]: 2025-12-03 01:20:43.774302541 +0000 UTC m=+0.262552718 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Dec 03 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/246700991' entity='client.admin' 
Dec 03 01:20:43 compute-0 heuristic_hermann[216253]: set ssl_option
Dec 03 01:20:43 compute-0 systemd[1]: libpod-4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6.scope: Deactivated successfully.
Dec 03 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.884076566 +0000 UTC m=+1.010240835 container died 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064-merged.mount: Deactivated successfully.
Dec 03 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.97938775 +0000 UTC m=+1.105552009 container remove 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:20:43 compute-0 systemd[1]: libpod-conmon-4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6.scope: Deactivated successfully.
Dec 03 01:20:44 compute-0 sudo[216177]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 03 01:20:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 03 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:20:44 compute-0 ceph-mon[192821]: osdmap e40: 3 total, 3 up, 3 in
Dec 03 01:20:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/246700991' entity='client.admin' 
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.1f( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 sudo[216437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlklixeniruwreosaxpocatwqivkfhpc ; /usr/bin/python3'
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.17( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1c( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1d( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.13( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.11( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.15( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.14( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.f( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 sudo[216437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.4( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.6( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.c( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1e( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:20:44 compute-0 python3[216446]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:44 compute-0 sshd-session[215460]: Connection closed by authenticating user root 193.32.162.157 port 58720 [preauth]
Dec 03 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.508412312 +0000 UTC m=+0.081656858 container create dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:20:44 compute-0 systemd[1]: Started libpod-conmon-dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30.scope.
Dec 03 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.480732177 +0000 UTC m=+0.053976713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:44 compute-0 sudo[216204]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.644483944 +0000 UTC m=+0.217728510 container init dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6df04731-ee4e-45c6-a84d-11b50089b0f9 does not exist
Dec 03 01:20:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2d8b3d8f-b81a-4020-95da-dcda1fa2f3ec does not exist
Dec 03 01:20:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8977d211-6977-4840-8bc6-93500eed2775 does not exist
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.653231845 +0000 UTC m=+0.226476381 container start dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.657759691 +0000 UTC m=+0.231004257 container attach dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:44 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 03 01:20:44 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 03 01:20:44 compute-0 sudo[216504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:44 compute-0 sudo[216504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:44 compute-0 sudo[216504]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:44 compute-0 sudo[216529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:44 compute-0 sudo[216529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:44 compute-0 sudo[216529]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:44 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 03 01:20:44 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 03 01:20:45 compute-0 sudo[216555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:45 compute-0 sudo[216555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:45 compute-0 sudo[216555]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:45 compute-0 sudo[216598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:20:45 compute-0 sudo[216598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
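
The sudo line above is the cephadm binary, run as ceph-admin on the mgr's behalf, driving "ceph-volume lvm batch" against three pre-created logical volumes. A trimmed sketch of the same invocation, assuming a cephadm executable on PATH and omitting the logged --env, --timeout, and --config-json plumbing:

import subprocess

FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
       "/dev/ceph_vg2/ceph_lv2"]

# "--" separates cephadm's own flags from the ceph-volume arguments;
# --no-auto keeps the three LVs as standalone OSDs and --no-systemd
# leaves unit management to cephadm, as in the logged command.
subprocess.run(
    ["cephadm", "--image", IMAGE, "ceph-volume", "--fsid", FSID, "--",
     "lvm", "batch", "--no-auto", *LVS, "--yes", "--no-systemd"],
    check=True,
)
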
Dec 03 01:20:45 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:20:45 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 03 01:20:45 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 03 01:20:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 03 01:20:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:45 compute-0 nervous_shannon[216499]: Scheduled rgw.rgw update...
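
The mgr messages above record an "orch apply" that saved an rgw.rgw service spec pinned to compute-0. A sketch of the CLI form that produces this audit trail, with the service id and placement copied from the log:

import subprocess

# Equivalent of the applied spec: one rgw service named "rgw",
# placed on compute-0.
subprocess.run(
    ["ceph", "orch", "apply", "rgw", "rgw", "--placement", "compute-0"],
    check=True,
)
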
Dec 03 01:20:45 compute-0 ceph-mon[192821]: pgmap v99: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:45 compute-0 ceph-mon[192821]: osdmap e41: 3 total, 3 up, 3 in
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:45 compute-0 systemd[1]: libpod-dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30.scope: Deactivated successfully.
Dec 03 01:20:45 compute-0 podman[216471]: 2025-12-03 01:20:45.252352885 +0000 UTC m=+0.825597461 container died dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:20:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9-merged.mount: Deactivated successfully.
Dec 03 01:20:45 compute-0 podman[216471]: 2025-12-03 01:20:45.346074086 +0000 UTC m=+0.919318662 container remove dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:45 compute-0 systemd[1]: libpod-conmon-dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30.scope: Deactivated successfully.
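
The container lifecycle just completed — create, init, start, attach, then died and remove within a second — is the signature of a one-shot "podman run --rm", which is how cephadm executes short ceph CLI calls (the long-running daemons live in their own units). A sketch of the pattern, with the image tag and mounts mirroring the podman commands logged elsewhere in this section:

import subprocess

# One-shot admin container: podman emits create/init/start/attach,
# then died/remove as soon as the entrypoint exits (--rm cleans up).
out = subprocess.run(
    ["podman", "run", "--rm", "--net=host",
     "--volume", "/etc/ceph:/etc/ceph:z",
     "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
     "-c", "/etc/ceph/ceph.conf",
     "-k", "/etc/ceph/ceph.client.admin.keyring",
     "status"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
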
Dec 03 01:20:45 compute-0 sudo[216437]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 03 01:20:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 03 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.778561969 +0000 UTC m=+0.066277753 container create c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:20:45 compute-0 systemd[1]: Started libpod-conmon-c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458.scope.
Dec 03 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.757403564 +0000 UTC m=+0.045119388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.901131367 +0000 UTC m=+0.188847211 container init c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.917476689 +0000 UTC m=+0.205192493 container start c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.924007919 +0000 UTC m=+0.211723743 container attach c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:45 compute-0 relaxed_murdock[216690]: 167 167
Dec 03 01:20:45 compute-0 systemd[1]: libpod-c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458.scope: Deactivated successfully.
Dec 03 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.92622191 +0000 UTC m=+0.213937694 container died c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2547c6140542ca90a2c8f66aff2616a58465208cfabe0f04c0527164914d9a48-merged.mount: Deactivated successfully.
Dec 03 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.994663402 +0000 UTC m=+0.282379186 container remove c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:20:46 compute-0 systemd[1]: libpod-conmon-c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458.scope: Deactivated successfully.
Dec 03 01:20:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:46 compute-0 ceph-mon[192821]: 3.2 scrub starts
Dec 03 01:20:46 compute-0 ceph-mon[192821]: 3.2 scrub ok
Dec 03 01:20:46 compute-0 ceph-mon[192821]: 4.6 scrub starts
Dec 03 01:20:46 compute-0 ceph-mon[192821]: 4.6 scrub ok
Dec 03 01:20:46 compute-0 ceph-mon[192821]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:20:46 compute-0 ceph-mon[192821]: Saving service rgw.rgw spec with placement compute-0
Dec 03 01:20:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.278820846 +0000 UTC m=+0.093419953 container create eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.229172934 +0000 UTC m=+0.043772111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:46 compute-0 systemd[1]: Started libpod-conmon-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope.
Dec 03 01:20:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.42476184 +0000 UTC m=+0.239360977 container init eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.439453906 +0000 UTC m=+0.254053033 container start eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.446284085 +0000 UTC m=+0.260883272 container attach eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:20:46 compute-0 podman[216770]: 2025-12-03 01:20:46.476211892 +0000 UTC m=+0.126465536 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
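
The health_status record above embeds the exporter's full config: host networking, port 9882, root privileges, and a TLS web config file. A scrape sketch follows; the https scheme and the /metrics path are assumptions based on that web config and the usual exporter convention, neither appears verbatim in the log:

import ssl
import urllib.request

# Scrape prometheus-podman-exporter on the host port from config_data.
# Certificate verification is disabled only to keep the sketch short.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with urllib.request.urlopen("https://localhost:9882/metrics",
                            context=ctx) as resp:
    for line in resp.read().decode().splitlines()[:10]:
        print(line)
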
Dec 03 01:20:46 compute-0 python3[216830]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:20:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 03 01:20:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 03 01:20:47 compute-0 python3[216901]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724846.2006643-37177-150490665735229/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
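
Ansible has just rendered ceph_mds.yml.j2 to /tmp/ceph_mds.yml; the file body itself is never printed in the journal. Going by the later "Saving service mds.cephfs spec with placement compute-0" messages, the spec plausibly looks like the hypothetical reconstruction below — every field is inferred, none is quoted from the log:

# Hypothetical reconstruction of /tmp/ceph_mds.yml; the journal never
# shows the rendered file, so these fields are inferred from the
# mds.cephfs spec-save messages later in this log.
SPEC = """\
service_type: mds
service_id: cephfs
placement:
  hosts:
    - compute-0
"""

with open("/tmp/ceph_mds.yml", "w", encoding="utf-8") as fh:
    fh.write(SPEC)
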
Dec 03 01:20:47 compute-0 ceph-mon[192821]: 3.4 scrub starts
Dec 03 01:20:47 compute-0 ceph-mon[192821]: 3.4 scrub ok
Dec 03 01:20:47 compute-0 ceph-mon[192821]: pgmap v101: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 03 01:20:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 03 01:20:47 compute-0 sudo[216971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbuaohuyptqphudjjmhxwzxqytqxysol ; /usr/bin/python3'
Dec 03 01:20:47 compute-0 sudo[216971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:47 compute-0 friendly_kirch[216786]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:20:47 compute-0 friendly_kirch[216786]: --> relative data size: 1.0
Dec 03 01:20:47 compute-0 friendly_kirch[216786]: --> All data devices are unavailable
Dec 03 01:20:47 compute-0 systemd[1]: libpod-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope: Deactivated successfully.
Dec 03 01:20:47 compute-0 podman[216731]: 2025-12-03 01:20:47.790172521 +0000 UTC m=+1.604771628 container died eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:47 compute-0 systemd[1]: libpod-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope: Consumed 1.291s CPU time.
Dec 03 01:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009-merged.mount: Deactivated successfully.
Dec 03 01:20:47 compute-0 python3[216975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                            _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:47 compute-0 podman[216731]: 2025-12-03 01:20:47.908482291 +0000 UTC m=+1.723081398 container remove eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 01:20:47 compute-0 systemd[1]: libpod-conmon-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope: Deactivated successfully.
Dec 03 01:20:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 03 01:20:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 03 01:20:47 compute-0 sudo[216598]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:47 compute-0 podman[216988]: 2025-12-03 01:20:47.994453187 +0000 UTC m=+0.063759313 container create e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:20:48 compute-0 systemd[1]: Started libpod-conmon-e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c.scope.
Dec 03 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:47.97319461 +0000 UTC m=+0.042500696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:48 compute-0 sudo[216999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:48 compute-0 sudo[216999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:48 compute-0 sudo[216999]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.142609142 +0000 UTC m=+0.211915238 container init e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 03 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.152716052 +0000 UTC m=+0.222022138 container start e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.156766884 +0000 UTC m=+0.226072970 container attach e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:48 compute-0 sudo[217031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:48 compute-0 sudo[217031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:48 compute-0 sudo[217031]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:48 compute-0 ceph-mon[192821]: 3.b scrub starts
Dec 03 01:20:48 compute-0 ceph-mon[192821]: 3.b scrub ok
Dec 03 01:20:48 compute-0 ceph-mon[192821]: 2.c scrub starts
Dec 03 01:20:48 compute-0 ceph-mon[192821]: 2.c scrub ok
Dec 03 01:20:48 compute-0 ceph-mon[192821]: 4.b scrub starts
Dec 03 01:20:48 compute-0 ceph-mon[192821]: 4.b scrub ok
Dec 03 01:20:48 compute-0 sudo[217057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:48 compute-0 sudo[217057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:48 compute-0 sudo[217057]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:48 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 10 completed events
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:48 compute-0 sudo[217082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:20:48 compute-0 sudo[217082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
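
Here cephadm re-runs ceph-volume in listing mode to discover the OSDs it just created. The JSON maps each OSD id to its device records; a parsing sketch, where the lv_path and type keys follow ceph-volume's usual JSON layout but should be treated as assumptions rather than a stable schema:

import json
import subprocess

FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"

# Same listing call as the sudo line above, minus the --image pin.
raw = subprocess.run(
    ["cephadm", "ceph-volume", "--fsid", FSID, "--",
     "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for osd_id, devices in json.loads(raw).items():
    for dev in devices:
        print(osd_id, dev.get("type"), dev.get("lv_path"))
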
Dec 03 01:20:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 03 01:20:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 03 01:20:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 03 01:20:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 03 01:20:48 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:20:48 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 03 01:20:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0[192817]: 2025-12-03T01:20:48.773+0000 7ff315e65640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e2 new map
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e2 print_map
                                            e2
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        2
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-12-03T01:20:48.773680+0000
                                            modified        2025-12-03T01:20:48.773725+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        
                                            up        {}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                             
                                             
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:0
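
The print_map dump above shows filesystem 'cephfs' at epoch 2 with max_mds 1 and an empty up set, which is exactly why the mon raised MDS_ALL_DOWN. The same fields are available as JSON; a polling sketch, where the {"mdsmap": ...} wrapper is an assumption about the shape of the "ceph fs get" output:

import json
import subprocess
import time

# Poll until at least one MDS registers in the fsmap's "up" set,
# clearing the MDS_ALL_DOWN health check seen above.
while True:
    raw = subprocess.run(
        ["ceph", "fs", "get", "cephfs", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    mdsmap = json.loads(raw)["mdsmap"]
    print("epoch", mdsmap["epoch"], "up", mdsmap["up"])
    if mdsmap["up"]:
        break
    time.sleep(5)
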
Dec 03 01:20:48 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 03 01:20:48 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 03 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 03 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:48 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
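
The audit lines above spell out what "fs volume create" expanded to: a metadata pool, a bulk data pool, and "fs new", followed by saving an mds.cephfs spec so the orchestrator can start daemons. The equivalent manual sequence, sketched with the pool and filesystem names copied from those audit records:

import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

# The three mon commands dispatched by the volumes module above.
# MDS_ALL_DOWN is expected between "fs new" and the first MDS start.
ceph("osd", "pool", "create", "cephfs.cephfs.meta")
ceph("osd", "pool", "create", "cephfs.cephfs.data", "--bulk")
ceph("fs", "new", "cephfs", "cephfs.cephfs.meta", "cephfs.cephfs.data")
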
Dec 03 01:20:48 compute-0 systemd[1]: libpod-e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c.scope: Deactivated successfully.
Dec 03 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.845200602 +0000 UTC m=+0.914506728 container died e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e-merged.mount: Deactivated successfully.
Dec 03 01:20:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 03 01:20:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 03 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.955393538 +0000 UTC m=+1.024699634 container remove e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 01:20:48 compute-0 systemd[1]: libpod-conmon-e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c.scope: Deactivated successfully.
Dec 03 01:20:48 compute-0 sudo[216971]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.126720083 +0000 UTC m=+0.084570418 container create ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.09659656 +0000 UTC m=+0.054446955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:49 compute-0 systemd[1]: Started libpod-conmon-ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7.scope.
Dec 03 01:20:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:49 compute-0 sudo[217214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkzncssyqenhkdemxweyiurvnpovbhts ; /usr/bin/python3'
Dec 03 01:20:49 compute-0 sudo[217214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.267396561 +0000 UTC m=+0.225246936 container init ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.284113403 +0000 UTC m=+0.241963688 container start ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.289281786 +0000 UTC m=+0.247132171 container attach ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:20:49 compute-0 elastic_ardinghelli[217215]: 167 167
Dec 03 01:20:49 compute-0 systemd[1]: libpod-ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7.scope: Deactivated successfully.
Dec 03 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.292656199 +0000 UTC m=+0.250506574 container died ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-98baa9ce423971f568ff3af76f932c2149b3db26facba9579012b9bb9c0eced0-merged.mount: Deactivated successfully.
Dec 03 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.374137521 +0000 UTC m=+0.331987826 container remove ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:20:49 compute-0 python3[217219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
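
This ansible task is the delivery step for the spec staged earlier: a disposable ceph container applies /home/ceph_spec.yaml, the in-container path of /tmp/ceph_mds.yml per the --volume mapping. A sketch mirroring the logged invocation, with every flag and path taken from that log line:

import subprocess

# Mirrors the podman command in the ansible task above.
subprocess.run(
    ["podman", "run", "--rm", "--net=host", "--ipc=host",
     "--volume", "/etc/ceph:/etc/ceph:z",
     "--volume",
     "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
     "--volume", "/tmp/ceph_mds.yml:/home/ceph_spec.yaml:z",
     "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
     "--fsid", "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
     "-c", "/etc/ceph/ceph.conf",
     "-k", "/etc/ceph/ceph.client.admin.keyring",
     "orch", "apply", "--in-file", "/home/ceph_spec.yaml"],
    check=True,
)
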
Dec 03 01:20:49 compute-0 systemd[1]: libpod-conmon-ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7.scope: Deactivated successfully.
Dec 03 01:20:49 compute-0 ceph-mon[192821]: pgmap v102: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:49 compute-0 ceph-mon[192821]: 2.e scrub starts
Dec 03 01:20:49 compute-0 ceph-mon[192821]: 2.e scrub ok
Dec 03 01:20:49 compute-0 ceph-mon[192821]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 03 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 03 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 03 01:20:49 compute-0 ceph-mon[192821]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 03 01:20:49 compute-0 ceph-mon[192821]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 03 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 03 01:20:49 compute-0 ceph-mon[192821]: osdmap e42: 3 total, 3 up, 3 in
Dec 03 01:20:49 compute-0 ceph-mon[192821]: fsmap cephfs:0
Dec 03 01:20:49 compute-0 ceph-mon[192821]: Saving service mds.cephfs spec with placement compute-0
Dec 03 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:49 compute-0 ceph-mon[192821]: 4.c scrub starts
Dec 03 01:20:49 compute-0 ceph-mon[192821]: 4.c scrub ok
Dec 03 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.490007994 +0000 UTC m=+0.073595605 container create bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.46126148 +0000 UTC m=+0.044849161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:49 compute-0 systemd[1]: Started libpod-conmon-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope.
Dec 03 01:20:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.668104557 +0000 UTC m=+0.251692198 container init bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.685658672 +0000 UTC m=+0.269246283 container start bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.693338774 +0000 UTC m=+0.095452429 container create 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.698556328 +0000 UTC m=+0.282143939 container attach bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 03 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.654442219 +0000 UTC m=+0.056555894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:49 compute-0 systemd[1]: Started libpod-conmon-1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341.scope.
Dec 03 01:20:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.881853065 +0000 UTC m=+0.283966730 container init 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.902196787 +0000 UTC m=+0.304310452 container start 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.908199163 +0000 UTC m=+0.310312818 container attach 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 03 01:20:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 03 01:20:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:50 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:20:50 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 03 01:20:50 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 03 01:20:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 03 01:20:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:50 compute-0 focused_jepsen[217259]: Scheduled mds.cephfs update...
Dec 03 01:20:50 compute-0 systemd[1]: libpod-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope: Deactivated successfully.
Dec 03 01:20:50 compute-0 conmon[217259]: conmon bcbbd9cc8de050deae2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope/container/memory.events
Dec 03 01:20:50 compute-0 podman[217236]: 2025-12-03 01:20:50.36286561 +0000 UTC m=+0.946453241 container died bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b-merged.mount: Deactivated successfully.
Dec 03 01:20:50 compute-0 ceph-mon[192821]: 3.d scrub starts
Dec 03 01:20:50 compute-0 ceph-mon[192821]: 3.d scrub ok
Dec 03 01:20:50 compute-0 ceph-mon[192821]: 4.15 scrub starts
Dec 03 01:20:50 compute-0 ceph-mon[192821]: 4.15 scrub ok
Dec 03 01:20:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:50 compute-0 podman[217236]: 2025-12-03 01:20:50.459919333 +0000 UTC m=+1.043506954 container remove bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:50 compute-0 systemd[1]: libpod-conmon-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope: Deactivated successfully.
Dec 03 01:20:50 compute-0 sudo[217214]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:50 compute-0 magical_jepsen[217278]: {
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:     "0": [
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:         {
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "devices": [
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "/dev/loop3"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             ],
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_name": "ceph_lv0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_size": "21470642176",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "name": "ceph_lv0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "tags": {
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.crush_device_class": "",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.encrypted": "0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osd_id": "0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.type": "block",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.vdo": "0"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             },
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "type": "block",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "vg_name": "ceph_vg0"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:         }
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:     ],
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:     "1": [
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:         {
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "devices": [
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "/dev/loop4"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             ],
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_name": "ceph_lv1",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_size": "21470642176",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "name": "ceph_lv1",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "tags": {
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.crush_device_class": "",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.encrypted": "0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osd_id": "1",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.type": "block",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.vdo": "0"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             },
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "type": "block",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "vg_name": "ceph_vg1"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:         }
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:     ],
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:     "2": [
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:         {
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "devices": [
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "/dev/loop5"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             ],
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_name": "ceph_lv2",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_size": "21470642176",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "name": "ceph_lv2",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "tags": {
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.cluster_name": "ceph",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.crush_device_class": "",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.encrypted": "0",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osd_id": "2",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.type": "block",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:                 "ceph.vdo": "0"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             },
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "type": "block",
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:             "vg_name": "ceph_vg2"
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:         }
Dec 03 01:20:50 compute-0 magical_jepsen[217278]:     ]
Dec 03 01:20:50 compute-0 magical_jepsen[217278]: }
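The JSON block printed by the magical_jepsen container above has the shape of a `ceph-volume lvm list --format json` report: a map of OSD id to the logical volume(s) backing that OSD, with the ceph.* metadata carried as LV tags. A minimal Python sketch for flattening it into an OSD-to-device summary; reading from a file named lvm_list.json is an assumption for illustration (in this log the report went to the container's stdout):

    import json

    # Load a captured `ceph-volume lvm list --format json` report.
    # The filename is illustrative; here the JSON was written to stdout.
    with open("lvm_list.json") as f:
        report = json.load(f)

    # The report maps OSD id -> list of logical volumes backing that OSD.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")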
Dec 03 01:20:50 compute-0 systemd[1]: libpod-1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341.scope: Deactivated successfully.
Dec 03 01:20:50 compute-0 podman[217261]: 2025-12-03 01:20:50.754078134 +0000 UTC m=+1.156191799 container died 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155-merged.mount: Deactivated successfully.
Dec 03 01:20:50 compute-0 podman[217261]: 2025-12-03 01:20:50.863068256 +0000 UTC m=+1.265181921 container remove 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:20:50 compute-0 systemd[1]: libpod-conmon-1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341.scope: Deactivated successfully.
Dec 03 01:20:50 compute-0 sudo[217082]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Dec 03 01:20:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Dec 03 01:20:51 compute-0 sudo[217354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:51 compute-0 sudo[217354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:51 compute-0 sudo[217354]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:51 compute-0 sudo[217408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:51 compute-0 sudo[217408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:51 compute-0 sudo[217408]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:51 compute-0 sudo[217456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erhfwkyyjynuxhipyprnupqdhfzygjkm ; /usr/bin/python3'
Dec 03 01:20:51 compute-0 sudo[217456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:51 compute-0 sudo[217457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:51 compute-0 sudo[217457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:51 compute-0 sudo[217457]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:51 compute-0 ceph-mon[192821]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:51 compute-0 ceph-mon[192821]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 01:20:51 compute-0 ceph-mon[192821]: Saving service mds.cephfs spec with placement compute-0
Dec 03 01:20:51 compute-0 ceph-mon[192821]: 4.16 deep-scrub starts
Dec 03 01:20:51 compute-0 ceph-mon[192821]: 4.16 deep-scrub ok
Dec 03 01:20:51 compute-0 python3[217464]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 03 01:20:51 compute-0 sudo[217456]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:51 compute-0 sudo[217484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:20:51 compute-0 sudo[217484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:51 compute-0 sudo[217613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslhawibtmkdqfteksfzbmfgmmdvihib ; /usr/bin/python3'
Dec 03 01:20:51 compute-0 sudo[217613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:51 compute-0 podman[217614]: 2025-12-03 01:20:51.971906365 +0000 UTC m=+0.090658197 container create 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:51.932500966 +0000 UTC m=+0.051252878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:52 compute-0 systemd[1]: Started libpod-conmon-6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702.scope.
Dec 03 01:20:52 compute-0 python3[217622]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724850.9495606-37207-99371674157266/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=085db63d611f66658452414c8f83e35d20a7cbf6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:20:52 compute-0 sudo[217613]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.095391348 +0000 UTC m=+0.214143250 container init 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 03 01:20:52 compute-0 sshd-session[217349]: Invalid user userroot from 14.103.201.7 port 60040
Dec 03 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.120326877 +0000 UTC m=+0.239078739 container start 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.126904769 +0000 UTC m=+0.245656641 container attach 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:52 compute-0 friendly_volhard[217632]: 167 167
Dec 03 01:20:52 compute-0 systemd[1]: libpod-6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702.scope: Deactivated successfully.
Dec 03 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.130156689 +0000 UTC m=+0.248908551 container died 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd074524eb6d814e2110c9c63379c94f1d1edc751a622185b23b06b097fdee65-merged.mount: Deactivated successfully.
Dec 03 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.203513217 +0000 UTC m=+0.322265089 container remove 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:52 compute-0 systemd[1]: libpod-conmon-6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702.scope: Deactivated successfully.
Dec 03 01:20:52 compute-0 sshd-session[217349]: Received disconnect from 14.103.201.7 port 60040:11: Bye Bye [preauth]
Dec 03 01:20:52 compute-0 sshd-session[217349]: Disconnected from invalid user userroot 14.103.201.7 port 60040 [preauth]
Dec 03 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.431817017 +0000 UTC m=+0.064470423 container create c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:52 compute-0 systemd[1]: Started libpod-conmon-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope.
Dec 03 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.413596314 +0000 UTC m=+0.046249730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.610141466 +0000 UTC m=+0.242794922 container init c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:20:52 compute-0 sudo[217722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqnrdxcryzcopdjphgreuecwjuiunqzt ; /usr/bin/python3'
Dec 03 01:20:52 compute-0 sudo[217722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.649361649 +0000 UTC m=+0.282015085 container start c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.656919528 +0000 UTC m=+0.289572984 container attach c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:20:52 compute-0 python3[217725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:52 compute-0 podman[217727]: 2025-12-03 01:20:52.864707061 +0000 UTC m=+0.078408248 container create ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:20:52 compute-0 podman[217727]: 2025-12-03 01:20:52.826073314 +0000 UTC m=+0.039774541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:52 compute-0 systemd[1]: Started libpod-conmon-ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7.scope.
Dec 03 01:20:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 03 01:20:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 03 01:20:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af27cbc15dc94e397f4ba4ce99f9766b61a0f48b5b111319c9db06b59ed9212f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af27cbc15dc94e397f4ba4ce99f9766b61a0f48b5b111319c9db06b59ed9212f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.037689023 +0000 UTC m=+0.251390240 container init ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.053988233 +0000 UTC m=+0.267689390 container start ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.058858768 +0000 UTC m=+0.272560005 container attach ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:20:53 compute-0 ceph-mon[192821]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:53 compute-0 ceph-mon[192821]: 4.17 scrub starts
Dec 03 01:20:53 compute-0 ceph-mon[192821]: 4.17 scrub ok
Dec 03 01:20:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Dec 03 01:20:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 03 01:20:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 03 01:20:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 03 01:20:53 compute-0 systemd[1]: libpod-ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7.scope: Deactivated successfully.
Dec 03 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.760324117 +0000 UTC m=+0.974025314 container died ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 03 01:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-af27cbc15dc94e397f4ba4ce99f9766b61a0f48b5b111319c9db06b59ed9212f-merged.mount: Deactivated successfully.
Dec 03 01:20:53 compute-0 upbeat_wright[217696]: {
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "osd_id": 2,
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "type": "bluestore"
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:     },
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "osd_id": 1,
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "type": "bluestore"
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:     },
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "osd_id": 0,
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:         "type": "bluestore"
Dec 03 01:20:53 compute-0 upbeat_wright[217696]:     }
Dec 03 01:20:53 compute-0 upbeat_wright[217696]: }
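This second report matches the `ceph-volume ... raw list --format json` call dispatched through cephadm a few lines earlier: a map of osd_uuid to the BlueStore device behind it. The osd_uuid keys here should agree with the ceph.osd_fsid tags in the lvm listing above; a short sketch of that consistency check, with both file names assumed for illustration:

    import json

    # Captured outputs of `ceph-volume lvm list` and `ceph-volume raw list`
    # (both --format json); the file names are illustrative.
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    # Every OSD fsid tagged on an LV should appear as a key in the raw report.
    for osd_id, lvs in lvm.items():
        for lv in lvs:
            fsid = lv["tags"]["ceph.osd_fsid"]
            entry = raw.get(fsid)
            print(f"osd.{osd_id} {fsid}: "
                  f"{entry['device'] if entry else 'missing from raw list'}")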
Dec 03 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.854909001 +0000 UTC m=+1.068610188 container remove ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:53 compute-0 systemd[1]: libpod-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope: Deactivated successfully.
Dec 03 01:20:53 compute-0 systemd[1]: libpod-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope: Consumed 1.201s CPU time.
Dec 03 01:20:53 compute-0 sudo[217722]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:53 compute-0 systemd[1]: libpod-conmon-ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7.scope: Deactivated successfully.
Dec 03 01:20:53 compute-0 podman[217806]: 2025-12-03 01:20:53.944348103 +0000 UTC m=+0.059498195 container died c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed-merged.mount: Deactivated successfully.
Dec 03 01:20:54 compute-0 podman[217806]: 2025-12-03 01:20:54.045563671 +0000 UTC m=+0.160713693 container remove c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:54 compute-0 systemd[1]: libpod-conmon-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope: Deactivated successfully.
Dec 03 01:20:54 compute-0 sudo[217484]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:54 compute-0 sudo[217822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:54 compute-0 sudo[217822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:54 compute-0 sudo[217822]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:54 compute-0 sudo[217847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:20:54 compute-0 sudo[217847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:54 compute-0 sudo[217847]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:54 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 03 01:20:54 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 03 01:20:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:54 compute-0 sudo[217872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:54 compute-0 sudo[217872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:54 compute-0 sudo[217872]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:54 compute-0 sudo[217919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leqgrfitkzewxmgjvjdcpuvcszqmbkje ; /usr/bin/python3'
Dec 03 01:20:54 compute-0 sudo[217919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:54 compute-0 sudo[217921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:54 compute-0 sudo[217921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:54 compute-0 sudo[217921]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:54 compute-0 python3[217928]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 03 01:20:54 compute-0 sudo[217948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 03 01:20:54 compute-0 sudo[217948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:54 compute-0 sudo[217948]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.788391433 +0000 UTC m=+0.087659254 container create 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.765064038 +0000 UTC m=+0.064331929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:54 compute-0 systemd[1]: Started libpod-conmon-2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969.scope.
Dec 03 01:20:54 compute-0 sudo[217984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:20:54 compute-0 sudo[217984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa7425ca14dc1e8d72670dedfd0b6eb43bda65f7301c31631c1bc5d16a49d0c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa7425ca14dc1e8d72670dedfd0b6eb43bda65f7301c31631c1bc5d16a49d0c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.939188161 +0000 UTC m=+0.238456052 container init 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.955809401 +0000 UTC m=+0.255077222 container start 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.960908282 +0000 UTC m=+0.260176123 container attach 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:20:55 compute-0 ceph-mon[192821]: 3.10 scrub starts
Dec 03 01:20:55 compute-0 ceph-mon[192821]: 3.10 scrub ok
Dec 03 01:20:55 compute-0 ceph-mon[192821]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 03 01:20:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156561531' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:20:55 compute-0 podman[218103]: 2025-12-03 01:20:55.640176327 +0000 UTC m=+0.102540715 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:20:55 compute-0 objective_fermat[218014]: 
Dec 03 01:20:55 compute-0 objective_fermat[218014]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":195,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764724803,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84168704,"bytes_avail":64327757824,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-12-03T01:20:50.203901+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 03 01:20:55 compute-0 systemd[1]: libpod-2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969.scope: Deactivated successfully.
Dec 03 01:20:55 compute-0 podman[217968]: 2025-12-03 01:20:55.662231746 +0000 UTC m=+0.961499567 container died 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:20:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 03 01:20:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 03 01:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfa7425ca14dc1e8d72670dedfd0b6eb43bda65f7301c31631c1bc5d16a49d0c-merged.mount: Deactivated successfully.
Dec 03 01:20:55 compute-0 podman[217968]: 2025-12-03 01:20:55.752022218 +0000 UTC m=+1.051290039 container remove 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 01:20:55 compute-0 systemd[1]: libpod-conmon-2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969.scope: Deactivated successfully.
Dec 03 01:20:55 compute-0 podman[218103]: 2025-12-03 01:20:55.782699086 +0000 UTC m=+0.245063444 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:20:55 compute-0 sudo[217919]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:56 compute-0 sudo[218201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvzmuhxhlehvqmaawumflgdmmvmmrtfu ; /usr/bin/python3'
Dec 03 01:20:56 compute-0 sudo[218201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:56 compute-0 python3[218206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.338672613 +0000 UTC m=+0.071894929 container create fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.303082869 +0000 UTC m=+0.036305225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:56 compute-0 systemd[1]: Started libpod-conmon-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope.
Dec 03 01:20:56 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7b64c4291c6b4dae54e3528b322c86cd7e9b0614d32745651ea6135a8ba51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7b64c4291c6b4dae54e3528b322c86cd7e9b0614d32745651ea6135a8ba51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:56 compute-0 ceph-mon[192821]: 3.13 scrub starts
Dec 03 01:20:56 compute-0 ceph-mon[192821]: 3.13 scrub ok
Dec 03 01:20:56 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4156561531' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.482262131 +0000 UTC m=+0.215484427 container init fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.49306075 +0000 UTC m=+0.226283026 container start fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.497874093 +0000 UTC m=+0.231096449 container attach fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:56 compute-0 sudo[217984]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:20:56 compute-0 sshd-session[216502]: Connection closed by authenticating user root 193.32.162.157 port 55484 [preauth]
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0744da57-f806-48ae-9324-54022f6b951f does not exist
Dec 03 01:20:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 201b98e6-a573-4b95-9cab-d3a3dae42431 does not exist
Dec 03 01:20:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3edf50b9-a89c-467f-8014-35ac5716ba08 does not exist
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:56 compute-0 sshd-session[218274]: Invalid user mcserver from 34.66.72.251 port 51748
Dec 03 01:20:56 compute-0 sudo[218278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:56 compute-0 sudo[218278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:56 compute-0 sudo[218278]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:56 compute-0 sshd-session[218274]: Received disconnect from 34.66.72.251 port 51748:11: Bye Bye [preauth]
Dec 03 01:20:56 compute-0 sshd-session[218274]: Disconnected from invalid user mcserver 34.66.72.251 port 51748 [preauth]
Dec 03 01:20:56 compute-0 sudo[218304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:20:56 compute-0 sudo[218304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:56 compute-0 sudo[218304]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:57 compute-0 sudo[218348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:20:57 compute-0 sudo[218348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:57 compute-0 sudo[218348]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 01:20:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927838492' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:20:57 compute-0 elastic_pike[218258]: 
Dec 03 01:20:57 compute-0 elastic_pike[218258]: {"epoch":1,"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","modified":"2025-12-03T01:17:32.534037Z","created":"2025-12-03T01:17:32.534037Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 03 01:20:57 compute-0 elastic_pike[218258]: dumped monmap epoch 1
Dec 03 01:20:57 compute-0 systemd[1]: libpod-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope: Deactivated successfully.
Dec 03 01:20:57 compute-0 conmon[218258]: conmon fcf3365b15bd3338bcaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope/container/memory.events
Dec 03 01:20:57 compute-0 podman[218225]: 2025-12-03 01:20:57.178956728 +0000 UTC m=+0.912179094 container died fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:20:57 compute-0 sudo[218373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:20:57 compute-0 sudo[218373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-86f7b64c4291c6b4dae54e3528b322c86cd7e9b0614d32745651ea6135a8ba51-merged.mount: Deactivated successfully.
Dec 03 01:20:57 compute-0 podman[218225]: 2025-12-03 01:20:57.261116559 +0000 UTC m=+0.994338835 container remove fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 03 01:20:57 compute-0 systemd[1]: libpod-conmon-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope: Deactivated successfully.
Dec 03 01:20:57 compute-0 sudo[218201]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:57 compute-0 ceph-mon[192821]: 3.14 scrub starts
Dec 03 01:20:57 compute-0 ceph-mon[192821]: 3.14 scrub ok
Dec 03 01:20:57 compute-0 ceph-mon[192821]: pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:20:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3927838492' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:20:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 03 01:20:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 03 01:20:57 compute-0 sudo[218475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kybaqfehwdqaswnhjboytrnhpbirfhxz ; /usr/bin/python3'
Dec 03 01:20:57 compute-0 sudo[218475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.790856262 +0000 UTC m=+0.115812633 container create 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.760231625 +0000 UTC m=+0.085188086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:57 compute-0 systemd[1]: Started libpod-conmon-271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e.scope.
Dec 03 01:20:57 compute-0 python3[218479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:20:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.946277037 +0000 UTC m=+0.271233448 container init 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.961051596 +0000 UTC m=+0.286007997 container start 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.967695779 +0000 UTC m=+0.292652140 container attach 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:20:57 compute-0 unruffled_bassi[218485]: 167 167
Dec 03 01:20:57 compute-0 systemd[1]: libpod-271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e.scope: Deactivated successfully.
Dec 03 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.972873583 +0000 UTC m=+0.297829964 container died 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c2344a7da3a65aa2dc55f98f13f6a3563c26929695d6ba1dbfa39180b5a85bc-merged.mount: Deactivated successfully.
Dec 03 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.030204867 +0000 UTC m=+0.105088685 container create 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:20:58 compute-0 podman[218456]: 2025-12-03 01:20:58.064964498 +0000 UTC m=+0.389920869 container remove 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:57.994944513 +0000 UTC m=+0.069828401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:20:58 compute-0 systemd[1]: libpod-conmon-271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e.scope: Deactivated successfully.
Dec 03 01:20:58 compute-0 systemd[1]: Started libpod-conmon-47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba.scope.
Dec 03 01:20:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47523d188b44566dc19821ca832415a32a314cd14deb1ac54a417cc1798ab69/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47523d188b44566dc19821ca832415a32a314cd14deb1ac54a417cc1798ab69/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.209836252 +0000 UTC m=+0.284720160 container init 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:20:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.221024742 +0000 UTC m=+0.295908560 container start 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.225893386 +0000 UTC m=+0.300777234 container attach 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.306820693 +0000 UTC m=+0.073319868 container create 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.271161967 +0000 UTC m=+0.037661142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:20:58 compute-0 systemd[1]: Started libpod-conmon-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope.
Dec 03 01:20:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.49918144 +0000 UTC m=+0.265680625 container init 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.521841696 +0000 UTC m=+0.288340871 container start 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.528022887 +0000 UTC m=+0.294522042 container attach 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Dec 03 01:20:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3838596819' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 03 01:20:58 compute-0 recursing_satoshi[218517]: [client.openstack]
Dec 03 01:20:58 compute-0 recursing_satoshi[218517]:         key = AQCCjy9pAAAAABAAp+KNKPmL/Q89NduD/bXpeQ==
Dec 03 01:20:58 compute-0 recursing_satoshi[218517]:         caps mgr = "allow *"
Dec 03 01:20:58 compute-0 recursing_satoshi[218517]:         caps mon = "profile rbd"
Dec 03 01:20:58 compute-0 recursing_satoshi[218517]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 03 01:20:58 compute-0 systemd[1]: libpod-47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba.scope: Deactivated successfully.
Dec 03 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.9601166 +0000 UTC m=+1.035000448 container died 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f47523d188b44566dc19821ca832415a32a314cd14deb1ac54a417cc1798ab69-merged.mount: Deactivated successfully.
Dec 03 01:20:59 compute-0 podman[218488]: 2025-12-03 01:20:59.050698084 +0000 UTC m=+1.125581932 container remove 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 01:20:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 03 01:20:59 compute-0 systemd[1]: libpod-conmon-47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba.scope: Deactivated successfully.
Dec 03 01:20:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 03 01:20:59 compute-0 sudo[218475]: pam_unix(sudo:session): session closed for user root
Dec 03 01:20:59 compute-0 ceph-mon[192821]: 3.19 scrub starts
Dec 03 01:20:59 compute-0 ceph-mon[192821]: 3.19 scrub ok
Dec 03 01:20:59 compute-0 ceph-mon[192821]: pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:20:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3838596819' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 03 01:20:59 compute-0 ceph-mon[192821]: 4.19 scrub starts
Dec 03 01:20:59 compute-0 ceph-mon[192821]: 4.19 scrub ok
Dec 03 01:20:59 compute-0 podman[158098]: time="2025-12-03T01:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30878 "" "Go-http-client/1.1"
Dec 03 01:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6281 "" "Go-http-client/1.1"
Dec 03 01:20:59 compute-0 quirky_wing[218542]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:20:59 compute-0 quirky_wing[218542]: --> relative data size: 1.0
Dec 03 01:20:59 compute-0 quirky_wing[218542]: --> All data devices are unavailable
Dec 03 01:20:59 compute-0 systemd[1]: libpod-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope: Deactivated successfully.
Dec 03 01:20:59 compute-0 podman[218526]: 2025-12-03 01:20:59.8523167 +0000 UTC m=+1.618815885 container died 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:20:59 compute-0 systemd[1]: libpod-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope: Consumed 1.258s CPU time.
Dec 03 01:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f-merged.mount: Deactivated successfully.
Dec 03 01:20:59 compute-0 podman[218526]: 2025-12-03 01:20:59.972622826 +0000 UTC m=+1.739121981 container remove 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:20:59 compute-0 systemd[1]: libpod-conmon-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope: Deactivated successfully.
Dec 03 01:21:00 compute-0 sudo[218373]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:00 compute-0 sudo[218617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:00 compute-0 sudo[218617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:00 compute-0 sudo[218617]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:00 compute-0 sudo[218671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:00 compute-0 sudo[218671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:00 compute-0 sudo[218671]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:00 compute-0 sudo[218723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:00 compute-0 sudo[218723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:00 compute-0 sudo[218723]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:00 compute-0 sudo[218775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:21:00 compute-0 sudo[218775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:00 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 03 01:21:00 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 03 01:21:00 compute-0 sudo[218873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlyogfjtgywavnsumdhcrfqkoaciqydg ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764724860.1459956-37279-206255486704134/async_wrapper.py j777914220905 30 /home/zuul/.ansible/tmp/ansible-tmp-1764724860.1459956-37279-206255486704134/AnsiballZ_command.py _'
Dec 03 01:21:00 compute-0 sudo[218873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:01 compute-0 ansible-async_wrapper.py[218878]: Invoked with j777914220905 30 /home/zuul/.ansible/tmp/ansible-tmp-1764724860.1459956-37279-206255486704134/AnsiballZ_command.py _
Dec 03 01:21:01 compute-0 ansible-async_wrapper.py[218902]: Starting module and watcher
Dec 03 01:21:01 compute-0 ansible-async_wrapper.py[218902]: Start watching 218905 (30)
Dec 03 01:21:01 compute-0 ansible-async_wrapper.py[218905]: Start module (218905)
Dec 03 01:21:01 compute-0 ansible-async_wrapper.py[218878]: Return async_wrapper task started.
Dec 03 01:21:01 compute-0 sudo[218873]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.190381435 +0000 UTC m=+0.079615732 container create f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:21:01 compute-0 systemd[1]: Started libpod-conmon-f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f.scope.
Dec 03 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.157882317 +0000 UTC m=+0.047116694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:01 compute-0 python3[218906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.303835161 +0000 UTC m=+0.193069478 container init f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.313829077 +0000 UTC m=+0.203063374 container start f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.318516487 +0000 UTC m=+0.207750804 container attach f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:01 compute-0 interesting_almeida[218928]: 167 167
Dec 03 01:21:01 compute-0 systemd[1]: libpod-f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f.scope: Deactivated successfully.
Dec 03 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.320722628 +0000 UTC m=+0.209956935 container died f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c17166b0a46361cdc8c66720f2cf4bbd54997d52bdf1ea0aabfed1d8825cc60-merged.mount: Deactivated successfully.
Dec 03 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.400123092 +0000 UTC m=+0.289357429 container remove f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.41342117 +0000 UTC m=+0.135654341 container create bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.342400297 +0000 UTC m=+0.064633518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:21:01 compute-0 systemd[1]: libpod-conmon-f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f.scope: Deactivated successfully.
Dec 03 01:21:01 compute-0 systemd[1]: Started libpod-conmon-bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade.scope.
Dec 03 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e61affc03522beda1f9cbd4ffb4563c9abb5ed0e0f536920effddb13a5c0e18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e61affc03522beda1f9cbd4ffb4563c9abb5ed0e0f536920effddb13a5c0e18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.551302011 +0000 UTC m=+0.273535262 container init bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.562411798 +0000 UTC m=+0.284645009 container start bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.569084623 +0000 UTC m=+0.291317904 container attach bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:01 compute-0 ceph-mon[192821]: pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:01 compute-0 ceph-mon[192821]: 2.10 scrub starts
Dec 03 01:21:01 compute-0 ceph-mon[192821]: 2.10 scrub ok
Dec 03 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.663639546 +0000 UTC m=+0.079576920 container create 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:21:01 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 03 01:21:01 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 03 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.634965854 +0000 UTC m=+0.050903228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 03 01:21:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 03 01:21:01 compute-0 systemd[1]: Started libpod-conmon-331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a.scope.
Dec 03 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.857062072 +0000 UTC m=+0.272999446 container init 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.878451744 +0000 UTC m=+0.294389108 container start 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.885247311 +0000 UTC m=+0.301184675 container attach 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:21:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:02 compute-0 happy_albattani[218959]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 03 01:21:02 compute-0 systemd[1]: libpod-bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade.scope: Deactivated successfully.
Dec 03 01:21:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:02 compute-0 podman[219038]: 2025-12-03 01:21:02.288998991 +0000 UTC m=+0.059629529 container died bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:02 compute-0 sudo[219106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctiehzopdpeqtkauqidtdurlfemrptv ; /usr/bin/python3'
Dec 03 01:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e61affc03522beda1f9cbd4ffb4563c9abb5ed0e0f536920effddb13a5c0e18-merged.mount: Deactivated successfully.
Dec 03 01:21:02 compute-0 sudo[219106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:02 compute-0 podman[219038]: 2025-12-03 01:21:02.361569037 +0000 UTC m=+0.132199525 container remove bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 01:21:02 compute-0 systemd[1]: libpod-conmon-bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade.scope: Deactivated successfully.
Dec 03 01:21:02 compute-0 podman[219037]: 2025-12-03 01:21:02.393345966 +0000 UTC m=+0.134167460 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:21:02 compute-0 ansible-async_wrapper.py[218905]: Module complete (218905)
Dec 03 01:21:02 compute-0 podman[219041]: 2025-12-03 01:21:02.426288046 +0000 UTC m=+0.178575437 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 03 01:21:02 compute-0 podman[219047]: 2025-12-03 01:21:02.427899461 +0000 UTC m=+0.170404351 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 03 01:21:02 compute-0 podman[219043]: 2025-12-03 01:21:02.440668224 +0000 UTC m=+0.178839335 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 03 01:21:02 compute-0 python3[219117]: ansible-ansible.legacy.async_status Invoked with jid=j777914220905.218878 mode=status _async_dir=/root/.ansible_async
Dec 03 01:21:02 compute-0 sudo[219106]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:02 compute-0 ceph-mon[192821]: 2.12 scrub starts
Dec 03 01:21:02 compute-0 ceph-mon[192821]: 2.12 scrub ok
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]: {
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:     "0": [
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:         {
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "devices": [
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "/dev/loop3"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             ],
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_name": "ceph_lv0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_size": "21470642176",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "name": "ceph_lv0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "tags": {
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.crush_device_class": "",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.encrypted": "0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osd_id": "0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.type": "block",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.vdo": "0"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             },
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "type": "block",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "vg_name": "ceph_vg0"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:         }
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:     ],
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:     "1": [
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:         {
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "devices": [
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "/dev/loop4"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             ],
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_name": "ceph_lv1",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_size": "21470642176",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "name": "ceph_lv1",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "tags": {
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.crush_device_class": "",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.encrypted": "0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osd_id": "1",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.type": "block",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.vdo": "0"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             },
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "type": "block",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "vg_name": "ceph_vg1"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:         }
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:     ],
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:     "2": [
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:         {
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "devices": [
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "/dev/loop5"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             ],
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_name": "ceph_lv2",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_size": "21470642176",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "name": "ceph_lv2",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "tags": {
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.crush_device_class": "",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.encrypted": "0",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osd_id": "2",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.type": "block",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:                 "ceph.vdo": "0"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             },
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "type": "block",
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:             "vg_name": "ceph_vg2"
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:         }
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]:     ]
Dec 03 01:21:02 compute-0 heuristic_wozniak[218985]: }
Dec 03 01:21:02 compute-0 systemd[1]: libpod-331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a.scope: Deactivated successfully.
Dec 03 01:21:02 compute-0 podman[218970]: 2025-12-03 01:21:02.710648216 +0000 UTC m=+1.126585560 container died 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:02 compute-0 sudo[219205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnssrcgpdjcwdadvxqmzkonufxwjsrzt ; /usr/bin/python3'
Dec 03 01:21:02 compute-0 sudo[219205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Dec 03 01:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287-merged.mount: Deactivated successfully.
Dec 03 01:21:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Dec 03 01:21:02 compute-0 podman[218970]: 2025-12-03 01:21:02.777562435 +0000 UTC m=+1.193499769 container remove 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:02 compute-0 systemd[1]: libpod-conmon-331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a.scope: Deactivated successfully.
Dec 03 01:21:02 compute-0 sudo[218775]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:02 compute-0 sshd-session[219032]: Invalid user redmine from 173.249.50.59 port 50312
Dec 03 01:21:02 compute-0 python3[219214]: ansible-ansible.legacy.async_status Invoked with jid=j777914220905.218878 mode=cleanup _async_dir=/root/.ansible_async
Dec 03 01:21:02 compute-0 sudo[219205]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:02 compute-0 sudo[219220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:02 compute-0 sudo[219220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:02 compute-0 sudo[219220]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:02 compute-0 sshd-session[219032]: Received disconnect from 173.249.50.59 port 50312:11: Bye Bye [preauth]
Dec 03 01:21:02 compute-0 sshd-session[219032]: Disconnected from invalid user redmine 173.249.50.59 port 50312 [preauth]
Dec 03 01:21:03 compute-0 sudo[219245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:03 compute-0 sudo[219245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:03 compute-0 sudo[219245]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:03 compute-0 sudo[219270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:03 compute-0 sudo[219270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:03 compute-0 sudo[219270]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:03 compute-0 sudo[219295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:21:03 compute-0 sudo[219295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:03 compute-0 sudo[219345]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnruaocukbnbgxqxjthlanlylnsomtts ; /usr/bin/python3'
Dec 03 01:21:03 compute-0 sudo[219345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:03 compute-0 python3[219352]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:03 compute-0 ceph-mon[192821]: 3.1a scrub starts
Dec 03 01:21:03 compute-0 ceph-mon[192821]: 3.1a scrub ok
Dec 03 01:21:03 compute-0 ceph-mon[192821]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:03 compute-0 ceph-mon[192821]: pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 03 01:21:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 03 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.779885459 +0000 UTC m=+0.110785143 container create c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.746234849 +0000 UTC m=+0.077134593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:03 compute-0 systemd[1]: Started libpod-conmon-c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d.scope.
Dec 03 01:21:03 compute-0 podman[219393]: 2025-12-03 01:21:03.901294835 +0000 UTC m=+0.088588050 container create 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 01:21:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d0499ecf6536d78ebd4778909315f3092ea7d1506b97a120e65532b254ff34/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d0499ecf6536d78ebd4778909315f3092ea7d1506b97a120e65532b254ff34/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.958708782 +0000 UTC m=+0.289608476 container init c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:03 compute-0 podman[219393]: 2025-12-03 01:21:03.869238319 +0000 UTC m=+0.056531514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.974320593 +0000 UTC m=+0.305220247 container start c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.978938011 +0000 UTC m=+0.309837675 container attach c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:21:03 compute-0 systemd[1]: Started libpod-conmon-3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2.scope.
Dec 03 01:21:04 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 03 01:21:04 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 03 01:21:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.064300141 +0000 UTC m=+0.251593396 container init 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.080787076 +0000 UTC m=+0.268080281 container start 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:04 compute-0 wonderful_lamarr[219414]: 167 167
Dec 03 01:21:04 compute-0 systemd[1]: libpod-3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2.scope: Deactivated successfully.
Dec 03 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.090068513 +0000 UTC m=+0.277361738 container attach 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.091100301 +0000 UTC m=+0.278393516 container died 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-de9fa81dfcefa1ea5af6cac1913a05e1fed2177f1a0ad3a25251b3fa06f1f154-merged.mount: Deactivated successfully.
Dec 03 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.163633706 +0000 UTC m=+0.350926911 container remove 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:04 compute-0 systemd[1]: libpod-conmon-3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2.scope: Deactivated successfully.
Dec 03 01:21:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.446182396 +0000 UTC m=+0.083830188 container create 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.41629577 +0000 UTC m=+0.053943572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:04 compute-0 systemd[1]: Started libpod-conmon-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope.
Dec 03 01:21:04 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:04 compute-0 gallant_albattani[219408]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 03 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:04 compute-0 systemd[1]: libpod-c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d.scope: Deactivated successfully.
Dec 03 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.604823731 +0000 UTC m=+0.242471493 container init 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:04 compute-0 podman[219370]: 2025-12-03 01:21:04.615614749 +0000 UTC m=+0.946514403 container died c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.618995553 +0000 UTC m=+0.256643335 container start 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.635286803 +0000 UTC m=+0.272934575 container attach 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:04 compute-0 ceph-mon[192821]: 3.1c deep-scrub starts
Dec 03 01:21:04 compute-0 ceph-mon[192821]: 3.1c deep-scrub ok
Dec 03 01:21:04 compute-0 ceph-mon[192821]: 2.14 scrub starts
Dec 03 01:21:04 compute-0 ceph-mon[192821]: 2.14 scrub ok
Dec 03 01:21:04 compute-0 ceph-mon[192821]: 4.1d scrub starts
Dec 03 01:21:04 compute-0 ceph-mon[192821]: 4.1d scrub ok
Dec 03 01:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-73d0499ecf6536d78ebd4778909315f3092ea7d1506b97a120e65532b254ff34-merged.mount: Deactivated successfully.
Dec 03 01:21:04 compute-0 podman[219370]: 2025-12-03 01:21:04.696048482 +0000 UTC m=+1.026948166 container remove c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:21:04 compute-0 systemd[1]: libpod-conmon-c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d.scope: Deactivated successfully.
Dec 03 01:21:04 compute-0 sudo[219345]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:05 compute-0 sudo[219526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icahpasgxqmgehberutyujmniasmntbq ; /usr/bin/python3'
Dec 03 01:21:05 compute-0 sudo[219526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:05 compute-0 python3[219531]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 03 01:21:05 compute-0 ceph-mon[192821]: pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:05 compute-0 ceph-mon[192821]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 03 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.767020504 +0000 UTC m=+0.089707450 container create 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:05 compute-0 quirky_pascal[219472]: {
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "osd_id": 2,
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "type": "bluestore"
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:     },
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "osd_id": 1,
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "type": "bluestore"
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:     },
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "osd_id": 0,
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:         "type": "bluestore"
Dec 03 01:21:05 compute-0 quirky_pascal[219472]:     }
Dec 03 01:21:05 compute-0 quirky_pascal[219472]: }
Dec 03 01:21:05 compute-0 podman[219456]: 2025-12-03 01:21:05.820427681 +0000 UTC m=+1.458075443 container died 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:21:05 compute-0 systemd[1]: Started libpod-conmon-995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587.scope.
Dec 03 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.734366512 +0000 UTC m=+0.057053528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:05 compute-0 systemd[1]: libpod-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope: Deactivated successfully.
Dec 03 01:21:05 compute-0 systemd[1]: libpod-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope: Consumed 1.187s CPU time.
Dec 03 01:21:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1-merged.mount: Deactivated successfully.
Dec 03 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d490fbb1191314f8999713e9230657963a6b902a8c7fdfce38a6ee77188c7e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d490fbb1191314f8999713e9230657963a6b902a8c7fdfce38a6ee77188c7e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:05 compute-0 podman[219456]: 2025-12-03 01:21:05.884948354 +0000 UTC m=+1.522596106 container remove 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:21:05 compute-0 systemd[1]: libpod-conmon-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope: Deactivated successfully.
Dec 03 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.902598002 +0000 UTC m=+0.225284958 container init 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.911495018 +0000 UTC m=+0.234181954 container start 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.915776496 +0000 UTC m=+0.238463442 container attach 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:21:05 compute-0 sudo[219295]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:05 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev f22a4605-e838-4b3e-916f-c2f3f4566817 (Updating rgw.rgw deployment (+1 -> 1))
Dec 03 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec 03 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 03 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 03 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec 03 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:05 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.rxmili on compute-0
Dec 03 01:21:05 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.rxmili on compute-0
Dec 03 01:21:06 compute-0 ansible-async_wrapper.py[218902]: Done in kid B.
Dec 03 01:21:06 compute-0 sudo[219575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:06 compute-0 sudo[219575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:06 compute-0 sudo[219575]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:06 compute-0 sudo[219600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:06 compute-0 sudo[219600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:06 compute-0 sudo[219600]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:06 compute-0 sudo[219629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:06 compute-0 sudo[219629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:06 compute-0 sudo[219629]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:06 compute-0 sudo[219669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:21:06 compute-0 sudo[219669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:06 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:06 compute-0 quizzical_colden[219561]: 
Dec 03 01:21:06 compute-0 quizzical_colden[219561]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec 03 01:21:06 compute-0 systemd[1]: libpod-995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587.scope: Deactivated successfully.
Dec 03 01:21:06 compute-0 podman[219540]: 2025-12-03 01:21:06.530691793 +0000 UTC m=+0.853378769 container died 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d490fbb1191314f8999713e9230657963a6b902a8c7fdfce38a6ee77188c7e9-merged.mount: Deactivated successfully.
Dec 03 01:21:06 compute-0 podman[219540]: 2025-12-03 01:21:06.627779216 +0000 UTC m=+0.950466152 container remove 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:06 compute-0 systemd[1]: libpod-conmon-995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587.scope: Deactivated successfully.
Dec 03 01:21:06 compute-0 sudo[219526]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:06 compute-0 podman[219742]: 2025-12-03 01:21:06.872405738 +0000 UTC m=+0.071890828 container create 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:06 compute-0 podman[219742]: 2025-12-03 01:21:06.841706659 +0000 UTC m=+0.041191829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:06 compute-0 systemd[1]: Started libpod-conmon-34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288.scope.
Dec 03 01:21:06 compute-0 ceph-mon[192821]: 2.1a scrub starts
Dec 03 01:21:06 compute-0 ceph-mon[192821]: 2.1a scrub ok
Dec 03 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 03 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 03 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:06 compute-0 ceph-mon[192821]: Deploying daemon rgw.rgw.compute-0.rxmili on compute-0
Dec 03 01:21:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.004435746 +0000 UTC m=+0.203920866 container init 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.018941537 +0000 UTC m=+0.218426627 container start 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.022846735 +0000 UTC m=+0.222331905 container attach 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:07 compute-0 sharp_perlman[219757]: 167 167
Dec 03 01:21:07 compute-0 systemd[1]: libpod-34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288.scope: Deactivated successfully.
Dec 03 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.026391003 +0000 UTC m=+0.225876093 container died 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec 03 01:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6e64bdd9564afa8444d2ebf6f93bf325483c21ffd24d55b2c1c6809555345c0-merged.mount: Deactivated successfully.
Dec 03 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.090889695 +0000 UTC m=+0.290374825 container remove 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:21:07 compute-0 systemd[1]: libpod-conmon-34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288.scope: Deactivated successfully.
Dec 03 01:21:07 compute-0 systemd[1]: Reloading.
Dec 03 01:21:07 compute-0 systemd-rc-local-generator[219799]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:21:07 compute-0 systemd-sysv-generator[219804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:21:07 compute-0 sudo[219834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwzjhthcvyjwxaukvuilrjdfntuthzbg ; /usr/bin/python3'
Dec 03 01:21:07 compute-0 sudo[219834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:07 compute-0 systemd[1]: Reloading.
Dec 03 01:21:07 compute-0 podman[219836]: 2025-12-03 01:21:07.731091271 +0000 UTC m=+0.108068768 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 03 01:21:07 compute-0 python3[219839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:07 compute-0 systemd-rc-local-generator[219881]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:21:07 compute-0 systemd-sysv-generator[219892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:21:07 compute-0 podman[219867]: 2025-12-03 01:21:07.869226349 +0000 UTC m=+0.059429194 container create 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:21:07 compute-0 podman[219867]: 2025-12-03 01:21:07.845944595 +0000 UTC m=+0.036147460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:07 compute-0 ceph-mon[192821]: pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:07 compute-0 ceph-mon[192821]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:08 compute-0 systemd[1]: Started libpod-conmon-0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3.scope.
Dec 03 01:21:08 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.rxmili for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:21:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9484d7af0da6cc3db48178dfbeb2744ba8b450a7a63c3cafcaf8df0b7320b03a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9484d7af0da6cc3db48178dfbeb2744ba8b450a7a63c3cafcaf8df0b7320b03a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.2339363 +0000 UTC m=+0.424139225 container init 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.251968598 +0000 UTC m=+0.442171473 container start 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.259081405 +0000 UTC m=+0.449284280 container attach 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:08 compute-0 sshd-session[218280]: Connection closed by authenticating user root 193.32.162.157 port 35742 [preauth]
Dec 03 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.623321322 +0000 UTC m=+0.069137501 container create 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 01:21:08 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 03 01:21:08 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 03 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.598800575 +0000 UTC m=+0.044616834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.rxmili supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.721503776 +0000 UTC m=+0.167320005 container init 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:21:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 03 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.742589159 +0000 UTC m=+0.188405378 container start 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:21:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 03 01:21:08 compute-0 bash[219959]: 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243
Dec 03 01:21:08 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.rxmili for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:21:08 compute-0 sudo[219669]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev f22a4605-e838-4b3e-916f-c2f3f4566817 (Updating rgw.rgw deployment (+1 -> 1))
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event f22a4605-e838-4b3e-916f-c2f3f4566817 (Updating rgw.rgw deployment (+1 -> 1)) in 3 seconds
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 03 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 03 01:21:08 compute-0 radosgw[219997]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:21:08 compute-0 radosgw[219997]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec 03 01:21:08 compute-0 radosgw[219997]: framework: beast
Dec 03 01:21:08 compute-0 radosgw[219997]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 03 01:21:08 compute-0 radosgw[219997]: init_numa not setting numa affinity
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 2fc1d00e-bb55-41de-90e2-200846901dc1 (Updating mds.cephfs deployment (+1 -> 1))
Dec 03 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 03 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bgmlsq on compute-0
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bgmlsq on compute-0
Dec 03 01:21:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:08 compute-0 unruffled_babbage[219912]: 
Dec 03 01:21:08 compute-0 unruffled_babbage[219912]: [{"container_id": "d1d072b9d136", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.41%", "created": "2025-12-03T01:19:06.601363Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-03T01:19:06.663617Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566575Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-12-03T01:19:06.334103Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@crash.compute-0", "version": "18.2.7"}, {"container_id": "b81e9a342791", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "25.65%", "created": "2025-12-03T01:17:43.571058Z", "daemon_id": "compute-0.rysove", "daemon_name": "mgr.compute-0.rysove", "daemon_type": "mgr", "events": ["2025-12-03T01:20:18.292907Z daemon:mgr.compute-0.rysove [INFO] \"Reconfigured mgr.compute-0.rysove on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566440Z", "memory_usage": 549139251, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-03T01:17:43.394166Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.rysove", "version": "18.2.7"}, {"container_id": "d4928ec355dd", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "3.23%", "created": "2025-12-03T01:17:35.963872Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-03T01:20:17.187496Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566301Z", "memory_request": 2147483648, "memory_usage": 41030778, "ports": [], "service_name": "mon", "started": "2025-12-03T01:17:39.952128Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0", "version": "18.2.7"}, {"container_id": "42c5471d35c5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.89%", "created": "2025-12-03T01:19:40.591235Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-03T01:19:40.670029Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566666Z", "memory_request": 4294967296, "memory_usage": 67496837, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T01:19:40.340364Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@osd.0", "version": "18.2.7"}, {"container_id": "a464c63d7c32", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.11%", "created": "2025-12-03T01:19:47.558414Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-03T01:19:47.656817Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566751Z", "memory_request": 4294967296, "memory_usage": 66815262, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T01:19:47.298414Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@osd.1", "version": "18.2.7"}, {"container_id": "8463edd2b7db", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.10%", "created": "2025-12-03T01:19:54.374589Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-03T01:19:54.434839Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566833Z", "memory_request": 4294967296, "memory_usage": 66112716, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T01:19:54.176860Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.rxmili", "daemon_name": "rgw.rgw.compute-0.rxmili", "daemon_type": "rgw", "events": ["2025-12-03T01:21:08.855308Z daemon:rgw.rgw.compute-0.rxmili [INFO] \"Deployed rgw.rgw.compute-0.rxmili on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Dec 03 01:21:08 compute-0 systemd[1]: libpod-0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3.scope: Deactivated successfully.
Dec 03 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.996933409 +0000 UTC m=+1.187136284 container died 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:09 compute-0 sudo[220059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:09 compute-0 sudo[220059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:09 compute-0 sudo[220059]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9484d7af0da6cc3db48178dfbeb2744ba8b450a7a63c3cafcaf8df0b7320b03a-merged.mount: Deactivated successfully.
Dec 03 01:21:09 compute-0 podman[219867]: 2025-12-03 01:21:09.082820133 +0000 UTC m=+1.273022978 container remove 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:21:09 compute-0 systemd[1]: libpod-conmon-0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3.scope: Deactivated successfully.
Dec 03 01:21:09 compute-0 sudo[219834]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:09 compute-0 sudo[220097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:09 compute-0 sudo[220097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:09 compute-0 sudo[220097]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:09 compute-0 sudo[220122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:09 compute-0 sudo[220122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:09 compute-0 sudo[220122]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:09 compute-0 sudo[220147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec 03 01:21:09 compute-0 sudo[220147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:09 compute-0 ceph-mon[192821]: pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:09 compute-0 ceph-mon[192821]: 2.1e scrub starts
Dec 03 01:21:09 compute-0 ceph-mon[192821]: 2.1e scrub ok
Dec 03 01:21:09 compute-0 ceph-mon[192821]: 7.7 scrub starts
Dec 03 01:21:09 compute-0 ceph-mon[192821]: 7.7 scrub ok
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:09 compute-0 ceph-mon[192821]: Saving service rgw.rgw spec with placement compute-0
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:09 compute-0 ceph-mon[192821]: Deploying daemon mds.cephfs.compute-0.bgmlsq on compute-0
Dec 03 01:21:09 compute-0 ceph-mon[192821]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 03 01:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 03 01:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 03 01:21:09 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 03 01:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec 03 01:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 03 01:21:09 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 03 01:21:09 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 03 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.009402514 +0000 UTC m=+0.074363136 container create cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:10 compute-0 sudo[220247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymydxlouqcbaoxvjmgcccqjejthcvgzl ; /usr/bin/python3'
Dec 03 01:21:10 compute-0 sudo[220247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:10 compute-0 systemd[1]: Started libpod-conmon-cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c.scope.
Dec 03 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:09.988936498 +0000 UTC m=+0.053897140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.126209053 +0000 UTC m=+0.191169705 container init cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.136335393 +0000 UTC m=+0.201296025 container start cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.141695971 +0000 UTC m=+0.206656603 container attach cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:21:10 compute-0 cranky_satoshi[220253]: 167 167
Dec 03 01:21:10 compute-0 systemd[1]: libpod-cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c.scope: Deactivated successfully.
Dec 03 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.144719734 +0000 UTC m=+0.209680366 container died cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-555624ac1bd436b118a4efec88a7508b72f6ff2044428392fbe07d5b4a46a58d-merged.mount: Deactivated successfully.
Dec 03 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.198732217 +0000 UTC m=+0.263692819 container remove cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:21:10 compute-0 python3[220252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:10 compute-0 systemd[1]: libpod-conmon-cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c.scope: Deactivated successfully.
Dec 03 01:21:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v115: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:10 compute-0 systemd[1]: Reloading.
Dec 03 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.284045095 +0000 UTC m=+0.056721349 container create 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.265115412 +0000 UTC m=+0.037791696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:10 compute-0 systemd-rc-local-generator[220315]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:21:10 compute-0 systemd-sysv-generator[220318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:21:10 compute-0 systemd[1]: Started libpod-conmon-62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f.scope.
Dec 03 01:21:10 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da662e3b693213a5bfd44acc6e7f4b595097da56dfd958b6a0f6205832bd8a61/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da662e3b693213a5bfd44acc6e7f4b595097da56dfd958b6a0f6205832bd8a61/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:10 compute-0 systemd[1]: Reloading.
Dec 03 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.823788343 +0000 UTC m=+0.596464607 container init 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.839635551 +0000 UTC m=+0.612311795 container start 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.843784036 +0000 UTC m=+0.616460280 container attach 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:21:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:21:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 03 01:21:10 compute-0 ceph-mon[192821]: osdmap e43: 3 total, 3 up, 3 in
Dec 03 01:21:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 03 01:21:10 compute-0 ceph-mon[192821]: 4.1e scrub starts
Dec 03 01:21:10 compute-0 ceph-mon[192821]: 4.1e scrub ok
Dec 03 01:21:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 03 01:21:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 03 01:21:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 03 01:21:10 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 44 pg[8.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:10 compute-0 systemd-rc-local-generator[220362]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:21:11 compute-0 systemd-sysv-generator[220366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:21:11 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.bgmlsq for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec 03 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 03 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935809746' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:21:11 compute-0 serene_galois[220326]: 
Dec 03 01:21:11 compute-0 serene_galois[220326]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false},"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":211,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1764724803,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"unknown","count":1}],"num_pgs":194,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84156416,"bytes_avail":64327770112,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051546390168368816},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-03T01:21:04.220965+0000","services":{}},"progress_events":{"2fc1d00e-bb55-41de-90e2-200846901dc1":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 03 01:21:11 compute-0 systemd[1]: libpod-62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f.scope: Deactivated successfully.
Dec 03 01:21:11 compute-0 podman[220270]: 2025-12-03 01:21:11.581825995 +0000 UTC m=+1.354502279 container died 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-da662e3b693213a5bfd44acc6e7f4b595097da56dfd958b6a0f6205832bd8a61-merged.mount: Deactivated successfully.
Dec 03 01:21:11 compute-0 podman[220270]: 2025-12-03 01:21:11.670289611 +0000 UTC m=+1.442965855 container remove 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:21:11 compute-0 systemd[1]: libpod-conmon-62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f.scope: Deactivated successfully.
Dec 03 01:21:11 compute-0 sudo[220247]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.745281503 +0000 UTC m=+0.063770103 container create 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:11 compute-0 podman[220435]: 2025-12-03 01:21:11.771741655 +0000 UTC m=+0.136914136 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec 03 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.719886231 +0000 UTC m=+0.038374851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bgmlsq supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.872982893 +0000 UTC m=+0.191471503 container init 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.891287759 +0000 UTC m=+0.209776399 container start 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:11 compute-0 bash[220455]: 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5
Dec 03 01:21:11 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.bgmlsq for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec 03 01:21:11 compute-0 ceph-mon[192821]: pgmap v115: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:11 compute-0 ceph-mon[192821]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 03 01:21:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 03 01:21:11 compute-0 ceph-mon[192821]: osdmap e44: 3 total, 3 up, 3 in
Dec 03 01:21:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3935809746' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 03 01:21:11 compute-0 sudo[220147]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 03 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 03 01:21:11 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 03 01:21:11 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec 03 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 03 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:21:11 compute-0 ceph-mds[220488]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:21:11 compute-0 ceph-mds[220488]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec 03 01:21:11 compute-0 ceph-mds[220488]: main not setting numa affinity
Dec 03 01:21:11 compute-0 ceph-mds[220488]: pidfile_write: ignore empty --pid-file
Dec 03 01:21:11 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq[220484]: starting mds.cephfs.compute-0.bgmlsq at 
Dec 03 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 03 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 2fc1d00e-bb55-41de-90e2-200846901dc1 (Updating mds.cephfs deployment (+1 -> 1))
Dec 03 01:21:12 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 2fc1d00e-bb55-41de-90e2-200846901dc1 (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Dec 03 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Dec 03 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 2 from mon.0
Dec 03 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 03 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 sudo[220507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:12 compute-0 sudo[220507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:12 compute-0 sudo[220507]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v118: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:12 compute-0 sudo[220532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:21:12 compute-0 sudo[220532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:12 compute-0 sudo[220532]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:12 compute-0 sudo[220557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:12 compute-0 sudo[220557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:12 compute-0 sudo[220557]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:12 compute-0 sudo[220582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:12 compute-0 sudo[220582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:12 compute-0 sudo[220582]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:12 compute-0 sudo[220631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hezeqfphrohwpdihqoixockfbhkyxgkb ; /usr/bin/python3'
Dec 03 01:21:12 compute-0 sudo[220631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:12 compute-0 sudo[220630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:12 compute-0 sudo[220630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:12 compute-0 sudo[220630]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:12 compute-0 python3[220644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:12 compute-0 sudo[220658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:21:12 compute-0 sudo[220658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:12 compute-0 podman[220682]: 2025-12-03 01:21:12.936996203 +0000 UTC m=+0.082474001 container create abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 03 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 03 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 03 01:21:12 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 03 01:21:12 compute-0 ceph-mon[192821]: osdmap e45: 3 total, 3 up, 3 in
Dec 03 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 03 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:12 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 46 pg[9.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:12.913214406 +0000 UTC m=+0.058692244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:13 compute-0 systemd[1]: Started libpod-conmon-abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561.scope.
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 new map
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 print_map
                                            e3
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        2
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-12-03T01:20:48.773680+0000
                                            modified        2025-12-03T01:20:48.773725+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        
                                            up        {}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                             
                                             
                                            Standby daemons:
                                             
                                            [mds.cephfs.compute-0.bgmlsq{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] compat {c=[1],r=[1],i=[7ff]}]
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 3 from mon.0
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Monitors have assigned me to become a standby.
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] up:boot
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] as mds.0
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgmlsq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bgmlsq"} v 0) v1
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bgmlsq"}]: dispatch
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 all = 0
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e4 new map
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e4 print_map
                                            e4
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        4
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-12-03T01:20:48.773680+0000
                                            modified        2025-12-03T01:21:13.027391+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        0
                                            up        {0=14271}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                            [mds.cephfs.compute-0.bgmlsq{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] compat {c=[1],r=[1],i=[7ff]}]
                                             
                                             
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 4 from mon.0
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgmlsq=up:creating}
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map i am now mds.0.4
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x1
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x100
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x600
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x601
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x602
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x603
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x604
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x605
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x606
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x607
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x608
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x609
Dec 03 01:21:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae06884a7e1ccf36ce3e4804e78741e5c58771aebe20e0809e909bd142781265/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae06884a7e1ccf36ce3e4804e78741e5c58771aebe20e0809e909bd142781265/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.0953467 +0000 UTC m=+0.240824518 container init abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.107253539 +0000 UTC m=+0.252731347 container start abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.125269687 +0000 UTC m=+0.270747525 container attach abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:21:13 compute-0 ceph-mds[220488]: mds.0.4 creating_done
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgmlsq is now active in filesystem cephfs as rank 0
Dec 03 01:21:13 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 12 completed events
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:13 compute-0 podman[220795]: 2025-12-03 01:21:13.622208039 +0000 UTC m=+0.103621422 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 03 01:21:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707348916' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:21:13 compute-0 gracious_bell[220711]: 
Dec 03 01:21:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 03 01:21:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 03 01:21:13 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 03 01:21:13 compute-0 gracious_bell[220711]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_i
nsecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.rxmili","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 03 01:21:13 compute-0 systemd[1]: libpod-abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561.scope: Deactivated successfully.
Dec 03 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.723519606 +0000 UTC m=+0.868997444 container died abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:21:13 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 03 01:21:13 compute-0 podman[220795]: 2025-12-03 01:21:13.744634506 +0000 UTC m=+0.226047919 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae06884a7e1ccf36ce3e4804e78741e5c58771aebe20e0809e909bd142781265-merged.mount: Deactivated successfully.
Dec 03 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.83100878 +0000 UTC m=+0.976486618 container remove abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:13 compute-0 systemd[1]: libpod-conmon-abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561.scope: Deactivated successfully.
Dec 03 01:21:13 compute-0 sudo[220631]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 03 01:21:13 compute-0 ceph-mon[192821]: pgmap v118: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 03 01:21:13 compute-0 ceph-mon[192821]: osdmap e46: 3 total, 3 up, 3 in
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mds.? [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] up:boot
Dec 03 01:21:13 compute-0 ceph-mon[192821]: daemon mds.cephfs.compute-0.bgmlsq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 03 01:21:13 compute-0 ceph-mon[192821]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 03 01:21:13 compute-0 ceph-mon[192821]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 03 01:21:13 compute-0 ceph-mon[192821]: fsmap cephfs:0 1 up:standby
Dec 03 01:21:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bgmlsq"}]: dispatch
Dec 03 01:21:13 compute-0 ceph-mon[192821]: fsmap cephfs:1 {0=cephfs.compute-0.bgmlsq=up:creating}
Dec 03 01:21:13 compute-0 ceph-mon[192821]: daemon mds.cephfs.compute-0.bgmlsq is now active in filesystem cephfs as rank 0
Dec 03 01:21:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/707348916' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 03 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 03 01:21:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 03 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec 03 01:21:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 03 01:21:14 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 03 01:21:14 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 03 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e5 new map
Dec 03 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e5 print_map
                                            e5
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        5
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-12-03T01:20:48.773680+0000
                                            modified        2025-12-03T01:21:14.042948+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        0
                                            up        {0=14271}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                            [mds.cephfs.compute-0.bgmlsq{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] compat {c=[1],r=[1],i=[7ff]}]
                                             
                                             
Dec 03 01:21:14 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 5 from mon.0
Dec 03 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map i am now mds.0.4
Dec 03 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec 03 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 recovery_done -- successful recovery!
Dec 03 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 active_start
Dec 03 01:21:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] up:active
Dec 03 01:21:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgmlsq=up:active}
Dec 03 01:21:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v121: 196 pgs: 2 unknown, 1 creating+peering, 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s wr, 4 op/s
Dec 03 01:21:14 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:14 compute-0 sudo[220982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzftmyrjyjghkurjckeqlwwmjuedcrys ; /usr/bin/python3'
Dec 03 01:21:14 compute-0 sudo[220982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:14 compute-0 sudo[220658]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:21:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:21:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:14 compute-0 ceph-mon[192821]: 7.b scrub starts
Dec 03 01:21:14 compute-0 ceph-mon[192821]: 7.b scrub ok
Dec 03 01:21:14 compute-0 ceph-mon[192821]: 5.6 scrub starts
Dec 03 01:21:14 compute-0 ceph-mon[192821]: 5.6 scrub ok
Dec 03 01:21:14 compute-0 ceph-mon[192821]: osdmap e47: 3 total, 3 up, 3 in
Dec 03 01:21:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 03 01:21:14 compute-0 ceph-mon[192821]: 4.1f scrub starts
Dec 03 01:21:14 compute-0 ceph-mon[192821]: 4.1f scrub ok
Dec 03 01:21:14 compute-0 ceph-mon[192821]: mds.? [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] up:active
Dec 03 01:21:14 compute-0 ceph-mon[192821]: fsmap cephfs:1 {0=cephfs.compute-0.bgmlsq=up:active}
Dec 03 01:21:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 03 01:21:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 03 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 03 01:21:15 compute-0 python3[220985]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:15 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 03 01:21:15 compute-0 sudo[220986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:15 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 03 01:21:15 compute-0 sudo[220986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:15 compute-0 sudo[220986]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 48 pg[10.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:15 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 03 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.129872588 +0000 UTC m=+0.072606612 container create c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:21:15 compute-0 sudo[221017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:15 compute-0 sudo[221017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:15 compute-0 sudo[221017]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.098227643 +0000 UTC m=+0.040961697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:15 compute-0 systemd[1]: Started libpod-conmon-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope.
Dec 03 01:21:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c171330c5e56faeb7a838cffd806ade4739b69e13a6f277983f07c4853e41a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c171330c5e56faeb7a838cffd806ade4739b69e13a6f277983f07c4853e41a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.261900435 +0000 UTC m=+0.204634489 container init c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.27097955 +0000 UTC m=+0.213713564 container start c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.276180081 +0000 UTC m=+0.218914185 container attach c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:15 compute-0 sudo[221051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:15 compute-0 sudo[221051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:15 compute-0 sudo[221051]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:15 compute-0 sudo[221080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:21:15 compute-0 sudo[221080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Dec 03 01:21:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688098421' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 03 01:21:15 compute-0 practical_jepsen[221055]: mimic
Dec 03 01:21:15 compute-0 systemd[1]: libpod-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope: Deactivated successfully.
Dec 03 01:21:15 compute-0 conmon[221055]: conmon c58be8a62780046628e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope/container/memory.events
Dec 03 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.859699255 +0000 UTC m=+0.802433359 container died c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c171330c5e56faeb7a838cffd806ade4739b69e13a6f277983f07c4853e41a-merged.mount: Deactivated successfully.
Dec 03 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.959029038 +0000 UTC m=+0.901763062 container remove c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:21:15 compute-0 systemd[1]: libpod-conmon-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope: Deactivated successfully.
Dec 03 01:21:15 compute-0 sudo[220982]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:15 compute-0 sudo[221080]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:16 compute-0 ceph-mon[192821]: pgmap v121: 196 pgs: 2 unknown, 1 creating+peering, 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s wr, 4 op/s
Dec 03 01:21:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 03 01:21:16 compute-0 ceph-mon[192821]: osdmap e48: 3 total, 3 up, 3 in
Dec 03 01:21:16 compute-0 ceph-mon[192821]: 6.3 scrub starts
Dec 03 01:21:16 compute-0 ceph-mon[192821]: 6.3 scrub ok
Dec 03 01:21:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/688098421' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b51972ea-25de-4b21-a61c-86e331f6aea6 does not exist
Dec 03 01:21:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7d772d0f-3c1e-44b5-a0ff-77fd027ff35e does not exist
Dec 03 01:21:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fb9bce39-566e-46d5-a2f8-54ea31910a6b does not exist
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:16 compute-0 sudo[221180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:16 compute-0 sudo[221180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:16 compute-0 sudo[221180]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 2 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 14 op/s
Dec 03 01:21:16 compute-0 sudo[221205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:16 compute-0 sudo[221205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:16 compute-0 sudo[221205]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:16 compute-0 sudo[221230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:16 compute-0 sudo[221230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:16 compute-0 sudo[221230]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:16 compute-0 sudo[221261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:21:16 compute-0 sudo[221261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:16 compute-0 podman[221254]: 2025-12-03 01:21:16.697809128 +0000 UTC m=+0.169663396 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:21:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:16 compute-0 sudo[221328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgjlxhttputzerxarhwfdagdppyoozue ; /usr/bin/python3'
Dec 03 01:21:16 compute-0 sudo[221328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:16 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 03 01:21:16 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 03 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:21:17 compute-0 ceph-mon[192821]: osdmap e49: 3 total, 3 up, 3 in
Dec 03 01:21:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 03 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 03 01:21:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 03 01:21:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 03 01:21:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 03 01:21:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec 03 01:21:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 03 01:21:17 compute-0 python3[221340]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:17 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.189086069 +0000 UTC m=+0.080571998 container create a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.212756398 +0000 UTC m=+0.073242309 container create 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.14802478 +0000 UTC m=+0.039510819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:17 compute-0 systemd[1]: Started libpod-conmon-a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b.scope.
Dec 03 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.182145151 +0000 UTC m=+0.042631152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:17 compute-0 systemd[1]: Started libpod-conmon-6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d.scope.
Dec 03 01:21:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e746f55c848ae8cc54b53a170c447a0c4bd396ec41f0d7919fd98a37b89404b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e746f55c848ae8cc54b53a170c447a0c4bd396ec41f0d7919fd98a37b89404b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.376935874 +0000 UTC m=+0.268421843 container init a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.382286238 +0000 UTC m=+0.242772259 container init 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.393598174 +0000 UTC m=+0.285084133 container start a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.39346449 +0000 UTC m=+0.253950411 container start 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.399831532 +0000 UTC m=+0.291317531 container attach a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:17 compute-0 flamboyant_dijkstra[221397]: 167 167
Dec 03 01:21:17 compute-0 systemd[1]: libpod-6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d.scope: Deactivated successfully.
Dec 03 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.412055693 +0000 UTC m=+0.272541634 container attach 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.412465954 +0000 UTC m=+0.272951895 container died 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f160de2dfcaa116ce3823a884fcd36cb519f67bd6f31ef757f50c676b717a975-merged.mount: Deactivated successfully.
Dec 03 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.496120914 +0000 UTC m=+0.356606835 container remove 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 01:21:17 compute-0 systemd[1]: libpod-conmon-6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d.scope: Deactivated successfully.
Dec 03 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.798381818 +0000 UTC m=+0.081682697 container create 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.766755604 +0000 UTC m=+0.050056523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:17 compute-0 systemd[1]: Started libpod-conmon-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope.
Dec 03 01:21:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.97611737 +0000 UTC m=+0.259418229 container init 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.993584862 +0000 UTC m=+0.276885711 container start 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.999174843 +0000 UTC m=+0.282475712 container attach 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:21:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 03 01:21:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 03 01:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 03 01:21:18 compute-0 awesome_black[221395]: 
Dec 03 01:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Dec 03 01:21:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4146977044' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 03 01:21:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 03 01:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 03 01:21:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq[220484]: 2025-12-03T01:21:18.072+0000 7f8c6dd4f640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 03 01:21:18 compute-0 ceph-mds[220488]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 03 01:21:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 03 01:21:18 compute-0 ceph-mon[192821]: pgmap v124: 197 pgs: 2 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 14 op/s
Dec 03 01:21:18 compute-0 ceph-mon[192821]: 6.5 scrub starts
Dec 03 01:21:18 compute-0 ceph-mon[192821]: 6.5 scrub ok
Dec 03 01:21:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 03 01:21:18 compute-0 ceph-mon[192821]: osdmap e50: 3 total, 3 up, 3 in
Dec 03 01:21:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 03 01:21:18 compute-0 awesome_black[221395]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Dec 03 01:21:18 compute-0 systemd[1]: libpod-a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b.scope: Deactivated successfully.
Dec 03 01:21:18 compute-0 podman[221465]: 2025-12-03 01:21:18.195475806 +0000 UTC m=+0.058244374 container died a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 2 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Dec 03 01:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e746f55c848ae8cc54b53a170c447a0c4bd396ec41f0d7919fd98a37b89404b6-merged.mount: Deactivated successfully.
Dec 03 01:21:18 compute-0 podman[221465]: 2025-12-03 01:21:18.28888789 +0000 UTC m=+0.151656378 container remove a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:21:18 compute-0 systemd[1]: libpod-conmon-a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b.scope: Deactivated successfully.
Dec 03 01:21:18 compute-0 sudo[221328]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:18 compute-0 radosgw[219997]: LDAP not started since no server URIs were provided in the configuration.
Dec 03 01:21:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili[219992]: 2025-12-03T01:21:18.362+0000 7f7f0a2ec940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 03 01:21:18 compute-0 radosgw[219997]: framework: beast
Dec 03 01:21:18 compute-0 radosgw[219997]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 03 01:21:18 compute-0 radosgw[219997]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 03 01:21:18 compute-0 radosgw[219997]: starting handler: beast
Dec 03 01:21:18 compute-0 radosgw[219997]: set uid:gid to 167:167 (ceph:ceph)
Dec 03 01:21:18 compute-0 radosgw[219997]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.rxmili,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=12c704e0-e093-4406-ae80-90d748b4eab2,zone_name=default,zonegroup_id=ed145ee8-3dce-430d-a202-c3985ac1478c,zonegroup_name=default}
Dec 03 01:21:18 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts
Dec 03 01:21:18 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok
Dec 03 01:21:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 03 01:21:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 03 01:21:19 compute-0 ceph-mon[192821]: 6.7 scrub starts
Dec 03 01:21:19 compute-0 ceph-mon[192821]: 6.7 scrub ok
Dec 03 01:21:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4146977044' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 03 01:21:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 03 01:21:19 compute-0 ceph-mon[192821]: osdmap e51: 3 total, 3 up, 3 in
Dec 03 01:21:19 compute-0 ceph-mon[192821]: 7.d deep-scrub starts
Dec 03 01:21:19 compute-0 ceph-mon[192821]: 7.d deep-scrub ok
Dec 03 01:21:19 compute-0 suspicious_burnell[221458]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:21:19 compute-0 suspicious_burnell[221458]: --> relative data size: 1.0
Dec 03 01:21:19 compute-0 suspicious_burnell[221458]: --> All data devices are unavailable
Dec 03 01:21:19 compute-0 systemd[1]: libpod-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope: Deactivated successfully.
Dec 03 01:21:19 compute-0 systemd[1]: libpod-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope: Consumed 1.235s CPU time.
Dec 03 01:21:19 compute-0 podman[221424]: 2025-12-03 01:21:19.34827342 +0000 UTC m=+1.631574299 container died 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e-merged.mount: Deactivated successfully.
Dec 03 01:21:19 compute-0 podman[221424]: 2025-12-03 01:21:19.452731512 +0000 UTC m=+1.736032341 container remove 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:21:19 compute-0 systemd[1]: libpod-conmon-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope: Deactivated successfully.
Dec 03 01:21:19 compute-0 sudo[221261]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:19 compute-0 sudo[222057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:19 compute-0 sudo[222057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:19 compute-0 sudo[222057]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:19 compute-0 sudo[222082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:19 compute-0 sudo[222082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:19 compute-0 sudo[222082]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:19 compute-0 sudo[222107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:19 compute-0 sudo[222107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:19 compute-0 sudo[222107]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 03 01:21:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 03 01:21:20 compute-0 sudo[222132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:21:20 compute-0 sudo[222132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:20 compute-0 ceph-mon[192821]: pgmap v127: 197 pgs: 2 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Dec 03 01:21:20 compute-0 ceph-mon[192821]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 03 01:21:20 compute-0 ceph-mon[192821]: Cluster is now healthy
Dec 03 01:21:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 982 B/s rd, 1.3 KiB/s wr, 4 op/s
Dec 03 01:21:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.538037182 +0000 UTC m=+0.060673090 container create f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:20 compute-0 sshd-session[219995]: Connection closed by authenticating user root 193.32.162.157 port 55696 [preauth]
Dec 03 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.519614444 +0000 UTC m=+0.042250372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:20 compute-0 systemd[1]: Started libpod-conmon-f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae.scope.
Dec 03 01:21:20 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.746344409 +0000 UTC m=+0.268980357 container init f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.768751155 +0000 UTC m=+0.291387103 container start f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.775289861 +0000 UTC m=+0.297925809 container attach f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:21:20 compute-0 elated_aryabhata[222210]: 167 167
Dec 03 01:21:20 compute-0 systemd[1]: libpod-f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae.scope: Deactivated successfully.
Dec 03 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.782311401 +0000 UTC m=+0.304947339 container died f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca7ee2198628af7d2b2e9f227896da255ab8527f88de36c8804d198851f1fd0-merged.mount: Deactivated successfully.
Dec 03 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.864707787 +0000 UTC m=+0.387343705 container remove f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:20 compute-0 systemd[1]: libpod-conmon-f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae.scope: Deactivated successfully.
Dec 03 01:21:21 compute-0 ceph-mon[192821]: 6.9 scrub starts
Dec 03 01:21:21 compute-0 ceph-mon[192821]: 6.9 scrub ok
Dec 03 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.115960675 +0000 UTC m=+0.076700683 container create fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.082493711 +0000 UTC m=+0.043233779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:21 compute-0 systemd[1]: Started libpod-conmon-fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7.scope.
Dec 03 01:21:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.253992494 +0000 UTC m=+0.214732512 container init fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.285922105 +0000 UTC m=+0.246662123 container start fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.292916294 +0000 UTC m=+0.253656312 container attach fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 03 01:21:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 03 01:21:22 compute-0 kind_poincare[222251]: {
Dec 03 01:21:22 compute-0 kind_poincare[222251]:     "0": [
Dec 03 01:21:22 compute-0 kind_poincare[222251]:         {
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "devices": [
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "/dev/loop3"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             ],
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_name": "ceph_lv0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_size": "21470642176",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "name": "ceph_lv0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "tags": {
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.crush_device_class": "",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.encrypted": "0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osd_id": "0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.type": "block",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.vdo": "0"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             },
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "type": "block",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "vg_name": "ceph_vg0"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:         }
Dec 03 01:21:22 compute-0 kind_poincare[222251]:     ],
Dec 03 01:21:22 compute-0 kind_poincare[222251]:     "1": [
Dec 03 01:21:22 compute-0 kind_poincare[222251]:         {
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "devices": [
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "/dev/loop4"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             ],
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_name": "ceph_lv1",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_size": "21470642176",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "name": "ceph_lv1",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "tags": {
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.crush_device_class": "",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.encrypted": "0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osd_id": "1",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.type": "block",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.vdo": "0"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             },
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "type": "block",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "vg_name": "ceph_vg1"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:         }
Dec 03 01:21:22 compute-0 kind_poincare[222251]:     ],
Dec 03 01:21:22 compute-0 kind_poincare[222251]:     "2": [
Dec 03 01:21:22 compute-0 kind_poincare[222251]:         {
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "devices": [
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "/dev/loop5"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             ],
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_name": "ceph_lv2",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_size": "21470642176",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "name": "ceph_lv2",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "tags": {
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.crush_device_class": "",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.encrypted": "0",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osd_id": "2",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.type": "block",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:                 "ceph.vdo": "0"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             },
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "type": "block",
Dec 03 01:21:22 compute-0 kind_poincare[222251]:             "vg_name": "ceph_vg2"
Dec 03 01:21:22 compute-0 kind_poincare[222251]:         }
Dec 03 01:21:22 compute-0 kind_poincare[222251]:     ]
Dec 03 01:21:22 compute-0 kind_poincare[222251]: }
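[annotation] The JSON block printed by kind_poincare above is `ceph-volume lvm list --format json` (invoked via cephadm at 01:21:20), keyed by OSD id; cephadm uses it to map each OSD to its backing logical volume. A minimal sketch, assuming the blob were saved to a hypothetical file lvm_list.json, that prints an OSD-to-device summary:

    import json

    # Map each OSD id to its LV path, physical device(s), and osd_fsid.
    with open("lvm_list.json") as f:
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv.get('devices', []))}"
                  f" osd_fsid={tags.get('ceph.osd_fsid', '?')}")

For the data above this would report osd.0 on /dev/loop3 (ceph_vg0/ceph_lv0), osd.1 on /dev/loop4, and osd.2 on /dev/loop5, which also explains the earlier suspicious_burnell output "passed data devices: 0 physical, 3 LVM".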
Dec 03 01:21:22 compute-0 ceph-mon[192821]: pgmap v128: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 982 B/s rd, 1.3 KiB/s wr, 4 op/s
Dec 03 01:21:22 compute-0 systemd[1]: libpod-fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7.scope: Deactivated successfully.
Dec 03 01:21:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 KiB/s wr, 246 op/s
Dec 03 01:21:22 compute-0 podman[222260]: 2025-12-03 01:21:22.233835254 +0000 UTC m=+0.065031308 container died fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 01:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f-merged.mount: Deactivated successfully.
Dec 03 01:21:22 compute-0 podman[222260]: 2025-12-03 01:21:22.337447263 +0000 UTC m=+0.168643227 container remove fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:21:22 compute-0 systemd[1]: libpod-conmon-fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7.scope: Deactivated successfully.
Dec 03 01:21:22 compute-0 sudo[222132]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:22 compute-0 sudo[222272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:22 compute-0 sudo[222272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:22 compute-0 sudo[222272]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:22 compute-0 sudo[222297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:22 compute-0 sudo[222297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:22 compute-0 sudo[222297]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:22 compute-0 sudo[222322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:22 compute-0 sudo[222322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 03 01:21:22 compute-0 sudo[222322]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 03 01:21:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 03 01:21:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 03 01:21:22 compute-0 sudo[222347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:21:22 compute-0 sudo[222347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:23 compute-0 ceph-mon[192821]: 6.a scrub starts
Dec 03 01:21:23 compute-0 ceph-mon[192821]: 6.a scrub ok
Dec 03 01:21:23 compute-0 ceph-mon[192821]: 5.8 scrub starts
Dec 03 01:21:23 compute-0 ceph-mon[192821]: 5.8 scrub ok
Dec 03 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.529438875 +0000 UTC m=+0.079684804 container create 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:21:23 compute-0 systemd[1]: Started libpod-conmon-44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea.scope.
Dec 03 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.501074599 +0000 UTC m=+0.051320618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:23 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.657359801 +0000 UTC m=+0.207605740 container init 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.671919414 +0000 UTC m=+0.222165373 container start 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.679871439 +0000 UTC m=+0.230117368 container attach 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:21:23 compute-0 pedantic_northcutt[222424]: 167 167
Dec 03 01:21:23 compute-0 systemd[1]: libpod-44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea.scope: Deactivated successfully.
Dec 03 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.684818663 +0000 UTC m=+0.235064632 container died 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-65e9487937eed16ba70e418fcf74f2aac6854b88456784a7b18af3e1d36d75a6-merged.mount: Deactivated successfully.
Dec 03 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.757222589 +0000 UTC m=+0.307468558 container remove 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:21:23 compute-0 systemd[1]: libpod-conmon-44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea.scope: Deactivated successfully.
Dec 03 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.056152575 +0000 UTC m=+0.104372841 container create 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.005976949 +0000 UTC m=+0.054197265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:24 compute-0 systemd[1]: Started libpod-conmon-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope.
Dec 03 01:21:24 compute-0 ceph-mon[192821]: pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 KiB/s wr, 246 op/s
Dec 03 01:21:24 compute-0 ceph-mon[192821]: 6.10 scrub starts
Dec 03 01:21:24 compute-0 ceph-mon[192821]: 6.10 scrub ok
Dec 03 01:21:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.0 KiB/s wr, 190 op/s
Dec 03 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.249647232 +0000 UTC m=+0.297867518 container init 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.2699203 +0000 UTC m=+0.318140576 container start 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.276858217 +0000 UTC m=+0.325078463 container attach 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:21:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 03 01:21:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 03 01:21:25 compute-0 ceph-mon[192821]: pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.0 KiB/s wr, 190 op/s
Dec 03 01:21:25 compute-0 ceph-mon[192821]: 7.10 scrub starts
Dec 03 01:21:25 compute-0 ceph-mon[192821]: 7.10 scrub ok
Dec 03 01:21:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:25 compute-0 nice_bohr[222463]: {
Dec 03 01:21:25 compute-0 nice_bohr[222463]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "osd_id": 2,
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "type": "bluestore"
Dec 03 01:21:25 compute-0 nice_bohr[222463]:     },
Dec 03 01:21:25 compute-0 nice_bohr[222463]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "osd_id": 1,
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "type": "bluestore"
Dec 03 01:21:25 compute-0 nice_bohr[222463]:     },
Dec 03 01:21:25 compute-0 nice_bohr[222463]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "osd_id": 0,
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:21:25 compute-0 nice_bohr[222463]:         "type": "bluestore"
Dec 03 01:21:25 compute-0 nice_bohr[222463]:     }
Dec 03 01:21:25 compute-0 nice_bohr[222463]: }
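[annotation] The nice_bohr output above is `ceph-volume raw list --format json` (invoked at 01:21:22), reporting the same three bluestore OSDs keyed by osd_uuid. A small consistency check, again under the assumption of a saved file raw_list.json, verifying each entry carries the cluster fsid seen throughout this log:

    import json

    CLUSTER_FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"  # fsid from this log

    with open("raw_list.json") as f:
        raw = json.load(f)

    for osd_uuid, info in sorted(raw.items()):
        status = "OK" if info["ceph_fsid"] == CLUSTER_FSID else "MISMATCH"
        print(f"osd.{info['osd_id']} {info['device']} ({info['type']}): {status}")

All three entries above match this fsid, consistent with the healthy "3 total, 3 up, 3 in" osdmap reported by the monitor.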
Dec 03 01:21:25 compute-0 systemd[1]: libpod-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope: Deactivated successfully.
Dec 03 01:21:25 compute-0 systemd[1]: libpod-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope: Consumed 1.085s CPU time.
Dec 03 01:21:25 compute-0 conmon[222463]: conmon 477d62980865055bf3bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope/container/memory.events
Dec 03 01:21:25 compute-0 podman[222446]: 2025-12-03 01:21:25.354623043 +0000 UTC m=+1.402843319 container died 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3-merged.mount: Deactivated successfully.
Dec 03 01:21:25 compute-0 podman[222446]: 2025-12-03 01:21:25.454117181 +0000 UTC m=+1.502337447 container remove 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:21:25 compute-0 systemd[1]: libpod-conmon-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope: Deactivated successfully.
Dec 03 01:21:25 compute-0 sudo[222347]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:21:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:21:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 11f94589-e536-405f-9731-6bffd45fe368 does not exist
Dec 03 01:21:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 25d082bf-00a8-4264-aad9-8ac0c5974ecb does not exist
Dec 03 01:21:25 compute-0 sudo[222507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:25 compute-0 sudo[222507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:25 compute-0 sudo[222507]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:25 compute-0 sudo[222532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:21:25 compute-0 sudo[222532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:25 compute-0 sudo[222532]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:25 compute-0 sudo[222557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:25 compute-0 sudo[222557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:25 compute-0 sudo[222557]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:26 compute-0 sudo[222582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:26 compute-0 sudo[222582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:26 compute-0 sudo[222582]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:26 compute-0 sudo[222607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:26 compute-0 sudo[222607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:26 compute-0 sudo[222607]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.5 KiB/s wr, 170 op/s
Dec 03 01:21:26 compute-0 sudo[222640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:21:26 compute-0 sudo[222640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:26 compute-0 sudo[222668]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idmmtsoqtzjjjaahrhwoimwncdiqxuuh ; /usr/bin/python3'
Dec 03 01:21:26 compute-0 sudo[222668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:26 compute-0 python3[222682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.713353509 +0000 UTC m=+0.092653144 container create a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.673321388 +0000 UTC m=+0.052621083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:26 compute-0 systemd[1]: Started libpod-conmon-a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994.scope.
Dec 03 01:21:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca851a31dcf49dd594a83ec3ebaf6559f58131d8f657ce91946562b3caea05df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca851a31dcf49dd594a83ec3ebaf6559f58131d8f657ce91946562b3caea05df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.867116813 +0000 UTC m=+0.246416518 container init a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.878799939 +0000 UTC m=+0.258099574 container start a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.884865833 +0000 UTC m=+0.264165528 container attach a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:21:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 03 01:21:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 03 01:21:27 compute-0 relaxed_pascal[222727]: could not fetch user info: no user info saved
Dec 03 01:21:27 compute-0 systemd[1]: libpod-a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994.scope: Deactivated successfully.
Dec 03 01:21:27 compute-0 podman[222695]: 2025-12-03 01:21:27.326457833 +0000 UTC m=+0.705757518 container died a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 01:21:27 compute-0 podman[222837]: 2025-12-03 01:21:27.346209396 +0000 UTC m=+0.126956010 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca851a31dcf49dd594a83ec3ebaf6559f58131d8f657ce91946562b3caea05df-merged.mount: Deactivated successfully.
Dec 03 01:21:27 compute-0 podman[222695]: 2025-12-03 01:21:27.388448797 +0000 UTC m=+0.767748422 container remove a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:21:27 compute-0 systemd[1]: libpod-conmon-a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994.scope: Deactivated successfully.
Dec 03 01:21:27 compute-0 sudo[222668]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:27 compute-0 podman[222837]: 2025-12-03 01:21:27.455110618 +0000 UTC m=+0.235857202 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:27 compute-0 ceph-mon[192821]: pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.5 KiB/s wr, 170 op/s
Dec 03 01:21:27 compute-0 ceph-mon[192821]: 5.a scrub starts
Dec 03 01:21:27 compute-0 ceph-mon[192821]: 5.a scrub ok
Dec 03 01:21:27 compute-0 sudo[222936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyvswwialkumngpatgaedtyedevzxeqw ; /usr/bin/python3'
Dec 03 01:21:27 compute-0 sudo[222936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:27 compute-0 python3[222947]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:27 compute-0 podman[222966]: 2025-12-03 01:21:27.939883635 +0000 UTC m=+0.093536928 container create 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:27 compute-0 podman[222966]: 2025-12-03 01:21:27.906617866 +0000 UTC m=+0.060271189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 03 01:21:28 compute-0 systemd[1]: Started libpod-conmon-687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384.scope.
Dec 03 01:21:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0a9957730585a21d5523f9a5ac3e6b82aff134a1abcd0199aa5c1b9a34de0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0a9957730585a21d5523f9a5ac3e6b82aff134a1abcd0199aa5c1b9a34de0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.086000442 +0000 UTC m=+0.239653785 container init 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.12 deep-scrub starts
Dec 03 01:21:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.12 deep-scrub ok
Dec 03 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.107949325 +0000 UTC m=+0.261602618 container start 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.116089775 +0000 UTC m=+0.269743118 container attach 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:21:28
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'backups', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.1 KiB/s wr, 154 op/s
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:21:28 compute-0 brave_beaver[222996]: {
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "user_id": "openstack",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "display_name": "openstack",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "email": "",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "suspended": 0,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "max_buckets": 1000,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "subusers": [],
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "keys": [
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         {
Dec 03 01:21:28 compute-0 brave_beaver[222996]:             "user": "openstack",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:             "access_key": "YB1RTHMPFEW18QROY7ER",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:             "secret_key": "TxYHATGxG73QroJOSanT2WXDGyRVB7BDnSwtbkdZ"
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         }
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     ],
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "swift_keys": [],
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "caps": [],
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "op_mask": "read, write, delete",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "default_placement": "",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "default_storage_class": "",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "placement_tags": [],
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "bucket_quota": {
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "enabled": false,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "check_on_raw": false,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "max_size": -1,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "max_size_kb": 0,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "max_objects": -1
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     },
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "user_quota": {
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "enabled": false,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "check_on_raw": false,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "max_size": -1,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "max_size_kb": 0,
Dec 03 01:21:28 compute-0 brave_beaver[222996]:         "max_objects": -1
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     },
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "temp_url_keys": [],
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "type": "rgw",
Dec 03 01:21:28 compute-0 brave_beaver[222996]:     "mfa_ids": []
Dec 03 01:21:28 compute-0 brave_beaver[222996]: }
Dec 03 01:21:28 compute-0 brave_beaver[222996]: 
Dec 03 01:21:28 compute-0 sudo[222640]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:21:28 compute-0 systemd[1]: libpod-687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384.scope: Deactivated successfully.
Dec 03 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.528956408 +0000 UTC m=+0.682609701 container died 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev de87f886-d48c-403a-b0cc-ddb32d2895f2 does not exist
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9e57237c-2656-4531-a88a-59122439576a does not exist
Dec 03 01:21:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b1d86312-6c16-4b49-8810-7b4497ac8729 does not exist
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:28 compute-0 ceph-mon[192821]: 6.12 deep-scrub starts
Dec 03 01:21:28 compute-0 ceph-mon[192821]: 6.12 deep-scrub ok
Dec 03 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5a0a9957730585a21d5523f9a5ac3e6b82aff134a1abcd0199aa5c1b9a34de0-merged.mount: Deactivated successfully.
Dec 03 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.588858876 +0000 UTC m=+0.742512129 container remove 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:28 compute-0 systemd[1]: libpod-conmon-687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384.scope: Deactivated successfully.
Dec 03 01:21:28 compute-0 sudo[222936]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:28 compute-0 sudo[223138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:28 compute-0 sudo[223138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:28 compute-0 sudo[223138]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:28 compute-0 sudo[223164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:28 compute-0 sudo[223164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:28 compute-0 sudo[223164]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:28 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 03 01:21:28 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 03 01:21:28 compute-0 sudo[223189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:28 compute-0 sudo[223189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:28 compute-0 sudo[223189]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:29 compute-0 sudo[223214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:21:29 compute-0 sudo[223214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:29 compute-0 ceph-mon[192821]: pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.1 KiB/s wr, 154 op/s
Dec 03 01:21:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:21:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:21:29 compute-0 ceph-mon[192821]: 5.b scrub starts
Dec 03 01:21:29 compute-0 ceph-mon[192821]: 5.b scrub ok
Dec 03 01:21:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 03 01:21:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 03 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.627361682 +0000 UTC m=+0.088783789 container create c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.591475713 +0000 UTC m=+0.052897820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:29 compute-0 systemd[1]: Started libpod-conmon-c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359.scope.
Dec 03 01:21:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:29 compute-0 podman[158098]: time="2025-12-03T01:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.77534151 +0000 UTC m=+0.236763627 container init c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.790695725 +0000 UTC m=+0.252117832 container start c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.797125578 +0000 UTC m=+0.258547685 container attach c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34189 "" "Go-http-client/1.1"
Dec 03 01:21:29 compute-0 great_boyd[223293]: 167 167
Dec 03 01:21:29 compute-0 systemd[1]: libpod-c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359.scope: Deactivated successfully.
Dec 03 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.801982489 +0000 UTC m=+0.263404566 container died c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9de458d4527777bb17e90025fc91703dd309149b6bb395edff4260c734ee8409-merged.mount: Deactivated successfully.
Dec 03 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.884137249 +0000 UTC m=+0.345559346 container remove c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:29 compute-0 sshd-session[222213]: Invalid user git from 193.32.162.157 port 53402
Dec 03 01:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6800 "" "Go-http-client/1.1"
Dec 03 01:21:29 compute-0 systemd[1]: libpod-conmon-c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359.scope: Deactivated successfully.
Dec 03 01:21:30 compute-0 sshd-session[223265]: Invalid user kyt from 80.253.31.232 port 42042
Dec 03 01:21:30 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 03 01:21:30 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 03 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.187439363 +0000 UTC m=+0.125437480 container create 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:30 compute-0 sshd-session[223265]: Received disconnect from 80.253.31.232 port 42042:11: Bye Bye [preauth]
Dec 03 01:21:30 compute-0 sshd-session[223265]: Disconnected from invalid user kyt 80.253.31.232 port 42042 [preauth]
Dec 03 01:21:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.7 KiB/s wr, 130 op/s
Dec 03 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.147446912 +0000 UTC m=+0.085445079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:30 compute-0 systemd[1]: Started libpod-conmon-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope.
Dec 03 01:21:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.380977041 +0000 UTC m=+0.318975188 container init 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.406436089 +0000 UTC m=+0.344434186 container start 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.413823159 +0000 UTC m=+0.351821306 container attach 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:30 compute-0 ceph-mon[192821]: 7.12 scrub starts
Dec 03 01:21:30 compute-0 ceph-mon[192821]: 7.12 scrub ok
Dec 03 01:21:30 compute-0 ceph-mon[192821]: 6.16 scrub starts
Dec 03 01:21:30 compute-0 ceph-mon[192821]: 6.16 scrub ok
Dec 03 01:21:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Dec 03 01:21:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Dec 03 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:21:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:21:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:21:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 03 01:21:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 03 01:21:31 compute-0 ceph-mon[192821]: pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.7 KiB/s wr, 130 op/s
Dec 03 01:21:31 compute-0 ceph-mon[192821]: 5.d deep-scrub starts
Dec 03 01:21:31 compute-0 ceph-mon[192821]: 5.d deep-scrub ok
Dec 03 01:21:31 compute-0 exciting_carver[223332]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:21:31 compute-0 exciting_carver[223332]: --> relative data size: 1.0
Dec 03 01:21:31 compute-0 exciting_carver[223332]: --> All data devices are unavailable
Dec 03 01:21:31 compute-0 systemd[1]: libpod-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope: Deactivated successfully.
Dec 03 01:21:31 compute-0 systemd[1]: libpod-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope: Consumed 1.211s CPU time.
Dec 03 01:21:31 compute-0 podman[223361]: 2025-12-03 01:21:31.741497917 +0000 UTC m=+0.045531931 container died 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1-merged.mount: Deactivated successfully.
Dec 03 01:21:31 compute-0 podman[223361]: 2025-12-03 01:21:31.817439178 +0000 UTC m=+0.121473152 container remove 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:31 compute-0 systemd[1]: libpod-conmon-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope: Deactivated successfully.
Dec 03 01:21:31 compute-0 sudo[223214]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 03 01:21:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 03 01:21:31 compute-0 sudo[223376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:31 compute-0 sudo[223376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:31 compute-0 sudo[223376]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:32 compute-0 sudo[223401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:32 compute-0 sudo[223401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:32 compute-0 sudo[223401]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 132 op/s
Dec 03 01:21:32 compute-0 sudo[223426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:32 compute-0 sudo[223426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:32 compute-0 sudo[223426]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:32 compute-0 sudo[223451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:21:32 compute-0 sudo[223451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:32 compute-0 sshd-session[222213]: Connection closed by invalid user git 193.32.162.157 port 53402 [preauth]
Dec 03 01:21:32 compute-0 ceph-mon[192821]: 7.14 scrub starts
Dec 03 01:21:32 compute-0 ceph-mon[192821]: 7.14 scrub ok
Dec 03 01:21:32 compute-0 ceph-mon[192821]: 5.e scrub starts
Dec 03 01:21:32 compute-0 ceph-mon[192821]: 5.e scrub ok
Dec 03 01:21:32 compute-0 podman[223503]: 2025-12-03 01:21:32.878951725 +0000 UTC m=+0.111860813 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:21:32 compute-0 podman[223502]: 2025-12-03 01:21:32.879044237 +0000 UTC m=+0.131637867 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:21:32 compute-0 podman[223504]: 2025-12-03 01:21:32.89691766 +0000 UTC m=+0.126947931 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 03 01:21:32 compute-0 podman[223505]: 2025-12-03 01:21:32.900561869 +0000 UTC m=+0.134298880 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
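The four podman lines above are periodic container healthcheck events: for each monitored service (openstack_network_exporter, node_exporter, ceilometer_agent_compute, ovn_controller) podman emits a health_status event carrying health_status=healthy, health_failing_streak=0, and the container's full label/config_data dump. The same stream can be followed live; below is a minimal sketch, assuming a podman recent enough (>= 4.3) to emit health_status events and to support JSON output from "podman events" — the JSON field names are best-effort assumptions, hence the defensive .get() calls.

    import json, subprocess

    # Follow podman health_status events (like the four lines above) and
    # print container name plus reported health. Assumes podman >= 4.3.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Name"), ev.get("HealthStatus", "unknown"))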
Dec 03 01:21:32 compute-0 podman[223598]: 2025-12-03 01:21:32.977440325 +0000 UTC m=+0.058863751 container create 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:33 compute-0 systemd[1]: Started libpod-conmon-667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba.scope.
Dec 03 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:32.954130886 +0000 UTC m=+0.035554302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.107426917 +0000 UTC m=+0.188850413 container init 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.120785068 +0000 UTC m=+0.202208504 container start 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.126367619 +0000 UTC m=+0.207791045 container attach 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 01:21:33 compute-0 intelligent_tu[223614]: 167 167
Dec 03 01:21:33 compute-0 systemd[1]: libpod-667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba.scope: Deactivated successfully.
Dec 03 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.134050906 +0000 UTC m=+0.215474332 container died 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab20ceccd3c583d288a0ded46de836fc2affe424cc0c5d442e8a6024064ef2c-merged.mount: Deactivated successfully.
Dec 03 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.207901021 +0000 UTC m=+0.289324447 container remove 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 03 01:21:33 compute-0 systemd[1]: libpod-conmon-667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba.scope: Deactivated successfully.
Dec 03 01:21:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 03 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.495204283 +0000 UTC m=+0.091876613 container create 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:33 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 03 01:21:33 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 03 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.460153276 +0000 UTC m=+0.056825616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:33 compute-0 systemd[1]: Started libpod-conmon-2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd.scope.
Dec 03 01:21:33 compute-0 ceph-mon[192821]: pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 132 op/s
Dec 03 01:21:33 compute-0 ceph-mon[192821]: 6.18 scrub starts
Dec 03 01:21:33 compute-0 ceph-mon[192821]: 6.18 scrub ok
Dec 03 01:21:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
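These kernel messages fire each time podman bind-mounts paths into a new ceph container: the backing XFS filesystem was created without the "bigtime" feature, so its inode timestamps are 32-bit and run out in 2038. This is a warning, not an error. Whether a given XFS filesystem has bigtime enabled can be checked with xfs_info; a minimal sketch (assumes an xfsprogs version new enough to report the bigtime flag, and uses /var/lib/containers as an illustrative path):

    import subprocess

    # Report whether the XFS filesystem backing container storage supports
    # post-2038 timestamps. The path is illustrative; adjust as needed.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        check=True, capture_output=True, text=True,
    ).stdout
    print("bigtime=1" in info)  # True -> timestamps beyond 2038 are supported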
Dec 03 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.702243066 +0000 UTC m=+0.298915446 container init 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.73199106 +0000 UTC m=+0.328663400 container start 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.738622849 +0000 UTC m=+0.335295199 container attach 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:21:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 03 01:21:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
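The pg_autoscaler numbers above are internally consistent: each "pg target" equals capacity_ratio * bias * PG budget, where the budget works out to 300 here — consistent with this 3-OSD cluster and the default mon_target_pg_per_osd of 100 (the 100 is an assumption; it is not printed in the log). For '.mgr', 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337, exactly the logged value. The raw target is then quantized (power-of-two steps, subject to per-pool minimums), which is why the tiny pools land on 1, 16, or 32. A minimal sketch of the arithmetic:

    # Reproduce the autoscaler "pg target" values from the log lines above.
    # PG_BUDGET = 3 OSDs x mon_target_pg_per_osd (assumed default of 100).
    PG_BUDGET = 3 * 100

    pools = {
        # pool: (capacity_ratio, bias) -- copied verbatim from the log
        ".mgr": (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root": (2.5436283128215145e-07, 1.0),
        "default.rgw.log": (2.1620840658982875e-06, 1.0),
        "default.rgw.meta": (1.2718141564107572e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")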
Dec 03 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:21:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 03 01:21:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 01:21:34 compute-0 blissful_thompson[223651]: {
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:     "0": [
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:         {
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "devices": [
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "/dev/loop3"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             ],
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_name": "ceph_lv0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_size": "21470642176",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "name": "ceph_lv0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "tags": {
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.crush_device_class": "",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.encrypted": "0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osd_id": "0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.type": "block",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.vdo": "0"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             },
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "type": "block",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "vg_name": "ceph_vg0"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:         }
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:     ],
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:     "1": [
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:         {
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "devices": [
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "/dev/loop4"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             ],
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_name": "ceph_lv1",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_size": "21470642176",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "name": "ceph_lv1",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "tags": {
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.crush_device_class": "",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.encrypted": "0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osd_id": "1",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.type": "block",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.vdo": "0"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             },
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "type": "block",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "vg_name": "ceph_vg1"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:         }
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:     ],
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:     "2": [
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:         {
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "devices": [
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "/dev/loop5"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             ],
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_name": "ceph_lv2",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_size": "21470642176",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "name": "ceph_lv2",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "tags": {
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.cluster_name": "ceph",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.crush_device_class": "",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.encrypted": "0",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osd_id": "2",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.type": "block",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:                 "ceph.vdo": "0"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             },
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "type": "block",
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:             "vg_name": "ceph_vg2"
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:         }
Dec 03 01:21:34 compute-0 blissful_thompson[223651]:     ]
Dec 03 01:21:34 compute-0 blissful_thompson[223651]: }
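This closes the JSON document that blissful_thompson (the ceph-volume run dispatched by sudo[223451] at 01:21:32) wrote to stdout: an object keyed by OSD id ("0", "1", "2"), each value a list of logical-volume records whose ceph.* LVM tags identify the cluster fsid, the OSD fsid, and the backing device (/dev/loop3 through /dev/loop5 here). A minimal sketch of consuming such output — the direct ceph-volume invocation is an assumption; in this cephadm deployment the command actually runs inside a ceph container, as the surrounding podman events show:

    import json, subprocess

    # Map OSD id -> logical volume, backing device, and OSD fsid from
    # "ceph-volume lvm list --format json" output like the block above.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])

The "raw list --format json" call that follows at 01:21:35 (sudo[223748]) is the companion query: it enumerates raw block devices rather than LVM metadata.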
Dec 03 01:21:34 compute-0 systemd[1]: libpod-2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd.scope: Deactivated successfully.
Dec 03 01:21:34 compute-0 podman[223636]: 2025-12-03 01:21:34.526361841 +0000 UTC m=+1.123034181 container died 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:21:34 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 03 01:21:34 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 03 01:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7-merged.mount: Deactivated successfully.
Dec 03 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 03 01:21:34 compute-0 podman[223636]: 2025-12-03 01:21:34.630787372 +0000 UTC m=+1.227459712 container remove 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:21:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 03 01:21:34 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
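Each autoscaler pg_num change appears in the audit channel as a matched pair: a "dispatch" when mgr.compute-0.rysove submits the osd pool set mon_command, then a "finished" once the monitor commits a new osdmap epoch (e51 -> e52 for .rgw.root above). The same change can be issued by hand with the ceph CLI; a minimal sketch (assumes an admin keyring is available on the node):

    import subprocess

    # Hand-issued equivalent of the audited mon_command above:
    # raise pg_num on the .rgw.root pool to 32.
    subprocess.run(
        ["ceph", "osd", "pool", "set", ".rgw.root", "pg_num", "32"],
        check=True,
    )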
Dec 03 01:21:34 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 233c150f-04e0-477a-98e5-41621722b9d6 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 03 01:21:34 compute-0 ceph-mon[192821]: 7.16 scrub starts
Dec 03 01:21:34 compute-0 ceph-mon[192821]: 7.16 scrub ok
Dec 03 01:21:34 compute-0 ceph-mon[192821]: 5.10 scrub starts
Dec 03 01:21:34 compute-0 ceph-mon[192821]: 5.10 scrub ok
Dec 03 01:21:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:34 compute-0 ceph-mon[192821]: 6.19 scrub starts
Dec 03 01:21:34 compute-0 ceph-mon[192821]: 6.19 scrub ok
Dec 03 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:21:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:34 compute-0 systemd[1]: libpod-conmon-2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd.scope: Deactivated successfully.
Dec 03 01:21:34 compute-0 sudo[223451]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:34 compute-0 sudo[223672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:34 compute-0 sudo[223672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:34 compute-0 sudo[223672]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:34 compute-0 sudo[223698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:21:34 compute-0 sudo[223698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:34 compute-0 sudo[223698]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:35 compute-0 sudo[223723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:35 compute-0 sudo[223723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:35 compute-0 sudo[223723]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 03 01:21:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 03 01:21:35 compute-0 sudo[223748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:21:35 compute-0 sudo[223748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 03 01:21:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 03 01:21:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 03 01:21:35 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 35647f5d-0578-4b5f-a725-29df3e74f44a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 03 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:21:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:35 compute-0 ceph-mon[192821]: pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 01:21:35 compute-0 ceph-mon[192821]: 7.17 scrub starts
Dec 03 01:21:35 compute-0 ceph-mon[192821]: 7.17 scrub ok
Dec 03 01:21:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:35 compute-0 ceph-mon[192821]: osdmap e52: 3 total, 3 up, 3 in
Dec 03 01:21:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:35 compute-0 ceph-mon[192821]: 6.1a scrub starts
Dec 03 01:21:35 compute-0 ceph-mon[192821]: 6.1a scrub ok
Dec 03 01:21:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 03 01:21:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 03 01:21:35 compute-0 podman[223811]: 2025-12-03 01:21:35.893208566 +0000 UTC m=+0.089744726 container create 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:21:35 compute-0 podman[223811]: 2025-12-03 01:21:35.855440995 +0000 UTC m=+0.051977235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:35 compute-0 systemd[1]: Started libpod-conmon-1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db.scope.
Dec 03 01:21:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.047956766 +0000 UTC m=+0.244493016 container init 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.063403153 +0000 UTC m=+0.259939343 container start 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.070073634 +0000 UTC m=+0.266609824 container attach 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:21:36 compute-0 heuristic_williamson[223826]: 167 167
Dec 03 01:21:36 compute-0 systemd[1]: libpod-1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db.scope: Deactivated successfully.
Dec 03 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.075156711 +0000 UTC m=+0.271692941 container died 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:21:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-560e9db6b0dee563d0f4501391334332f860c2d389156b89f9900bd414f76011-merged.mount: Deactivated successfully.
Dec 03 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.158710038 +0000 UTC m=+0.355246218 container remove 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:36 compute-0 systemd[1]: libpod-conmon-1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db.scope: Deactivated successfully.
Dec 03 01:21:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Dec 03 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.438275891 +0000 UTC m=+0.110917318 container create e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.3797582 +0000 UTC m=+0.052399667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:21:36 compute-0 systemd[1]: Started libpod-conmon-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope.
Dec 03 01:21:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.635973312 +0000 UTC m=+0.308614769 container init e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.656725282 +0000 UTC m=+0.329366709 container start e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.663196657 +0000 UTC m=+0.335838134 container attach e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 03 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 03 01:21:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 03 01:21:36 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 2025b0f0-f11a-46d2-b151-a6ffa9da5e6a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 03 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec 03 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:36 compute-0 ceph-mon[192821]: osdmap e53: 3 total, 3 up, 3 in
Dec 03 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:36 compute-0 ceph-mon[192821]: 5.17 scrub starts
Dec 03 01:21:36 compute-0 ceph-mon[192821]: 5.17 scrub ok
Dec 03 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:36 compute-0 ceph-mon[192821]: osdmap e54: 3 total, 3 up, 3 in
Dec 03 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[9.0( v 51'584 (0'0,51'584] local-lis/les=45/46 n=209 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.293481827s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 51'583 mlcod 51'583 active pruub 116.351562500s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=43/44 n=4 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=14.271992683s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 44'3 active pruub 122.331710815s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=14.271992683s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 unknown pruub 122.331710815s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[9.0( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.293481827s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 51'583 mlcod 0'0 unknown pruub 116.351562500s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 03 01:21:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 03 01:21:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 03 01:21:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 03 01:21:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 03 01:21:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 03 01:21:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 14b2a7ad-382c-42a9-a127-e60085bd2d5d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 233c150f-04e0-477a-98e5-41621722b9d6 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 233c150f-04e0-477a-98e5-41621722b9d6 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 35647f5d-0578-4b5f-a725-29df3e74f44a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 35647f5d-0578-4b5f-a725-29df3e74f44a (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 2025b0f0-f11a-46d2-b151-a6ffa9da5e6a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 2025b0f0-f11a-46d2-b151-a6ffa9da5e6a (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 14b2a7ad-382c-42a9-a127-e60085bd2d5d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 03 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 14b2a7ad-382c-42a9-a127-e60085bd2d5d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 03 01:21:37 compute-0 ceph-mon[192821]: pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Dec 03 01:21:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 03 01:21:37 compute-0 ceph-mon[192821]: 6.1b scrub starts
Dec 03 01:21:37 compute-0 ceph-mon[192821]: 6.1b scrub ok
Dec 03 01:21:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 03 01:21:37 compute-0 ceph-mon[192821]: osdmap e55: 3 total, 3 up, 3 in
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.15( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.14( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.17( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.16( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.11( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.3( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.2( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.d( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.c( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.f( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.9( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.b( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.e( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.a( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.8( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.6( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.7( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.4( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.5( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1a( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.18( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.19( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1e( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1f( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1c( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1d( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.13( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.12( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1b( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.10( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.0( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 51'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.2( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.a( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.4( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1a( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.14( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.12( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.10( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:37 compute-0 determined_jang[223865]: {
Dec 03 01:21:37 compute-0 determined_jang[223865]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "osd_id": 2,
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "type": "bluestore"
Dec 03 01:21:37 compute-0 determined_jang[223865]:     },
Dec 03 01:21:37 compute-0 determined_jang[223865]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "osd_id": 1,
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "type": "bluestore"
Dec 03 01:21:37 compute-0 determined_jang[223865]:     },
Dec 03 01:21:37 compute-0 determined_jang[223865]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "osd_id": 0,
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:21:37 compute-0 determined_jang[223865]:         "type": "bluestore"
Dec 03 01:21:37 compute-0 determined_jang[223865]:     }
Dec 03 01:21:37 compute-0 determined_jang[223865]: }
Dec 03 01:21:37 compute-0 systemd[1]: libpod-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope: Deactivated successfully.
Dec 03 01:21:37 compute-0 podman[223849]: 2025-12-03 01:21:37.869228129 +0000 UTC m=+1.541869586 container died e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:21:37 compute-0 systemd[1]: libpod-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope: Consumed 1.202s CPU time.
Dec 03 01:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2-merged.mount: Deactivated successfully.
Dec 03 01:21:37 compute-0 podman[223849]: 2025-12-03 01:21:37.9780997 +0000 UTC m=+1.650741127 container remove e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:21:38 compute-0 systemd[1]: libpod-conmon-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope: Deactivated successfully.
Dec 03 01:21:38 compute-0 sudo[223748]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d8d632f8-d5c5-43c9-ab07-ddc05fd6c2ff does not exist
Dec 03 01:21:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 661c6b45-9ad5-4e4a-8289-2f6b7280fa8a does not exist
Dec 03 01:21:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 03 01:21:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 03 01:21:38 compute-0 sudo[223911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:21:38 compute-0 sudo[223911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:38 compute-0 sudo[223911]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v141: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 03 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:38 compute-0 sudo[223942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:21:38 compute-0 podman[223935]: 2025-12-03 01:21:38.343398149 +0000 UTC m=+0.127746642 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:21:38 compute-0 sudo[223942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:21:38 compute-0 sudo[223942]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:38 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 16 completed events
Dec 03 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 03 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 03 01:21:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 03 01:21:38 compute-0 ceph-mon[192821]: 7.19 scrub starts
Dec 03 01:21:38 compute-0 ceph-mon[192821]: 7.19 scrub ok
Dec 03 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:38 compute-0 ceph-mon[192821]: 5.1e scrub starts
Dec 03 01:21:38 compute-0 ceph-mon[192821]: 5.1e scrub ok
Dec 03 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:21:38 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 03 01:21:38 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 03 01:21:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 03 01:21:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 03 01:21:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 03 01:21:39 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=47/48 n=8 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=15.955306053s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 51'63 active pruub 119.121765137s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:39 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=15.955306053s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 unknown pruub 119.121765137s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:39 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 56 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=49/50 n=2 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.681290627s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 51'1 active pruub 120.458557129s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:39 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 56 pg[11.0( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.681290627s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 unknown pruub 120.458557129s@ mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:39 compute-0 ceph-mon[192821]: pgmap v141: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:39 compute-0 ceph-mon[192821]: 7.1d scrub starts
Dec 03 01:21:39 compute-0 ceph-mon[192821]: 7.1d scrub ok
Dec 03 01:21:39 compute-0 ceph-mon[192821]: 5.1b scrub starts
Dec 03 01:21:39 compute-0 ceph-mon[192821]: 5.1b scrub ok
Dec 03 01:21:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 03 01:21:39 compute-0 ceph-mon[192821]: osdmap e56: 3 total, 3 up, 3 in
Dec 03 01:21:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 03 01:21:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 03 01:21:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.17( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.16( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.15( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.14( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.13( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.2( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.8( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.5( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.4( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.7( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.6( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.3( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.10( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.11( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.18( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.12( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.19( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.16( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.13( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.5( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.7( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 03 01:21:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1e scrub ok
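[editor's note] The pgmap summaries in this window (v141: 259 pgs with 62 unknown; v144 above: 321 pgs with 124 unknown, 197 active+clean) track the newly created pools' placement groups moving through peering while the per-PG "transitioning to Primary" / "Activating complete" lines fire. A small self-contained parser, offered only as a reading convenience (the regex is an assumption about the summary format seen in these lines, not a Ceph API):

    # Illustrative parser for "pgmap vN: X pgs: ..." journal lines like the ones above.
    import re

    PGMAP_RE = re.compile(r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);")

    def pg_states(line: str) -> dict[str, int]:
        m = PGMAP_RE.search(line)
        if not m:
            return {}
        counts = {}
        for part in m.group("states").split(","):
            n, state = part.strip().split(" ", 1)
            counts[state] = int(n)
        return counts

    line = ("pgmap v144: 321 pgs: 124 unknown, 197 active+clean; "
            "456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail")
    print(pg_states(line))  # {'unknown': 124, 'active+clean': 197}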
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.968 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.969 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
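The cycle above follows a fixed pattern: for each pollster the agent runs the local_instances discovery, and because libvirt reports no instances on this host, every meter is skipped and then marked finished. A minimal sketch of that skip pattern, using hypothetical stand-ins rather than ceilometer's real AgentManager:

```python
# Minimal sketch of the skip pattern visible in the log: run discovery
# once per cycle, publish nothing when no resources are found.
# Hypothetical stand-in, not ceilometer's actual implementation.
from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # On this host the libvirt discovery returned an empty list,
    # hence every "Skip pollster ..." line above.
    return []

def run_pollster(name, get_samples, discovery_cache):
    resources = discovery_cache.setdefault('local_instances',
                                           discover_local_instances())
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return []
    return list(get_samples(resources))

def polling_cycle(pollsters):
    discovery_cache = {}  # shared per cycle, like the discovery cache above
    with ThreadPoolExecutor() as executor:
        futures = {name: executor.submit(run_pollster, name, fn, discovery_cache)
                   for name, fn in pollsters.items()}
    for name, fut in futures.items():
        fut.result()
        print(f"Finished processing pollster [{name}].")

polling_cycle({'cpu': lambda res: [], 'memory.usage': lambda res: []})
```

Running it prints the same Skip/Finished pairing seen above for each meter name.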
Dec 03 01:21:41 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 03 01:21:41 compute-0 ceph-mon[192821]: osdmap e57: 3 total, 3 up, 3 in
Dec 03 01:21:41 compute-0 ceph-mon[192821]: 7.1e scrub starts
Dec 03 01:21:41 compute-0 ceph-mon[192821]: 7.1e scrub ok
Dec 03 01:21:41 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 03 01:21:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 03 01:21:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 03 01:21:42 compute-0 ceph-mon[192821]: pgmap v144: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:42 compute-0 ceph-mon[192821]: 2.18 scrub starts
Dec 03 01:21:42 compute-0 ceph-mon[192821]: 2.18 scrub ok
Dec 03 01:21:42 compute-0 ceph-mon[192821]: 5.1c scrub starts
Dec 03 01:21:42 compute-0 ceph-mon[192821]: 5.1c scrub ok
Dec 03 01:21:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v145: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 03 01:21:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1d scrub ok
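The scrub traffic is easy to mine: each placement group logs a "scrub starts" / "scrub ok" (or deep-scrub) pair, and the mon echoes the same events a second time. A hedged sketch that pairs them up and reports elapsed time, assuming the journal has been saved to a file (the name compute-0-journal.log is an assumption of the sketch):

```python
# Sketch: pair "scrub starts"/"scrub ok" events per PG and measure the
# gap between them. Timestamps here have only second resolution, and
# mon-relayed duplicates simply re-time the same pair.
import re
from datetime import datetime

LINE = re.compile(
    r'^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) \S+ ceph-\w+\[\d+\]: '
    r'.*?(?P<pg>\d+\.[0-9a-f]+) (?P<kind>deep-scrub|scrub) (?P<ev>starts|ok)'
)

def scrub_durations(path, year=2025):
    started = {}
    with open(path) as fh:
        for line in fh:
            m = LINE.match(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
            key = (m['pg'], m['kind'])
            if m['ev'] == 'starts':
                started[key] = ts
            elif key in started:
                yield m['pg'], m['kind'], (ts - started.pop(key)).total_seconds()

for pg, kind, secs in scrub_durations('compute-0-journal.log'):
    print(f"{pg} {kind}: {secs:.0f}s")
```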
Dec 03 01:21:42 compute-0 podman[223982]: 2025-12-03 01:21:42.898618889 +0000 UTC m=+0.151565085 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, vcs-type=git)
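The kepler health_status event embeds the whole container definition under config_data, so the launch can be reconstructed from the log alone. A rough, illustrative reconstruction of the equivalent podman invocation, built in Python; this is not the exact command edpm_ansible ran, the healthcheck wiring is simplified, and with --net host the published port is redundant:

```python
# Rough reconstruction of the kepler container launch implied by the
# config_data in the health_status event above. Illustrative sketch only.
import subprocess

env = {
    'ENABLE_GPU': 'true',
    'EXPOSE_CONTAINER_METRICS': 'true',
    'ENABLE_PROCESS_METRICS': 'true',
    'EXPOSE_VM_METRICS': 'true',
    'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false',
    'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1',
}
volumes = [
    '/lib/modules:/lib/modules:ro',
    '/run/libvirt:/run/libvirt:shared,ro',
    '/sys:/sys',
    '/proc:/proc',
    '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z',
]
cmd = ['podman', 'run', '-d', '--name', 'kepler', '--privileged',
       '--restart', 'always', '--net', 'host', '-p', '8888:8888',
       '--health-cmd', '/openstack/healthcheck kepler']
for k, v in env.items():
    cmd += ['-e', f'{k}={v}']
for vol in volumes:
    cmd += ['-v', vol]
# '-v=2' is kepler's own verbosity flag, passed as the container command.
cmd += ['quay.io/sustainable_computing_io/kepler:release-0.7.12', '-v=2']
subprocess.run(cmd, check=True)
```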
Dec 03 01:21:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Dec 03 01:21:43 compute-0 ceph-mon[192821]: 5.1d scrub starts
Dec 03 01:21:43 compute-0 ceph-mon[192821]: 5.1d scrub ok
Dec 03 01:21:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Dec 03 01:21:43 compute-0 sshd-session[224002]: Accepted publickey for zuul from 192.168.122.30 port 49242 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:21:43 compute-0 systemd-logind[800]: New session 40 of user zuul.
Dec 03 01:21:43 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 03 01:21:43 compute-0 sshd-session[224002]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:21:43 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 03 01:21:43 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 03 01:21:44 compute-0 sshd-session[223477]: Connection closed by authenticating user root 193.32.162.157 port 37114 [preauth]
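Two kinds of sshd events sit close together here: an accepted publickey login for zuul from the management network, and an external root connection dropped at preauth. A small sketch that tallies both per user and source address from a saved journal excerpt (the file name is again an assumption):

```python
# Sketch: count accepted logins and pre-auth failures per (user, ip)
# from the sshd-session lines in a saved journal excerpt.
import re
from collections import Counter

ACCEPTED = re.compile(r'sshd-session\[\d+\]: Accepted (\S+) for (\S+) from (\S+)')
PREAUTH  = re.compile(r'sshd-session\[\d+\]: Connection closed by '
                      r'authenticating user (\S+) (\S+) port \d+ \[preauth\]')

accepted, preauth = Counter(), Counter()
with open('compute-0-journal.log') as fh:
    for line in fh:
        if (m := ACCEPTED.search(line)):
            accepted[(m[2], m[3])] += 1   # (user, source ip)
        elif (m := PREAUTH.search(line)):
            preauth[(m[1], m[2])] += 1    # (user, source ip)

print('accepted:', dict(accepted))
print('preauth failures:', dict(preauth))
```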
Dec 03 01:21:44 compute-0 ceph-mon[192821]: pgmap v145: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:44 compute-0 ceph-mon[192821]: 2.16 deep-scrub starts
Dec 03 01:21:44 compute-0 ceph-mon[192821]: 2.16 deep-scrub ok
Dec 03 01:21:44 compute-0 ceph-mon[192821]: 5.1f scrub starts
Dec 03 01:21:44 compute-0 ceph-mon[192821]: 5.1f scrub ok
Dec 03 01:21:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec 03 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 03 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
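Each audit entry above maps one-to-one onto a mon_command JSON payload, so the same pool change the mgr (mgr.compute-0.rysove) is dispatching can be replayed from the CLI as `ceph osd pool set .rgw.root pgp_num_actual 32`, or via librados. A sketch using the rados Python binding (python3-rados); the conffile path and client name are assumptions:

```python
# Sketch: issue the same mon command seen in the audit log via the
# librados Python binding.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
cluster.connect()
try:
    cmd = {"prefix": "osd pool set", "pool": ".rgw.root",
           "var": "pgp_num_actual", "val": "32"}
    # mon_command takes the command as a JSON string plus an input buffer
    # and returns (return code, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print(ret, outs)  # 0 on success
finally:
    cluster.shutdown()
```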
Dec 03 01:21:44 compute-0 python3.9[224156]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
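The ansible-ansible.legacy.setup invocation is plain fact gathering kicked off over the zuul session. Roughly the equivalent ad-hoc run, sketched with an assumed inventory file and host pattern:

```python
# Sketch: the ad-hoc equivalent of the setup-module call logged above.
# Inventory path and host pattern are assumptions.
import subprocess

subprocess.run(['ansible', 'compute-0', '-i', 'inventory.yml',
                '-m', 'ansible.legacy.setup',
                '-a', 'gather_subset=all gather_timeout=10'],
               check=True)
```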
Dec 03 01:21:45 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 03 01:21:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 03 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 03 01:21:45 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 03 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:21:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 03 01:21:45 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 03 01:21:45 compute-0 ceph-mon[192821]: pgmap v146: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 03 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:21:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 03 01:21:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 03 01:21:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 03 01:21:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 03 01:21:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 03 01:21:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 03 01:21:46 compute-0 ceph-mon[192821]: 2.13 scrub starts
Dec 03 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 03 01:21:46 compute-0 ceph-mon[192821]: 2.13 scrub ok
Dec 03 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:21:46 compute-0 ceph-mon[192821]: osdmap e58: 3 total, 3 up, 3 in
Dec 03 01:21:46 compute-0 ceph-mon[192821]: 2.15 scrub starts
Dec 03 01:21:46 compute-0 ceph-mon[192821]: 2.15 scrub ok
Dec 03 01:21:46 compute-0 ceph-mon[192821]: 4.18 scrub starts
Dec 03 01:21:46 compute-0 ceph-mon[192821]: 4.18 scrub ok
Dec 03 01:21:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Dec 03 01:21:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.d( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370283127s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.194587708s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370039940s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.194450378s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385140419s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.209640503s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369914055s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.194450378s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385062218s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.209640503s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384991646s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.209678650s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385028839s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.209892273s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385006905s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.209892273s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.d( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369626045s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.194587708s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384907722s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210037231s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384888649s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210037231s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384524345s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210105896s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384474754s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210105896s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384259224s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210113525s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384223938s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210113525s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384879112s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.209678650s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383697510s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210243225s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383665085s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210243225s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383474350s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210258484s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383479118s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210304260s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383446693s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210304260s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383396149s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210258484s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383208275s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210273743s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383172989s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210273743s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.9( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383893013s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.211059570s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.9( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383845329s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.211059570s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.e( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383638382s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.211250305s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382546425s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210205078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.e( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383591652s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.211250305s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382506371s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210205078s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384715080s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.212638855s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384687424s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.212638855s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382074356s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210029602s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382016182s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210029602s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.4( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383605003s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.211791992s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383579254s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.211791992s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.14( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382976532s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.211723328s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.14( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382926941s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.211723328s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.388989449s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514541626s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.388964653s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514541626s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964451790s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.090469360s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964327812s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.090469360s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953183174s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.079544067s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953125000s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.079544067s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.15( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383259773s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.212539673s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383138657s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.212570190s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.15( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383132935s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.212539673s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.952919006s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.079666138s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383103371s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.212570190s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964179993s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.090972900s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382589340s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.212608337s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382558823s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.212608337s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964162827s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.090972900s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.952864647s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.079666138s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376405716s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.503578186s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376358986s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.503578186s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.15( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.375605583s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.503540039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.372826576s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.503540039s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955656052s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.091552734s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955075264s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.091552734s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.378023148s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514656067s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.377953529s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514656067s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.377737045s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514732361s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.377631187s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514732361s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954754829s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.091903687s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955168724s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.091079712s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954708099s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.091903687s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953830719s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.091079712s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955630302s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.092971802s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955589294s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092971802s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954670906s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.092498779s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954593658s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.092498779s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376757622s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514831543s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954345703s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092498779s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376403809s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514778137s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376577377s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514831543s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376350403s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514778137s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376034737s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514846802s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376001358s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514846802s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953623772s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.092971802s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953594208s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092971802s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953457832s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.093017578s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953430176s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.093017578s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.375778198s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515495300s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.375752449s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515495300s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.15( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954625130s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092498779s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953042030s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094512939s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.952994347s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094512939s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954445839s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.092956543s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950722694s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092956543s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.372582436s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514900208s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951845169s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094528198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951808929s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094528198s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951431274s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094528198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951395988s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094528198s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.371561050s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514907837s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.371533394s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514907837s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951010704s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094573975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950970650s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094573975s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950576782s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094573975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950470924s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094573975s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.2( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370937347s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515220642s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370909691s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515220642s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949990273s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094589233s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949938774s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094589233s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370164871s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514961243s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370131493s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514961243s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949840546s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094924927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949512482s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094619751s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949478149s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094619751s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949789047s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094924927s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370048523s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515419006s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369997025s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515419006s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369194031s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514900208s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948913574s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094680786s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948868752s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094680786s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948781013s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094726562s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948717117s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094726562s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369411469s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515533447s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369376183s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515533447s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.d( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.2( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368890762s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515296936s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368840218s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515296936s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948098183s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094726562s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368616104s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515251160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368573189s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515251160s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948044777s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094726562s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948143959s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094848633s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948064804s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094863892s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948086739s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094848633s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948025703s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094863892s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368432999s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515319824s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368400574s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515319824s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947911263s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095062256s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947873116s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095062256s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368158340s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515388489s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368126869s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515388489s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947536469s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094879150s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947506905s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094879150s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947427750s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094924927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367975235s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515487671s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947430611s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094970703s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367946625s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515487671s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947390556s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094924927s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947400093s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094970703s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367665291s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515464783s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367633820s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515464783s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367336273s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515449524s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946849823s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094985962s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367304802s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515449524s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946805954s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094985962s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946585655s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095046997s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946508408s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095046997s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946412086s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.095062256s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.e( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946379662s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095062256s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946297646s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095077515s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946269035s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095077515s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.366655350s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515548706s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.366628647s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515548706s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.945642471s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.095062256s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.945541382s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095062256s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.365485191s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515541077s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.365409851s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515541077s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.943103790s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095077515s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.942958832s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095077515s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.9( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.9( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.18( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.1f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.11( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.1d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.1c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.6( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.1a( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.11( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Dec 03 01:21:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Dec 03 01:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 03 01:21:47 compute-0 ceph-mon[192821]: 5.14 scrub starts
Dec 03 01:21:47 compute-0 ceph-mon[192821]: 5.14 scrub ok
Dec 03 01:21:47 compute-0 ceph-mon[192821]: pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 03 01:21:47 compute-0 ceph-mon[192821]: 4.1b deep-scrub starts
Dec 03 01:21:47 compute-0 ceph-mon[192821]: 4.1b deep-scrub ok
Dec 03 01:21:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 03 01:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 03 01:21:47 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 sudo[224395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inlracunxqbwujonfdxcpnbybppmmoxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724906.5260978-32-133763322707022/AnsiballZ_command.py'
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:47 compute-0 sudo[224395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.14( v 57'65 lc 51'54 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 podman[224355]: 2025-12-03 01:21:47.259661364 +0000 UTC m=+0.130818035 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=58/59 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=58/59 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=58/59 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=58/59 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.e( v 57'65 lc 51'48 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.9( v 57'65 lc 51'56 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.d( v 57'65 lc 51'50 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.15( v 57'65 lc 51'46 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:47 compute-0 python3.9[224406]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:21:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 03 01:21:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 03 01:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 03 01:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 03 01:21:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 03 01:21:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 03 01:21:48 compute-0 ceph-mon[192821]: osdmap e59: 3 total, 3 up, 3 in
Dec 03 01:21:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Dec 03 01:21:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 03 01:21:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 03 01:21:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 03 01:21:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 03 01:21:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 03 01:21:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 03 01:21:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 03 01:21:49 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 03 01:21:49 compute-0 ceph-mon[192821]: 5.11 scrub starts
Dec 03 01:21:49 compute-0 ceph-mon[192821]: 5.11 scrub ok
Dec 03 01:21:49 compute-0 ceph-mon[192821]: osdmap e60: 3 total, 3 up, 3 in
Dec 03 01:21:49 compute-0 ceph-mon[192821]: pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:21:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 03 01:21:49 compute-0 ceph-mon[192821]: 5.12 scrub starts
Dec 03 01:21:49 compute-0 ceph-mon[192821]: 5.12 scrub ok
Dec 03 01:21:49 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 03 01:21:50 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 03 01:21:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v153: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 1 objects/s recovering
Dec 03 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec 03 01:21:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 03 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 03 01:21:50 compute-0 ceph-mon[192821]: 2.19 scrub starts
Dec 03 01:21:50 compute-0 ceph-mon[192821]: 2.19 scrub ok
Dec 03 01:21:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 03 01:21:50 compute-0 ceph-mon[192821]: osdmap e61: 3 total, 3 up, 3 in
Dec 03 01:21:50 compute-0 ceph-mon[192821]: 4.1a scrub starts
Dec 03 01:21:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 03 01:21:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 03 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 03 01:21:50 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 03 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.857470512s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537094116s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.857365608s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537094116s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.855461121s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537475586s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.854380608s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537475586s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:50 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 03 01:21:50 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 03 01:21:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 03 01:21:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 03 01:21:51 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 03 01:21:51 compute-0 ceph-mon[192821]: 4.1a scrub ok
Dec 03 01:21:51 compute-0 ceph-mon[192821]: pgmap v153: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 1 objects/s recovering
Dec 03 01:21:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 03 01:21:51 compute-0 ceph-mon[192821]: osdmap e62: 3 total, 3 up, 3 in
Dec 03 01:21:51 compute-0 ceph-mon[192821]: 6.f scrub starts
Dec 03 01:21:51 compute-0 ceph-mon[192821]: 6.f scrub ok
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.859754562s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537872314s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.859655380s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537872314s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857912064s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537689209s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857936859s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537551880s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857842445s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537689209s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857610703s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537551880s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857017517s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537139893s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856969833s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537139893s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857148170s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537719727s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857081413s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537719727s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856276512s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537124634s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856222153s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537124634s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856063843s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537734985s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855966568s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537734985s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855431557s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537811279s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855373383s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537811279s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855633736s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537628174s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855039597s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537628174s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.854851723s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537811279s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.854633331s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537811279s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.851238251s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537658691s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.850149155s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537658691s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Dec 03 01:21:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Dec 03 01:21:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 6 active+remapped, 7 active+recovery_wait+remapped, 1 active+recovering+remapped, 307 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43/247 objects misplaced (17.409%); 450 B/s, 14 objects/s recovering
Dec 03 01:21:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 03 01:21:52 compute-0 ceph-mon[192821]: osdmap e63: 3 total, 3 up, 3 in
Dec 03 01:21:52 compute-0 ceph-mon[192821]: 2.f deep-scrub starts
Dec 03 01:21:52 compute-0 ceph-mon[192821]: 2.f deep-scrub ok
Dec 03 01:21:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 03 01:21:52 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819808960s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537933350s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819722176s) [0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537933350s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819305420s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537765503s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819208145s) [0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537765503s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.817730904s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537902832s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.817616463s) [0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537902832s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:52 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 03 01:21:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 03 01:21:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Dec 03 01:21:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Dec 03 01:21:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Dec 03 01:21:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Dec 03 01:21:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 03 01:21:53 compute-0 ceph-mon[192821]: pgmap v156: 321 pgs: 6 active+remapped, 7 active+recovery_wait+remapped, 1 active+recovering+remapped, 307 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43/247 objects misplaced (17.409%); 450 B/s, 14 objects/s recovering
Dec 03 01:21:53 compute-0 ceph-mon[192821]: osdmap e64: 3 total, 3 up, 3 in
Dec 03 01:21:53 compute-0 ceph-mon[192821]: 5.15 deep-scrub starts
Dec 03 01:21:53 compute-0 ceph-mon[192821]: 5.15 deep-scrub ok
Dec 03 01:21:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 03 01:21:53 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 03 01:21:53 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 65 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:53 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 65 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=64/65 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:53 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 65 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:21:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Dec 03 01:21:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Dec 03 01:21:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 6 active+remapped, 7 active+recovery_wait+remapped, 1 active+recovering+remapped, 307 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43/247 objects misplaced (17.409%); 230 B/s, 12 objects/s recovering
Dec 03 01:21:54 compute-0 ceph-mon[192821]: 2.1b scrub starts
Dec 03 01:21:54 compute-0 ceph-mon[192821]: 2.1b scrub ok
Dec 03 01:21:54 compute-0 ceph-mon[192821]: 4.e deep-scrub starts
Dec 03 01:21:54 compute-0 ceph-mon[192821]: 4.e deep-scrub ok
Dec 03 01:21:54 compute-0 ceph-mon[192821]: osdmap e65: 3 total, 3 up, 3 in
Dec 03 01:21:54 compute-0 ceph-mon[192821]: 5.13 deep-scrub starts
Dec 03 01:21:54 compute-0 ceph-mon[192821]: 5.13 deep-scrub ok
Dec 03 01:21:54 compute-0 sudo[224395]: pam_unix(sudo:session): session closed for user root
Dec 03 01:21:55 compute-0 sshd-session[224005]: Connection closed by 192.168.122.30 port 49242
Dec 03 01:21:55 compute-0 sshd-session[224002]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:21:55 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 03 01:21:55 compute-0 systemd[1]: session-40.scope: Consumed 9.956s CPU time.
Dec 03 01:21:55 compute-0 systemd-logind[800]: Session 40 logged out. Waiting for processes to exit.
Dec 03 01:21:55 compute-0 systemd-logind[800]: Removed session 40.
Dec 03 01:21:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:21:55 compute-0 ceph-mon[192821]: pgmap v159: 321 pgs: 6 active+remapped, 7 active+recovery_wait+remapped, 1 active+recovering+remapped, 307 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43/247 objects misplaced (17.409%); 230 B/s, 12 objects/s recovering
Dec 03 01:21:56 compute-0 sshd-session[224105]: Connection closed by authenticating user root 193.32.162.157 port 42334 [preauth]
Dec 03 01:21:56 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 03 01:21:56 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 03 01:21:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 376 B/s, 17 objects/s recovering
Dec 03 01:21:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Dec 03 01:21:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 03 01:21:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 03 01:21:56 compute-0 ceph-mon[192821]: 2.11 scrub starts
Dec 03 01:21:56 compute-0 ceph-mon[192821]: 2.11 scrub ok
Dec 03 01:21:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 03 01:21:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 03 01:21:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 03 01:21:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 03 01:21:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 03 01:21:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 03 01:21:57 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 03 01:21:57 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 03 01:21:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 03 01:21:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 03 01:21:57 compute-0 ceph-mon[192821]: pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 376 B/s, 17 objects/s recovering
Dec 03 01:21:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 03 01:21:57 compute-0 ceph-mon[192821]: osdmap e66: 3 total, 3 up, 3 in
Dec 03 01:21:57 compute-0 ceph-mon[192821]: 5.16 scrub starts
Dec 03 01:21:57 compute-0 ceph-mon[192821]: 5.16 scrub ok
Dec 03 01:21:57 compute-0 ceph-mon[192821]: 5.7 scrub starts
Dec 03 01:21:57 compute-0 ceph-mon[192821]: 5.7 scrub ok
Dec 03 01:21:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 03 01:21:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 03 01:21:58 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Dec 03 01:21:58 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Dec 03 01:21:58 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 03 01:21:58 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 03 01:21:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 9 objects/s recovering
Dec 03 01:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Dec 03 01:21:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 03 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 03 01:21:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 03 01:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 03 01:21:58 compute-0 ceph-mon[192821]: 6.8 scrub starts
Dec 03 01:21:58 compute-0 ceph-mon[192821]: 6.8 scrub ok
Dec 03 01:21:58 compute-0 ceph-mon[192821]: 5.9 scrub starts
Dec 03 01:21:58 compute-0 ceph-mon[192821]: 5.9 scrub ok
Dec 03 01:21:58 compute-0 ceph-mon[192821]: 5.5 scrub starts
Dec 03 01:21:58 compute-0 ceph-mon[192821]: 5.5 scrub ok
Dec 03 01:21:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.275618553s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.091445923s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.275558472s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.091445923s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.278839111s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.095245361s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.278799057s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.095245361s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279337883s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.095932007s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279305458s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.095932007s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279449463s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.096679688s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279399872s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.096679688s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:58 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 03 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Dec 03 01:21:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Dec 03 01:21:58 compute-0 sshd-session[224465]: Received disconnect from 103.146.202.174 port 44794:11: Bye Bye [preauth]
Dec 03 01:21:58 compute-0 sshd-session[224465]: Disconnected from authenticating user root 103.146.202.174 port 44794 [preauth]
Dec 03 01:21:59 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.14 deep-scrub starts
Dec 03 01:21:59 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.14 deep-scrub ok
Dec 03 01:21:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Dec 03 01:21:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Dec 03 01:21:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 03 01:21:59 compute-0 ceph-mon[192821]: 4.a deep-scrub starts
Dec 03 01:21:59 compute-0 ceph-mon[192821]: 4.a deep-scrub ok
Dec 03 01:21:59 compute-0 ceph-mon[192821]: pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 9 objects/s recovering
Dec 03 01:21:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 03 01:21:59 compute-0 ceph-mon[192821]: osdmap e67: 3 total, 3 up, 3 in
Dec 03 01:21:59 compute-0 ceph-mon[192821]: 2.d deep-scrub starts
Dec 03 01:21:59 compute-0 ceph-mon[192821]: 2.d deep-scrub ok
Dec 03 01:21:59 compute-0 ceph-mon[192821]: 5.4 deep-scrub starts
Dec 03 01:21:59 compute-0 ceph-mon[192821]: 5.4 deep-scrub ok
Dec 03 01:21:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 03 01:21:59 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:21:59 compute-0 podman[158098]: time="2025-12-03T01:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6810 "" "Go-http-client/1.1"
Dec 03 01:22:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 4 unknown, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 9 objects/s recovering
Dec 03 01:22:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 03 01:22:00 compute-0 ceph-mon[192821]: 6.14 deep-scrub starts
Dec 03 01:22:00 compute-0 ceph-mon[192821]: 6.14 deep-scrub ok
Dec 03 01:22:00 compute-0 ceph-mon[192821]: osdmap e68: 3 total, 3 up, 3 in
Dec 03 01:22:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 03 01:22:00 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:22:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 03 01:22:01 compute-0 ceph-mon[192821]: pgmap v165: 321 pgs: 4 unknown, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 9 objects/s recovering
Dec 03 01:22:01 compute-0 ceph-mon[192821]: osdmap e69: 3 total, 3 up, 3 in
Dec 03 01:22:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 03 01:22:01 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.589989662s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.482360840s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.589857101s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.482360840s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.580208778s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.473205566s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.580095291s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.473205566s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.578581810s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.473434448s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.578310013s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.473434448s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.586289406s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.481918335s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.586196899s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.481918335s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 03 01:22:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 03 01:22:02 compute-0 sshd-session[224468]: Invalid user kyt from 34.66.72.251 port 51758
Dec 03 01:22:02 compute-0 sshd-session[224468]: Received disconnect from 34.66.72.251 port 51758:11: Bye Bye [preauth]
Dec 03 01:22:02 compute-0 sshd-session[224468]: Disconnected from invalid user kyt 34.66.72.251 port 51758 [preauth]
Dec 03 01:22:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 4 activating+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 24/247 objects misplaced (9.717%)
Dec 03 01:22:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Dec 03 01:22:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Dec 03 01:22:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 03 01:22:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 03 01:22:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 03 01:22:02 compute-0 ceph-mon[192821]: osdmap e70: 3 total, 3 up, 3 in
Dec 03 01:22:02 compute-0 ceph-mon[192821]: 2.7 scrub starts
Dec 03 01:22:02 compute-0 ceph-mon[192821]: 2.7 scrub ok
Dec 03 01:22:02 compute-0 ceph-mon[192821]: 5.3 deep-scrub starts
Dec 03 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=70/71 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 03 01:22:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 03 01:22:03 compute-0 ceph-mon[192821]: pgmap v168: 321 pgs: 4 activating+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 24/247 objects misplaced (9.717%)
Dec 03 01:22:03 compute-0 ceph-mon[192821]: 5.3 deep-scrub ok
Dec 03 01:22:03 compute-0 ceph-mon[192821]: osdmap e71: 3 total, 3 up, 3 in
Dec 03 01:22:03 compute-0 podman[224472]: 2025-12-03 01:22:03.877639855 +0000 UTC m=+0.115524352 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:22:03 compute-0 podman[224470]: 2025-12-03 01:22:03.880352298 +0000 UTC m=+0.132191902 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:22:03 compute-0 podman[224471]: 2025-12-03 01:22:03.890190514 +0000 UTC m=+0.135064440 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:22:03 compute-0 podman[224473]: 2025-12-03 01:22:03.91447394 +0000 UTC m=+0.146421977 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:22:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 39 op/s; 45 B/s, 5 objects/s recovering
Dec 03 01:22:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Dec 03 01:22:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 03 01:22:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 03 01:22:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 03 01:22:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 03 01:22:04 compute-0 ceph-mon[192821]: 6.15 scrub starts
Dec 03 01:22:04 compute-0 ceph-mon[192821]: 6.15 scrub ok
Dec 03 01:22:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.795778275s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 active pruub 154.161010742s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.795719147s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 154.161010742s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=72 pruub=10.766449928s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=51'584 mlcod 0'0 active pruub 153.134201050s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=72 pruub=10.766380310s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 153.134201050s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.809161186s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 active pruub 154.177764893s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.809054375s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 154.177764893s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=12.812279701s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=51'584 mlcod 0'0 active pruub 155.181747437s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=12.811389923s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 155.181747437s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72) [2] r=0 lpr=72 pi=[63,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72) [2] r=0 lpr=72 pi=[63,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 03 01:22:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 03 01:22:05 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[62,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[62,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=0 lpr=73 pi=[62,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=0 lpr=73 pi=[62,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:05 compute-0 ceph-mon[192821]: pgmap v170: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 39 op/s; 45 B/s, 5 objects/s recovering
Dec 03 01:22:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 03 01:22:05 compute-0 ceph-mon[192821]: osdmap e72: 3 total, 3 up, 3 in
Dec 03 01:22:05 compute-0 ceph-mon[192821]: osdmap e73: 3 total, 3 up, 3 in
Dec 03 01:22:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 5 objects/s recovering
Dec 03 01:22:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Dec 03 01:22:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 03 01:22:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 03 01:22:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 03 01:22:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 03 01:22:06 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 03 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[62,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 03 01:22:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 03 01:22:06 compute-0 ceph-mon[192821]: osdmap e74: 3 total, 3 up, 3 in
Dec 03 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910098076s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 149.096481323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910028458s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.096481323s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910104752s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 149.097000122s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910059929s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.097000122s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:06 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:06 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 03 01:22:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 03 01:22:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 03 01:22:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 03 01:22:07 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.008032799s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 51'584 active pruub 160.152435303s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.015460968s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 51'584 active pruub 160.160507202s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.015264511s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.160507202s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.014729500s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 51'584 active pruub 160.160614014s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.014652252s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.160614014s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75 pruub=15.014231682s) [2] async=[2] r=-1 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 51'584 active pruub 160.160644531s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75 pruub=15.014155388s) [2] r=-1 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.160644531s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.007957458s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.152435303s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:07 compute-0 ceph-mon[192821]: pgmap v173: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 5 objects/s recovering
Dec 03 01:22:07 compute-0 ceph-mon[192821]: 5.2 scrub starts
Dec 03 01:22:07 compute-0 ceph-mon[192821]: osdmap e75: 3 total, 3 up, 3 in
Dec 03 01:22:08 compute-0 sshd-session[224464]: Connection closed by authenticating user root 193.32.162.157 port 43858 [preauth]
Dec 03 01:22:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec 03 01:22:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 03 01:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 03 01:22:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 03 01:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 03 01:22:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 03 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:08 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 76 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:08 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 76 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:08 compute-0 sshd-session[224560]: Received disconnect from 146.190.144.138 port 45446:11: Bye Bye [preauth]
Dec 03 01:22:08 compute-0 sshd-session[224560]: Disconnected from authenticating user root 146.190.144.138 port 45446 [preauth]
Dec 03 01:22:08 compute-0 ceph-mon[192821]: 5.2 scrub ok
Dec 03 01:22:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 03 01:22:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 03 01:22:08 compute-0 ceph-mon[192821]: osdmap e76: 3 total, 3 up, 3 in
Dec 03 01:22:08 compute-0 podman[224563]: 2025-12-03 01:22:08.877076256 +0000 UTC m=+0.126305194 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 03 01:22:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 03 01:22:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 03 01:22:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 03 01:22:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 03 01:22:09 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 03 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.017930984s) [2] async=[2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 155.747756958s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.017811775s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.747756958s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.016313553s) [2] async=[2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 155.746994019s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.016211510s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.746994019s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Dec 03 01:22:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Dec 03 01:22:09 compute-0 ceph-mon[192821]: pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:09 compute-0 ceph-mon[192821]: osdmap e77: 3 total, 3 up, 3 in
Dec 03 01:22:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 5 objects/s recovering
Dec 03 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec 03 01:22:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 03 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:10 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 03 01:22:10 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 03 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 03 01:22:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 03 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 03 01:22:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 03 01:22:10 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 78 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=77/78 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:10 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 78 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=77/78 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:10 compute-0 ceph-mon[192821]: 6.11 scrub starts
Dec 03 01:22:10 compute-0 ceph-mon[192821]: 6.11 scrub ok
Dec 03 01:22:10 compute-0 ceph-mon[192821]: 2.6 deep-scrub starts
Dec 03 01:22:10 compute-0 ceph-mon[192821]: 2.6 deep-scrub ok
Dec 03 01:22:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 03 01:22:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 03 01:22:10 compute-0 ceph-mon[192821]: osdmap e78: 3 total, 3 up, 3 in
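The audit trail above and below shows mgr.compute-0.rysove stepping default.rgw.log's pgp_num_actual up one value at a time (11 at 01:22:10 through 22 at 01:22:32), which is how the mgr ramps placement-group splits gradually rather than moving everything at once. A sketch to recover that ramp from a saved journal capture, with the regex keyed to the command JSON exactly as it appears in these lines:

import re

CMD = re.compile(
    r'"pool": "(?P<pool>[^"]+)", "var": "pgp_num_actual", '
    r'"val": "(?P<val>\d+)"'
)

def pgp_ramp(journal_text: str) -> dict[str, list[int]]:
    """Per-pool sequence of pgp_num_actual values, consecutive dupes dropped
    (each value appears in handle_command, dispatch, and finished lines)."""
    ramps: dict[str, list[int]] = {}
    for m in CMD.finditer(journal_text):
        vals = ramps.setdefault(m["pool"], [])
        v = int(m["val"])
        if not vals or vals[-1] != v:
            vals.append(v)
    return ramps

On this capture it yields {'default.rgw.log': [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]}.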
Dec 03 01:22:11 compute-0 sshd-session[224584]: Accepted publickey for zuul from 192.168.122.30 port 54440 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:22:11 compute-0 systemd-logind[800]: New session 41 of user zuul.
Dec 03 01:22:11 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 03 01:22:11 compute-0 sshd-session[224584]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:22:11 compute-0 ceph-mon[192821]: pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 5 objects/s recovering
Dec 03 01:22:11 compute-0 ceph-mon[192821]: 2.8 scrub starts
Dec 03 01:22:11 compute-0 ceph-mon[192821]: 2.8 scrub ok
Dec 03 01:22:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 111 B/s, 4 objects/s recovering
Dec 03 01:22:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Dec 03 01:22:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 03 01:22:12 compute-0 python3.9[224737]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 03 01:22:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 03 01:22:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 03 01:22:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 03 01:22:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 03 01:22:12 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 03 01:22:13 compute-0 ceph-mon[192821]: pgmap v181: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 111 B/s, 4 objects/s recovering
Dec 03 01:22:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 03 01:22:13 compute-0 ceph-mon[192821]: osdmap e79: 3 total, 3 up, 3 in
Dec 03 01:22:13 compute-0 podman[224885]: 2025-12-03 01:22:13.838872461 +0000 UTC m=+0.137016122 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible)
Dec 03 01:22:14 compute-0 python3.9[224927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:22:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 6 objects/s recovering
Dec 03 01:22:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Dec 03 01:22:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 03 01:22:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 03 01:22:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 03 01:22:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 03 01:22:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 03 01:22:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 03 01:22:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:15 compute-0 sudo[225084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdgrztdaykowzrimxdddbgijdniukrve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724934.8076725-45-160500720658182/AnsiballZ_command.py'
Dec 03 01:22:15 compute-0 sudo[225084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:22:15 compute-0 python3.9[225086]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:22:15 compute-0 ceph-mon[192821]: pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 6 objects/s recovering
Dec 03 01:22:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 03 01:22:15 compute-0 ceph-mon[192821]: osdmap e80: 3 total, 3 up, 3 in
Dec 03 01:22:15 compute-0 sudo[225084]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.868826866s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 157.097564697s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.868764877s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.097564697s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.864498138s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 157.093978882s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.864447594s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.093978882s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Dec 03 01:22:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Dec 03 01:22:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 03 01:22:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 03 01:22:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 03 01:22:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 03 01:22:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 03 01:22:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 03 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:16 compute-0 sudo[225237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqmofheovofbdtqbtyqbnwzgkcmhysgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724936.1217098-57-175948753740870/AnsiballZ_stat.py'
Dec 03 01:22:16 compute-0 sudo[225237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:22:17 compute-0 python3.9[225239]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:22:17 compute-0 sudo[225237]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:17 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 03 01:22:17 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 03 01:22:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 03 01:22:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 03 01:22:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 03 01:22:17 compute-0 ceph-mon[192821]: pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Dec 03 01:22:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 03 01:22:17 compute-0 ceph-mon[192821]: osdmap e81: 3 total, 3 up, 3 in
Dec 03 01:22:17 compute-0 ceph-mon[192821]: 5.f scrub starts
Dec 03 01:22:17 compute-0 ceph-mon[192821]: 5.f scrub ok
Dec 03 01:22:17 compute-0 podman[225318]: 2025-12-03 01:22:17.904909727 +0000 UTC m=+0.147079845 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:22:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 03 01:22:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 03 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 82 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 82 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 2 objects/s recovering
Dec 03 01:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Dec 03 01:22:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 03 01:22:18 compute-0 sudo[225416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtemrqhopfxlnggzeqbptauadvrlueye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724937.5899878-68-11146070062572/AnsiballZ_file.py'
Dec 03 01:22:18 compute-0 sudo[225416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:22:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 03 01:22:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 03 01:22:18 compute-0 sshd-session[225334]: Invalid user temp from 173.249.50.59 port 48572
Dec 03 01:22:18 compute-0 sshd-session[225334]: Received disconnect from 173.249.50.59 port 48572:11: Bye Bye [preauth]
Dec 03 01:22:18 compute-0 sshd-session[225334]: Disconnected from invalid user temp 173.249.50.59 port 48572 [preauth]
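The "Invalid user ... from ..." lines above, like the "Connection closed by authenticating user root ..." lines later in this log, are the usual traces of password-guessing against sshd. A sketch that tallies such sources from captured journal text, keyed to exactly these two message shapes:

import re
from collections import Counter

PATTERNS = (
    re.compile(r"Invalid user \S+ from (\d+\.\d+\.\d+\.\d+)"),
    re.compile(r"Connection closed by authenticating user \S+ (\d+\.\d+\.\d+\.\d+)"),
)

def failed_sources(journal_text: str) -> Counter:
    """Count probable failed-login sources seen in sshd-session messages."""
    hits: Counter = Counter()
    for line in journal_text.splitlines():
        if "sshd" not in line:
            continue
        for pat in PATTERNS:
            m = pat.search(line)
            if m:
                hits[m.group(1)] += 1
                break
    return hits

For this section it would report one hit from 173.249.50.59 and two from 193.32.162.157.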
Dec 03 01:22:18 compute-0 python3.9[225418]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:22:18 compute-0 sudo[225416]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 03 01:22:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 03 01:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 03 01:22:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 03 01:22:18 compute-0 ceph-mon[192821]: osdmap e82: 3 total, 3 up, 3 in
Dec 03 01:22:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 03 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.466417313s) [2] async=[2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 165.617324829s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.466328621s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 165.617324829s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.465260506s) [2] async=[2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 165.617401123s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.465150833s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 165.617401123s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:19 compute-0 sudo[225569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqelxbgxvrxrnomjzpzmwjgxsfmotjgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724938.9302685-77-128169868503221/AnsiballZ_file.py'
Dec 03 01:22:19 compute-0 sudo[225569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:22:19 compute-0 python3.9[225571]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:22:19 compute-0 sudo[225569]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 03 01:22:19 compute-0 sshd-session[224562]: Connection closed by authenticating user root 193.32.162.157 port 57568 [preauth]
Dec 03 01:22:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 03 01:22:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 03 01:22:19 compute-0 ceph-mon[192821]: 4.13 scrub starts
Dec 03 01:22:19 compute-0 ceph-mon[192821]: 4.13 scrub ok
Dec 03 01:22:19 compute-0 ceph-mon[192821]: pgmap v188: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 2 objects/s recovering
Dec 03 01:22:19 compute-0 ceph-mon[192821]: 2.1d scrub starts
Dec 03 01:22:19 compute-0 ceph-mon[192821]: 2.1d scrub ok
Dec 03 01:22:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 03 01:22:19 compute-0 ceph-mon[192821]: osdmap e83: 3 total, 3 up, 3 in
Dec 03 01:22:19 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 84 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:19 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 84 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec 03 01:22:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 03 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 03 01:22:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 03 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 03 01:22:20 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 03 01:22:20 compute-0 ceph-mon[192821]: osdmap e84: 3 total, 3 up, 3 in
Dec 03 01:22:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 03 01:22:20 compute-0 python3.9[225722]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:22:21 compute-0 network[225739]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:22:21 compute-0 network[225740]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:22:21 compute-0 network[225741]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:22:21 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 03 01:22:21 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 03 01:22:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 03 01:22:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 03 01:22:21 compute-0 ceph-mon[192821]: pgmap v191: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 03 01:22:21 compute-0 ceph-mon[192821]: osdmap e85: 3 total, 3 up, 3 in
Dec 03 01:22:21 compute-0 ceph-mon[192821]: 2.a scrub starts
Dec 03 01:22:21 compute-0 ceph-mon[192821]: 2.a scrub ok
Dec 03 01:22:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Dec 03 01:22:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Dec 03 01:22:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Dec 03 01:22:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 03 01:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 03 01:22:22 compute-0 ceph-mon[192821]: 2.b scrub starts
Dec 03 01:22:22 compute-0 ceph-mon[192821]: 2.b scrub ok
Dec 03 01:22:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 03 01:22:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 03 01:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 03 01:22:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 03 01:22:23 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 03 01:22:23 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 03 01:22:23 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 03 01:22:23 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 03 01:22:23 compute-0 ceph-mon[192821]: 4.1c deep-scrub starts
Dec 03 01:22:23 compute-0 ceph-mon[192821]: 4.1c deep-scrub ok
Dec 03 01:22:23 compute-0 ceph-mon[192821]: pgmap v193: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 03 01:22:23 compute-0 ceph-mon[192821]: osdmap e86: 3 total, 3 up, 3 in
Dec 03 01:22:23 compute-0 ceph-mon[192821]: 5.c scrub starts
Dec 03 01:22:23 compute-0 ceph-mon[192821]: 5.c scrub ok
Dec 03 01:22:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 03 01:22:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 03 01:22:24 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 03 01:22:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 2 objects/s recovering
Dec 03 01:22:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Dec 03 01:22:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 03 01:22:24 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 03 01:22:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 03 01:22:24 compute-0 ceph-mon[192821]: 2.1c scrub starts
Dec 03 01:22:24 compute-0 ceph-mon[192821]: 2.1c scrub ok
Dec 03 01:22:24 compute-0 ceph-mon[192821]: 2.5 scrub starts
Dec 03 01:22:24 compute-0 ceph-mon[192821]: 2.5 scrub ok
Dec 03 01:22:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 03 01:22:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 03 01:22:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 03 01:22:24 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 03 01:22:25 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 03 01:22:25 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 03 01:22:25 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 03 01:22:25 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 03 01:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:25 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 03 01:22:25 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 03 01:22:25 compute-0 ceph-mon[192821]: 6.1f scrub starts
Dec 03 01:22:25 compute-0 ceph-mon[192821]: pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 2 objects/s recovering
Dec 03 01:22:25 compute-0 ceph-mon[192821]: 6.1f scrub ok
Dec 03 01:22:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 03 01:22:25 compute-0 ceph-mon[192821]: osdmap e87: 3 total, 3 up, 3 in
Dec 03 01:22:25 compute-0 ceph-mon[192821]: 5.19 scrub starts
Dec 03 01:22:25 compute-0 ceph-mon[192821]: 5.19 scrub ok
Dec 03 01:22:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Dec 03 01:22:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 03 01:22:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 03 01:22:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 03 01:22:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 03 01:22:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 03 01:22:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 03 01:22:26 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 03 01:22:26 compute-0 ceph-mon[192821]: 4.11 scrub starts
Dec 03 01:22:26 compute-0 ceph-mon[192821]: 4.11 scrub ok
Dec 03 01:22:26 compute-0 ceph-mon[192821]: 3.17 scrub starts
Dec 03 01:22:26 compute-0 ceph-mon[192821]: 3.17 scrub ok
Dec 03 01:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 03 01:22:27 compute-0 python3.9[226011]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
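The lineinfile task above targets /proc/cmdline, which is read-only; with create=False and state=present it acts, in effect, as an assertion that cloud-init=disabled is already on the kernel command line (it can only fail, never edit). The same check in plain Python, as a minimal sketch:

def kernel_cmdline_has(token: str, path: str = "/proc/cmdline") -> bool:
    """True if the kernel command line contains the exact token."""
    with open(path) as fh:
        return token in fh.read().split()

# e.g. kernel_cmdline_has("cloud-init=disabled")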
Dec 03 01:22:27 compute-0 ceph-mon[192821]: pgmap v197: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:27 compute-0 ceph-mon[192821]: 6.13 scrub starts
Dec 03 01:22:27 compute-0 ceph-mon[192821]: 6.13 scrub ok
Dec 03 01:22:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 03 01:22:27 compute-0 ceph-mon[192821]: osdmap e88: 3 total, 3 up, 3 in
Dec 03 01:22:28 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:22:28
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
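"prepared 0/10 changes" above means the upmap balancer evaluated the listed pools and found nothing to move this round (the 10 appears to be the per-round change budget, with max misplaced capped at 0.05). A sketch that surfaces only the rounds where the balancer did propose changes:

import re

PLAN = re.compile(r"\[balancer INFO root\] prepared (\d+)/(\d+) changes")

def busy_balancer_rounds(journal_text: str):
    """Yield (prepared, budget) for balancer rounds that proposed changes."""
    for m in PLAN.finditer(journal_text):
        prepared, budget = int(m.group(1)), int(m.group(2))
        if prepared:
            yield prepared, budget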
Dec 03 01:22:28 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Dec 03 01:22:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 03 01:22:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 03 01:22:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:22:28 compute-0 python3.9[226161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 03 01:22:28 compute-0 ceph-mon[192821]: 5.18 scrub starts
Dec 03 01:22:28 compute-0 ceph-mon[192821]: 5.18 scrub ok
Dec 03 01:22:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 03 01:22:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 03 01:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 03 01:22:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 03 01:22:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 03 01:22:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 03 01:22:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 03 01:22:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 03 01:22:29 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 89 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=89 pruub=11.021018028s) [2] r=-1 lpr=89 pi=[63,89)/1 crt=51'584 mlcod 0'0 active pruub 178.180419922s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:29 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 89 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=89 pruub=11.020961761s) [2] r=-1 lpr=89 pi=[63,89)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 178.180419922s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:29 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 89 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=89) [2] r=0 lpr=89 pi=[63,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:29 compute-0 podman[158098]: time="2025-12-03T01:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Dec 03 01:22:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 03 01:22:29 compute-0 ceph-mon[192821]: pgmap v199: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:29 compute-0 ceph-mon[192821]: 3.15 scrub starts
Dec 03 01:22:29 compute-0 ceph-mon[192821]: 3.15 scrub ok
Dec 03 01:22:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 03 01:22:29 compute-0 ceph-mon[192821]: osdmap e89: 3 total, 3 up, 3 in
Dec 03 01:22:29 compute-0 ceph-mon[192821]: 5.1 scrub starts
Dec 03 01:22:29 compute-0 ceph-mon[192821]: 5.1 scrub ok
Dec 03 01:22:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 03 01:22:30 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 03 01:22:30 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 90 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=0 lpr=90 pi=[63,90)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:30 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:30 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:30 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 90 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=0 lpr=90 pi=[63,90)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Dec 03 01:22:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 03 01:22:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:30 compute-0 python3.9[226315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:22:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 03 01:22:31 compute-0 ceph-mon[192821]: 7.1c scrub starts
Dec 03 01:22:31 compute-0 ceph-mon[192821]: 7.1c scrub ok
Dec 03 01:22:31 compute-0 ceph-mon[192821]: osdmap e90: 3 total, 3 up, 3 in
Dec 03 01:22:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 03 01:22:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 03 01:22:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 03 01:22:31 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 03 01:22:31 compute-0 sshd-session[225595]: Connection closed by authenticating user root 193.32.162.157 port 35072 [preauth]
Dec 03 01:22:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 03 01:22:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 03 01:22:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 03 01:22:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 03 01:22:31 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 91 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=90/91 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[63,90)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
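The exporter errors above mean no ovs-appctl control sockets could be found for ovn-northd or the ovs db server, consistent with those daemons not running on this node. A quick existence check, assuming the stock run directory /var/run/openvswitch (the path is an assumption; deployments vary):

from pathlib import Path

def control_sockets(run_dir: str = "/var/run/openvswitch") -> list[str]:
    """List ovs-appctl-style control sockets (*.ctl) in run_dir, if any."""
    p = Path(run_dir)
    return sorted(str(s) for s in p.glob("*.ctl")) if p.is_dir() else []

# Empty output here would match the exporter's "no control socket files" errors.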
Dec 03 01:22:31 compute-0 sudo[226472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yusxriytxavictnomwvawgsnxiycbcre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724951.1432776-125-40386253071698/AnsiballZ_setup.py'
Dec 03 01:22:31 compute-0 sudo[226472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:22:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 03 01:22:32 compute-0 ceph-mon[192821]: pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 03 01:22:32 compute-0 ceph-mon[192821]: osdmap e91: 3 total, 3 up, 3 in
Dec 03 01:22:32 compute-0 ceph-mon[192821]: 2.9 scrub starts
Dec 03 01:22:32 compute-0 ceph-mon[192821]: 2.9 scrub ok
Dec 03 01:22:32 compute-0 ceph-mon[192821]: 3.18 scrub starts
Dec 03 01:22:32 compute-0 ceph-mon[192821]: 3.18 scrub ok
Dec 03 01:22:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 03 01:22:32 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 03 01:22:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92) [2] r=0 lpr=92 pi=[63,92)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92) [2] r=0 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:32 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=90/91 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92 pruub=15.261384964s) [2] async=[2] r=-1 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 51'584 active pruub 185.113540649s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:32 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=90/91 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92 pruub=15.261201859s) [2] r=-1 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 185.113540649s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:32 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Dec 03 01:22:32 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Dec 03 01:22:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 03 01:22:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 03 01:22:32 compute-0 python3.9[226474]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:22:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 03 01:22:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 03 01:22:32 compute-0 sudo[226472]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 03 01:22:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 03 01:22:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 03 01:22:33 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 03 01:22:33 compute-0 ceph-mon[192821]: osdmap e92: 3 total, 3 up, 3 in
Dec 03 01:22:33 compute-0 ceph-mon[192821]: 5.1a deep-scrub starts
Dec 03 01:22:33 compute-0 ceph-mon[192821]: 5.1a deep-scrub ok
Dec 03 01:22:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 03 01:22:33 compute-0 ceph-mon[192821]: 3.16 scrub starts
Dec 03 01:22:33 compute-0 ceph-mon[192821]: 3.16 scrub ok
Dec 03 01:22:33 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 93 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=92/93 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92) [2] r=0 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:33 compute-0 sudo[226556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqmpegzvxjvwzstrjnbwttddhewmlka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724951.1432776-125-40386253071698/AnsiballZ_dnf.py'
Dec 03 01:22:33 compute-0 sudo[226556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:22:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 03 01:22:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 03 01:22:33 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 93 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=93 pruub=14.969126701s) [1] r=-1 lpr=93 pi=[63,93)/1 crt=51'584 mlcod 0'0 active pruub 186.162857056s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:33 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 93 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=93 pruub=14.969079971s) [1] r=-1 lpr=93 pi=[63,93)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 186.162857056s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:33 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=93) [1] r=0 lpr=93 pi=[63,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:33 compute-0 python3.9[226558]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 03 01:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 03 01:22:34 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 03 01:22:34 compute-0 ceph-mon[192821]: pgmap v205: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 03 01:22:34 compute-0 ceph-mon[192821]: osdmap e93: 3 total, 3 up, 3 in
Dec 03 01:22:34 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 94 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=0 lpr=94 pi=[63,94)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:34 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 94 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=0 lpr=94 pi=[63,94)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:34 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:34 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:34 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 03 01:22:34 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 03 01:22:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 03 01:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Dec 03 01:22:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 03 01:22:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 03 01:22:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 03 01:22:34 compute-0 podman[226592]: 2025-12-03 01:22:34.883821481 +0000 UTC m=+0.120736252 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Dec 03 01:22:34 compute-0 podman[226591]: 2025-12-03 01:22:34.889252291 +0000 UTC m=+0.127477518 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:22:34 compute-0 podman[226590]: 2025-12-03 01:22:34.913821624 +0000 UTC m=+0.151825575 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:22:34 compute-0 podman[226593]: 2025-12-03 01:22:34.923949642 +0000 UTC m=+0.154748006 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 03 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 03 01:22:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 03 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 03 01:22:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 03 01:22:35 compute-0 ceph-mon[192821]: 7.13 scrub starts
Dec 03 01:22:35 compute-0 ceph-mon[192821]: 7.13 scrub ok
Dec 03 01:22:35 compute-0 ceph-mon[192821]: osdmap e94: 3 total, 3 up, 3 in
Dec 03 01:22:35 compute-0 ceph-mon[192821]: 7.11 scrub starts
Dec 03 01:22:35 compute-0 ceph-mon[192821]: 7.11 scrub ok
Dec 03 01:22:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 03 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 95 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=94/95 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[63,94)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 03 01:22:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 03 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 03 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 03 01:22:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 03 01:22:35 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 95 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=95 pruub=15.210299492s) [0] r=-1 lpr=95 pi=[70,95)/1 crt=51'584 mlcod 0'0 active pruub 174.617996216s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:35 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 96 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=95 pruub=15.210191727s) [0] r=-1 lpr=95 pi=[70,95)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 174.617996216s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=95) [0] r=0 lpr=96 pi=[70,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=94/95 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96 pruub=15.794424057s) [1] async=[1] r=-1 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 51'584 active pruub 188.931213379s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=94/95 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96 pruub=15.794380188s) [1] r=-1 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 188.931213379s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96) [1] r=0 lpr=96 pi=[63,96)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96) [1] r=0 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Dec 03 01:22:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Dec 03 01:22:35 compute-0 sshd-session[226579]: Received disconnect from 14.103.201.7 port 46632:11: Bye Bye [preauth]
Dec 03 01:22:35 compute-0 sshd-session[226579]: Disconnected from authenticating user root 14.103.201.7 port 46632 [preauth]
Dec 03 01:22:36 compute-0 ceph-mon[192821]: pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 03 01:22:36 compute-0 ceph-mon[192821]: 3.12 scrub starts
Dec 03 01:22:36 compute-0 ceph-mon[192821]: 3.12 scrub ok
Dec 03 01:22:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 03 01:22:36 compute-0 ceph-mon[192821]: osdmap e95: 3 total, 3 up, 3 in
Dec 03 01:22:36 compute-0 ceph-mon[192821]: 3.11 scrub starts
Dec 03 01:22:36 compute-0 ceph-mon[192821]: 3.11 scrub ok
Dec 03 01:22:36 compute-0 ceph-mon[192821]: osdmap e96: 3 total, 3 up, 3 in
Dec 03 01:22:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 03 01:22:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 03 01:22:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Dec 03 01:22:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 03 01:22:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 03 01:22:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 03 01:22:36 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 97 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=0 lpr=97 pi=[70,97)/2 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:36 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 97 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=0 lpr=97 pi=[70,97)/2 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[70,97)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[70,97)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 97 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=96/97 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96) [1] r=0 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 03 01:22:37 compute-0 ceph-mon[192821]: 7.9 deep-scrub starts
Dec 03 01:22:37 compute-0 ceph-mon[192821]: 7.9 deep-scrub ok
Dec 03 01:22:37 compute-0 ceph-mon[192821]: 7.15 scrub starts
Dec 03 01:22:37 compute-0 ceph-mon[192821]: 7.15 scrub ok
Dec 03 01:22:37 compute-0 ceph-mon[192821]: osdmap e97: 3 total, 3 up, 3 in
Dec 03 01:22:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 03 01:22:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 03 01:22:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 03 01:22:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 03 01:22:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 03 01:22:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 03 01:22:37 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 98 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=97/98 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] async=[0] r=0 lpr=97 pi=[70,97)/2 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:37 compute-0 sshd-session[226706]: Invalid user sonarqube from 80.253.31.232 port 54602
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:22:37 compute-0 sshd-session[226706]: Received disconnect from 80.253.31.232 port 54602:11: Bye Bye [preauth]
Dec 03 01:22:37 compute-0 sshd-session[226706]: Disconnected from invalid user sonarqube 80.253.31.232 port 54602 [preauth]
Dec 03 01:22:38 compute-0 ceph-mon[192821]: pgmap v211: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Dec 03 01:22:38 compute-0 ceph-mon[192821]: 2.17 scrub starts
Dec 03 01:22:38 compute-0 ceph-mon[192821]: 2.17 scrub ok
Dec 03 01:22:38 compute-0 ceph-mon[192821]: osdmap e98: 3 total, 3 up, 3 in
Dec 03 01:22:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 03 01:22:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 03 01:22:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 03 01:22:38 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 03 01:22:38 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99) [0] r=0 lpr=99 pi=[70,99)/2 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:38 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99) [0] r=0 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:38 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=97/98 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.138869286s) [0] async=[0] r=-1 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 51'584 active pruub 177.577514648s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:38 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=97/98 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.138772011s) [0] r=-1 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 177.577514648s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:38 compute-0 sudo[226711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:38 compute-0 sudo[226711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:38 compute-0 sudo[226711]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 03 01:22:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 03 01:22:38 compute-0 sudo[226736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:22:38 compute-0 sudo[226736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:38 compute-0 sudo[226736]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:38 compute-0 sudo[226761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:38 compute-0 sudo[226761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:38 compute-0 sudo[226761]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:38 compute-0 sudo[226786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:22:38 compute-0 sudo[226786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:39 compute-0 podman[226810]: 2025-12-03 01:22:39.080307765 +0000 UTC m=+0.132897106 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:22:39 compute-0 ceph-mon[192821]: 3.f scrub starts
Dec 03 01:22:39 compute-0 ceph-mon[192821]: 3.f scrub ok
Dec 03 01:22:39 compute-0 ceph-mon[192821]: osdmap e99: 3 total, 3 up, 3 in
Dec 03 01:22:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 03 01:22:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 03 01:22:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 03 01:22:39 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 100 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=99/100 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99) [0] r=0 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 03 01:22:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 03 01:22:39 compute-0 podman[226901]: 2025-12-03 01:22:39.764164364 +0000 UTC m=+0.119026566 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:22:39 compute-0 podman[226901]: 2025-12-03 01:22:39.875128178 +0000 UTC m=+0.229990310 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:22:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 03 01:22:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 03 01:22:40 compute-0 ceph-mon[192821]: pgmap v214: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 03 01:22:40 compute-0 ceph-mon[192821]: 7.6 scrub starts
Dec 03 01:22:40 compute-0 ceph-mon[192821]: 7.6 scrub ok
Dec 03 01:22:40 compute-0 ceph-mon[192821]: osdmap e100: 3 total, 3 up, 3 in
Dec 03 01:22:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 03 01:22:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 03 01:22:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 03 01:22:41 compute-0 sudo[226786]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:22:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:22:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:41 compute-0 ceph-mon[192821]: 3.3 scrub starts
Dec 03 01:22:41 compute-0 ceph-mon[192821]: 3.3 scrub ok
Dec 03 01:22:41 compute-0 ceph-mon[192821]: 2.4 scrub starts
Dec 03 01:22:41 compute-0 ceph-mon[192821]: 2.4 scrub ok
Dec 03 01:22:41 compute-0 ceph-mon[192821]: pgmap v217: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 03 01:22:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:41 compute-0 sudo[227052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:41 compute-0 sudo[227052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:41 compute-0 sudo[227052]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 03 01:22:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 03 01:22:41 compute-0 sudo[227077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:22:41 compute-0 sudo[227077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:41 compute-0 sudo[227077]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:41 compute-0 sudo[227102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:41 compute-0 sudo[227102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:41 compute-0 sudo[227102]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:41 compute-0 sudo[227127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:22:41 compute-0 sudo[227127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:42 compute-0 ceph-mon[192821]: 7.4 scrub starts
Dec 03 01:22:42 compute-0 ceph-mon[192821]: 7.4 scrub ok
Dec 03 01:22:42 compute-0 ceph-mon[192821]: 3.e scrub starts
Dec 03 01:22:42 compute-0 ceph-mon[192821]: 3.e scrub ok
Dec 03 01:22:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Dec 03 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 03 01:22:42 compute-0 sudo[227127]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bb9a678d-0fae-45e4-b1a1-16b768a5f871 does not exist
Dec 03 01:22:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2abab8ee-f607-40c9-91f0-e9d4e19e5fd1 does not exist
Dec 03 01:22:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 51b9be22-dc49-4b84-82d9-80cd0a7f3329 does not exist
Dec 03 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:22:42 compute-0 sudo[227182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:42 compute-0 sudo[227182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:42 compute-0 sudo[227182]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:42 compute-0 sudo[227207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:22:42 compute-0 sudo[227207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:42 compute-0 sudo[227207]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:42 compute-0 sudo[227232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:42 compute-0 sudo[227232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:42 compute-0 sudo[227232]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:42 compute-0 sudo[227257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:22:42 compute-0 sudo[227257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:43 compute-0 sshd-session[226389]: Connection closed by authenticating user root 193.32.162.157 port 43362 [preauth]
Dec 03 01:22:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 03 01:22:43 compute-0 ceph-mon[192821]: pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 03 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:22:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 03 01:22:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 03 01:22:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 03 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.55683098 +0000 UTC m=+0.085376983 container create e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.518673754 +0000 UTC m=+0.047219797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:22:43 compute-0 systemd[1]: Started libpod-conmon-e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b.scope.
Dec 03 01:22:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.720762406 +0000 UTC m=+0.249308409 container init e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.73911833 +0000 UTC m=+0.267664333 container start e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:22:43 compute-0 brave_shaw[227333]: 167 167
Dec 03 01:22:43 compute-0 systemd[1]: libpod-e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b.scope: Deactivated successfully.
Dec 03 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.754336737 +0000 UTC m=+0.282882750 container attach e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.755043487 +0000 UTC m=+0.283589460 container died e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 01:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b2358cbf7342a65578f1c7824bedb51afc73cbaae83f049c7a3cfd445496635-merged.mount: Deactivated successfully.
Dec 03 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.83247216 +0000 UTC m=+0.361018133 container remove e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:22:43 compute-0 systemd[1]: libpod-conmon-e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b.scope: Deactivated successfully.
Dec 03 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.08983819 +0000 UTC m=+0.079904843 container create f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.053048651 +0000 UTC m=+0.043115304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:22:44 compute-0 systemd[1]: Started libpod-conmon-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope.
Dec 03 01:22:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 03 01:22:44 compute-0 ceph-mon[192821]: osdmap e101: 3 total, 3 up, 3 in
Dec 03 01:22:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 03 01:22:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Dec 03 01:22:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 03 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.28411221 +0000 UTC m=+0.274178923 container init f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:22:44 compute-0 podman[227371]: 2025-12-03 01:22:44.29544166 +0000 UTC m=+0.124693371 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-type=git, architecture=x86_64)
Dec 03 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.313047413 +0000 UTC m=+0.303114046 container start f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.318086601 +0000 UTC m=+0.308153315 container attach f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:22:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 03 01:22:45 compute-0 ceph-mon[192821]: pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 03 01:22:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 03 01:22:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 03 01:22:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 03 01:22:45 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 03 01:22:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:45 compute-0 priceless_dubinsky[227382]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:22:45 compute-0 priceless_dubinsky[227382]: --> relative data size: 1.0
Dec 03 01:22:45 compute-0 priceless_dubinsky[227382]: --> All data devices are unavailable
Dec 03 01:22:45 compute-0 systemd[1]: libpod-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope: Deactivated successfully.
Dec 03 01:22:45 compute-0 podman[227357]: 2025-12-03 01:22:45.620728774 +0000 UTC m=+1.610795477 container died f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:22:45 compute-0 systemd[1]: libpod-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope: Consumed 1.242s CPU time.
Dec 03 01:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2-merged.mount: Deactivated successfully.
Dec 03 01:22:45 compute-0 podman[227357]: 2025-12-03 01:22:45.743000738 +0000 UTC m=+1.733067391 container remove f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:22:45 compute-0 systemd[1]: libpod-conmon-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope: Deactivated successfully.
Dec 03 01:22:45 compute-0 sudo[227257]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:45 compute-0 sudo[227439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:45 compute-0 sudo[227439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:45 compute-0 sudo[227439]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:46 compute-0 sudo[227464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:22:46 compute-0 sudo[227464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:46 compute-0 sudo[227464]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:46 compute-0 sudo[227489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:46 compute-0 sudo[227489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:46 compute-0 sudo[227489]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:46 compute-0 sudo[227514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:22:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 03 01:22:46 compute-0 sudo[227514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 03 01:22:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 03 01:22:46 compute-0 ceph-mon[192821]: osdmap e102: 3 total, 3 up, 3 in
Dec 03 01:22:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec 03 01:22:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Dec 03 01:22:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 03 01:22:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 03 01:22:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 03 01:22:46 compute-0 podman[227575]: 2025-12-03 01:22:46.851716932 +0000 UTC m=+0.082545385 container create 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:22:46 compute-0 podman[227575]: 2025-12-03 01:22:46.818778918 +0000 UTC m=+0.049607421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:22:46 compute-0 systemd[1]: Started libpod-conmon-92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c.scope.
Dec 03 01:22:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.04010925 +0000 UTC m=+0.270937713 container init 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.057498007 +0000 UTC m=+0.288326470 container start 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.064876769 +0000 UTC m=+0.295705272 container attach 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:22:47 compute-0 hardcore_knuth[227591]: 167 167
Dec 03 01:22:47 compute-0 systemd[1]: libpod-92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c.scope: Deactivated successfully.
Dec 03 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.070849813 +0000 UTC m=+0.301678326 container died 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c742a451224b77b42ebcf673a5ca6f5d6212e439ac6fbbd8fa46ffea884f64b-merged.mount: Deactivated successfully.
Dec 03 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.159461844 +0000 UTC m=+0.390290297 container remove 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:22:47 compute-0 systemd[1]: libpod-conmon-92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c.scope: Deactivated successfully.
Dec 03 01:22:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 03 01:22:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 03 01:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 03 01:22:47 compute-0 ceph-mon[192821]: 7.a scrub starts
Dec 03 01:22:47 compute-0 ceph-mon[192821]: 7.a scrub ok
Dec 03 01:22:47 compute-0 ceph-mon[192821]: pgmap v222: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec 03 01:22:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 03 01:22:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 03 01:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 03 01:22:47 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 03 01:22:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 03 01:22:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 03 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.459244236 +0000 UTC m=+0.103119690 container create d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.429157701 +0000 UTC m=+0.073033145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:22:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 103 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=103 pruub=9.852219582s) [2] r=-1 lpr=103 pi=[64,103)/1 crt=51'584 mlcod 0'0 active pruub 195.177246094s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 103 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=103 pruub=9.852044106s) [2] r=-1 lpr=103 pi=[64,103)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 195.177246094s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=103) [2] r=0 lpr=103 pi=[64,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:47 compute-0 systemd[1]: Started libpod-conmon-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope.
Dec 03 01:22:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.657170745 +0000 UTC m=+0.301046189 container init d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.675515919 +0000 UTC m=+0.319391343 container start d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.682655364 +0000 UTC m=+0.326530798 container attach d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:22:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 03 01:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Dec 03 01:22:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 03 01:22:48 compute-0 ceph-mon[192821]: 7.3 scrub starts
Dec 03 01:22:48 compute-0 ceph-mon[192821]: 7.3 scrub ok
Dec 03 01:22:48 compute-0 ceph-mon[192821]: 3.5 scrub starts
Dec 03 01:22:48 compute-0 ceph-mon[192821]: 3.5 scrub ok
Dec 03 01:22:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 03 01:22:48 compute-0 ceph-mon[192821]: osdmap e103: 3 total, 3 up, 3 in
Dec 03 01:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 03 01:22:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 03 01:22:48 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[64,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:48 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[64,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:48 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 104 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=0 lpr=104 pi=[64,104)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:48 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 104 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=0 lpr=104 pi=[64,104)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]: {
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:     "0": [
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:         {
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "devices": [
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "/dev/loop3"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             ],
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_name": "ceph_lv0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_size": "21470642176",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "name": "ceph_lv0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "tags": {
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cluster_name": "ceph",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.crush_device_class": "",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.encrypted": "0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osd_id": "0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.type": "block",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.vdo": "0"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             },
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "type": "block",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "vg_name": "ceph_vg0"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:         }
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:     ],
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:     "1": [
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:         {
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "devices": [
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "/dev/loop4"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             ],
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_name": "ceph_lv1",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_size": "21470642176",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "name": "ceph_lv1",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "tags": {
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cluster_name": "ceph",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.crush_device_class": "",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.encrypted": "0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osd_id": "1",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.type": "block",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.vdo": "0"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             },
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "type": "block",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "vg_name": "ceph_vg1"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:         }
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:     ],
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:     "2": [
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:         {
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "devices": [
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "/dev/loop5"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             ],
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_name": "ceph_lv2",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_size": "21470642176",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "name": "ceph_lv2",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "tags": {
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.cluster_name": "ceph",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.crush_device_class": "",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.encrypted": "0",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osd_id": "2",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.type": "block",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:                 "ceph.vdo": "0"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             },
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "type": "block",
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:             "vg_name": "ceph_vg2"
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:         }
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]:     ]
Dec 03 01:22:48 compute-0 laughing_goldstine[227627]: }
Dec 03 01:22:48 compute-0 systemd[1]: libpod-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope: Deactivated successfully.
Dec 03 01:22:48 compute-0 conmon[227627]: conmon d2ad25cc24cd95b15638 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope/container/memory.events
Dec 03 01:22:48 compute-0 podman[227614]: 2025-12-03 01:22:48.516288512 +0000 UTC m=+1.160163936 container died d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230-merged.mount: Deactivated successfully.
Dec 03 01:22:48 compute-0 podman[227614]: 2025-12-03 01:22:48.624793418 +0000 UTC m=+1.268668842 container remove d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:22:48 compute-0 systemd[1]: libpod-conmon-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope: Deactivated successfully.
Dec 03 01:22:48 compute-0 sudo[227514]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:48 compute-0 podman[227641]: 2025-12-03 01:22:48.718262262 +0000 UTC m=+0.163031813 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:22:48 compute-0 sudo[227672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:48 compute-0 sudo[227672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:48 compute-0 sudo[227672]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:48 compute-0 sudo[227703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:22:48 compute-0 sudo[227703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:48 compute-0 sudo[227703]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:49 compute-0 sudo[227728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:49 compute-0 sudo[227728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:49 compute-0 sudo[227728]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:49 compute-0 sudo[227753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:22:49 compute-0 sudo[227753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:49 compute-0 ceph-mon[192821]: 3.a scrub starts
Dec 03 01:22:49 compute-0 ceph-mon[192821]: 3.a scrub ok
Dec 03 01:22:49 compute-0 ceph-mon[192821]: pgmap v224: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:22:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 03 01:22:49 compute-0 ceph-mon[192821]: osdmap e104: 3 total, 3 up, 3 in
Dec 03 01:22:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 03 01:22:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 03 01:22:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 03 01:22:49 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 03 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.766380943 +0000 UTC m=+0.090663127 container create 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.728059732 +0000 UTC m=+0.052341956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:22:49 compute-0 systemd[1]: Started libpod-conmon-1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225.scope.
Dec 03 01:22:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.902648961 +0000 UTC m=+0.226931195 container init 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.920123551 +0000 UTC m=+0.244405735 container start 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:22:49 compute-0 clever_dubinsky[227847]: 167 167
Dec 03 01:22:49 compute-0 systemd[1]: libpod-1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225.scope: Deactivated successfully.
Dec 03 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.929129388 +0000 UTC m=+0.253411622 container attach 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.937335013 +0000 UTC m=+0.261617197 container died 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 01:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-00ec8186e83add3c50326dee57e9090d54f6f8e0d99f3b59b16de38250330549-merged.mount: Deactivated successfully.
Dec 03 01:22:50 compute-0 podman[227826]: 2025-12-03 01:22:50.020014011 +0000 UTC m=+0.344296195 container remove 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:22:50 compute-0 systemd[1]: libpod-conmon-1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225.scope: Deactivated successfully.
Dec 03 01:22:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 105 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=104/105 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] async=[2] r=0 lpr=104 pi=[64,104)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11/247 objects misplaced (4.453%)
Dec 03 01:22:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.341952452 +0000 UTC m=+0.108502887 container create 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:22:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 03 01:22:50 compute-0 ceph-mon[192821]: osdmap e105: 3 total, 3 up, 3 in
Dec 03 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.294690516 +0000 UTC m=+0.061240991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:22:50 compute-0 systemd[1]: Started libpod-conmon-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope.
Dec 03 01:22:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 03 01:22:50 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:22:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 03 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
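(Note: the kernel's "supports timestamps until 2038 (0x7fffffff)" warnings above refer to the largest signed 32-bit epoch second on these XFS bind mounts. A minimal sketch, using only the Python standard library, confirming what that hex limit means as a date:)

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit Unix timestamp,
# exactly the value the kernel prints for these xfs mounts.
limit = 0x7FFFFFFF
print(limit)  # 2147483647
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- the classic "Y2038" cutoff
```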
Dec 03 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.529679772 +0000 UTC m=+0.296230217 container init 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.541154466 +0000 UTC m=+0.307704871 container start 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.548023715 +0000 UTC m=+0.314574150 container attach 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:22:51 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 03 01:22:51 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 03 01:22:51 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 03 01:22:51 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 03 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 03 01:22:51 compute-0 ceph-mon[192821]: pgmap v227: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11/247 objects misplaced (4.453%)
Dec 03 01:22:51 compute-0 ceph-mon[192821]: 3.9 scrub starts
Dec 03 01:22:51 compute-0 ceph-mon[192821]: 3.9 scrub ok
Dec 03 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 03 01:22:51 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 03 01:22:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=104/105 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106 pruub=14.746579170s) [2] async=[2] r=-1 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 51'584 active pruub 204.004882812s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=104/105 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106 pruub=14.746412277s) [2] r=-1 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 204.004882812s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:51 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106) [2] r=0 lpr=106 pi=[64,106)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:51 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106) [2] r=0 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
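(Note: the osd.0/osd.2 lines above show PG 9.19 changing acting primary during a peering interval, ending with osd.2 transitioning to Primary. The same state can be read from outside the OSD log via the stock `ceph pg <pgid> query` command; a sketch, assuming the `ceph` CLI and admin credentials are available as they would be in a cephadm shell on this node:)

```python
import json
import subprocess

def pg_state(pgid: str) -> str:
    """Return the current state string for one placement group."""
    out = subprocess.run(
        ["ceph", "pg", pgid, "query", "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["state"]

# e.g. pg_state("9.19") -> "active+remapped" while objects are misplaced,
# settling to "active+clean" once peering and recovery complete.
```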
Dec 03 01:22:51 compute-0 nice_joliot[227892]: {
Dec 03 01:22:51 compute-0 nice_joliot[227892]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "osd_id": 2,
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "type": "bluestore"
Dec 03 01:22:51 compute-0 nice_joliot[227892]:     },
Dec 03 01:22:51 compute-0 nice_joliot[227892]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "osd_id": 1,
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "type": "bluestore"
Dec 03 01:22:51 compute-0 nice_joliot[227892]:     },
Dec 03 01:22:51 compute-0 nice_joliot[227892]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "osd_id": 0,
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:22:51 compute-0 nice_joliot[227892]:         "type": "bluestore"
Dec 03 01:22:51 compute-0 nice_joliot[227892]:     }
Dec 03 01:22:51 compute-0 nice_joliot[227892]: }
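(Note: the short-lived container `nice_joliot` above prints a JSON map of OSD UUIDs to their backing devices, as cephadm gathers device facts on this host. A minimal sketch turning that blob into an `osd_id` → device table; the filename `osd_inventory.json` is hypothetical, standing in for the JSON block saved verbatim:)

```python
import json

# Hypothetical: the JSON printed by nice_joliot saved to osd_inventory.json
with open("osd_inventory.json") as f:
    inventory = json.load(f)

# Keys are OSD UUIDs; each value carries the id, backing LV, and store type.
for osd_uuid, meta in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")
# osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)
# osd.1: /dev/mapper/ceph_vg1-ceph_lv1 (bluestore)
# osd.2: /dev/mapper/ceph_vg2-ceph_lv2 (bluestore)
```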
Dec 03 01:22:51 compute-0 systemd[1]: libpod-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope: Deactivated successfully.
Dec 03 01:22:51 compute-0 podman[227870]: 2025-12-03 01:22:51.645617722 +0000 UTC m=+1.412168127 container died 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:22:51 compute-0 systemd[1]: libpod-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope: Consumed 1.093s CPU time.
Dec 03 01:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88-merged.mount: Deactivated successfully.
Dec 03 01:22:51 compute-0 podman[227870]: 2025-12-03 01:22:51.754626392 +0000 UTC m=+1.521176807 container remove 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 01:22:51 compute-0 systemd[1]: libpod-conmon-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope: Deactivated successfully.
Dec 03 01:22:51 compute-0 sudo[227753]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:22:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:22:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 85c914a6-0973-46fc-8145-be06b55ef61b does not exist
Dec 03 01:22:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 35e69fdf-0d0a-4833-a23f-23db4ee8aa04 does not exist
Dec 03 01:22:51 compute-0 sudo[227951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:22:51 compute-0 sudo[227951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:51 compute-0 sudo[227951]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:52 compute-0 sudo[227976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:22:52 compute-0 sudo[227976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:22:52 compute-0 sudo[227976]: pam_unix(sudo:session): session closed for user root
Dec 03 01:22:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11/247 objects misplaced (4.453%)
Dec 03 01:22:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 03 01:22:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 03 01:22:52 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 03 01:22:52 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 107 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=106/107 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106) [2] r=0 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:22:52 compute-0 ceph-mon[192821]: 6.1e scrub starts
Dec 03 01:22:52 compute-0 ceph-mon[192821]: 6.1e scrub ok
Dec 03 01:22:52 compute-0 ceph-mon[192821]: 7.1 scrub starts
Dec 03 01:22:52 compute-0 ceph-mon[192821]: 7.1 scrub ok
Dec 03 01:22:52 compute-0 ceph-mon[192821]: osdmap e106: 3 total, 3 up, 3 in
Dec 03 01:22:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:22:52 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 03 01:22:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 03 01:22:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 03 01:22:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 03 01:22:53 compute-0 ceph-mon[192821]: pgmap v229: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11/247 objects misplaced (4.453%)
Dec 03 01:22:53 compute-0 ceph-mon[192821]: osdmap e107: 3 total, 3 up, 3 in
Dec 03 01:22:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Dec 03 01:22:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 03 01:22:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 03 01:22:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 03 01:22:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 03 01:22:54 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 03 01:22:54 compute-0 ceph-mon[192821]: 4.d scrub starts
Dec 03 01:22:54 compute-0 ceph-mon[192821]: 4.d scrub ok
Dec 03 01:22:54 compute-0 ceph-mon[192821]: 7.5 scrub starts
Dec 03 01:22:54 compute-0 ceph-mon[192821]: 7.5 scrub ok
Dec 03 01:22:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 03 01:22:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 03 01:22:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 03 01:22:55 compute-0 sshd-session[227305]: Connection closed by authenticating user root 193.32.162.157 port 53604 [preauth]
Dec 03 01:22:55 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 03 01:22:55 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 03 01:22:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:22:55 compute-0 ceph-mon[192821]: pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 03 01:22:55 compute-0 ceph-mon[192821]: osdmap e108: 3 total, 3 up, 3 in
Dec 03 01:22:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Dec 03 01:22:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 03 01:22:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 03 01:22:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 03 01:22:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 03 01:22:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 03 01:22:56 compute-0 ceph-mon[192821]: 6.d scrub starts
Dec 03 01:22:56 compute-0 ceph-mon[192821]: 6.d scrub ok
Dec 03 01:22:56 compute-0 ceph-mon[192821]: 7.8 scrub starts
Dec 03 01:22:56 compute-0 ceph-mon[192821]: 7.8 scrub ok
Dec 03 01:22:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 03 01:22:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 03 01:22:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 03 01:22:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Dec 03 01:22:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Dec 03 01:22:57 compute-0 ceph-mon[192821]: pgmap v233: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 03 01:22:57 compute-0 ceph-mon[192821]: osdmap e109: 3 total, 3 up, 3 in
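(Note: across epochs e105–e109 the mgr walks `pgp_num_actual` on pool `default.rgw.log` up one step at a time, 27 → 28 → 29, which is how Ceph throttles the data movement behind a pg_num increase. A sketch that polls the pool until placement has caught up; the command and field names are the stock `ceph osd pool get` interface, and the 5-second interval is an arbitrary choice:)

```python
import json
import subprocess
import time

def pool_val(pool: str, var: str) -> int:
    out = subprocess.run(
        ["ceph", "osd", "pool", "get", pool, var, "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)[var]

# Wait until the mgr has stepped pgp_num all the way to the pg_num target.
while pool_val("default.rgw.log", "pgp_num") < pool_val("default.rgw.log", "pg_num"):
    time.sleep(5)
print("placement groups fully split")
```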
Dec 03 01:22:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 03 01:22:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 03 01:22:58 compute-0 sshd-session[228002]: Connection closed by authenticating user root 80.94.95.115 port 57688 [preauth]
Dec 03 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 109 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=109 pruub=9.550034523s) [0] r=-1 lpr=109 pi=[83,109)/1 crt=51'584 mlcod 0'0 active pruub 191.896911621s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 109 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=109 pruub=9.549950600s) [0] r=-1 lpr=109 pi=[83,109)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 191.896911621s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:58 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=109) [0] r=0 lpr=109 pi=[83,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Dec 03 01:22:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 03 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 03 01:22:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 03 01:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 03 01:22:58 compute-0 ceph-mon[192821]: 6.c scrub starts
Dec 03 01:22:58 compute-0 ceph-mon[192821]: 6.c scrub ok
Dec 03 01:22:58 compute-0 ceph-mon[192821]: 3.c deep-scrub starts
Dec 03 01:22:58 compute-0 ceph-mon[192821]: 3.c deep-scrub ok
Dec 03 01:22:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 03 01:22:58 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 03 01:22:58 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[83,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:58 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[83,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 110 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=0 lpr=110 pi=[83,110)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 110 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=0 lpr=110 pi=[83,110)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:22:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 03 01:22:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 03 01:22:59 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 03 01:22:59 compute-0 ceph-mon[192821]: 4.f scrub starts
Dec 03 01:22:59 compute-0 ceph-mon[192821]: 4.f scrub ok
Dec 03 01:22:59 compute-0 ceph-mon[192821]: pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:22:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 03 01:22:59 compute-0 ceph-mon[192821]: osdmap e110: 3 total, 3 up, 3 in
Dec 03 01:22:59 compute-0 podman[158098]: time="2025-12-03T01:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
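(Note: the `podman[158098]` lines above are a client hitting the libpod REST API over the service socket, e.g. `GET /v4.9.3/libpod/containers/json?all=true...`. The same listing is available from the CLI; a sketch, assuming `podman` is on PATH:)

```python
import json
import subprocess

# `podman ps --all --format json` returns the same container list the
# /libpod/containers/json endpoint serves (here --all mirrors all=true).
out = subprocess.run(
    ["podman", "ps", "--all", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
for ctr in json.loads(out):
    print(ctr["Names"][0], ctr["State"])
```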
Dec 03 01:23:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:00 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 111 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=110/111 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[83,110)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:23:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 03 01:23:00 compute-0 ceph-mon[192821]: osdmap e111: 3 total, 3 up, 3 in
Dec 03 01:23:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 03 01:23:00 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 03 01:23:00 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=110/111 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112 pruub=15.621837616s) [0] async=[0] r=-1 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 51'584 active pruub 200.365066528s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:00 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=110/111 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112 pruub=15.621644974s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 200.365066528s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:23:00 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:00 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:23:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 03 01:23:01 compute-0 ceph-mon[192821]: pgmap v238: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:01 compute-0 ceph-mon[192821]: osdmap e112: 3 total, 3 up, 3 in
Dec 03 01:23:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 03 01:23:01 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 03 01:23:01 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 113 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=112/113 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:23:01 compute-0 sshd[113879]: Timeout before authentication for connection from 101.126.54.245 to 38.102.83.36, pid = 218672
Dec 03 01:23:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 03 01:23:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 03 01:23:02 compute-0 ceph-mon[192821]: osdmap e113: 3 total, 3 up, 3 in
Dec 03 01:23:02 compute-0 ceph-mon[192821]: 3.1b scrub starts
Dec 03 01:23:02 compute-0 ceph-mon[192821]: 3.1b scrub ok
Dec 03 01:23:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 03 01:23:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 03 01:23:03 compute-0 ceph-mon[192821]: pgmap v241: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:03 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 03 01:23:03 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 03 01:23:04 compute-0 sshd-session[228001]: Invalid user vijay from 193.32.162.157 port 58998
Dec 03 01:23:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:04 compute-0 ceph-mon[192821]: 6.6 scrub starts
Dec 03 01:23:04 compute-0 ceph-mon[192821]: 6.6 scrub ok
Dec 03 01:23:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 03 01:23:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 03 01:23:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:05 compute-0 ceph-mon[192821]: 4.4 scrub starts
Dec 03 01:23:05 compute-0 ceph-mon[192821]: 4.4 scrub ok
Dec 03 01:23:05 compute-0 ceph-mon[192821]: pgmap v242: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:05 compute-0 podman[228034]: 2025-12-03 01:23:05.867490022 +0000 UTC m=+0.108663951 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:23:05 compute-0 podman[228036]: 2025-12-03 01:23:05.888220181 +0000 UTC m=+0.121869624 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:23:05 compute-0 podman[228035]: 2025-12-03 01:23:05.888312783 +0000 UTC m=+0.128449554 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec 03 01:23:05 compute-0 podman[228037]: 2025-12-03 01:23:05.937814571 +0000 UTC m=+0.160739990 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
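(Note: the four `health_status=healthy` events above come from podman's built-in healthchecks, configured via the `healthcheck` stanza visible in each container's `config_data`. The same probe can be triggered by hand with `podman healthcheck run`; a sketch using the container names from the log:)

```python
import subprocess

# `podman healthcheck run NAME` executes the container's configured check;
# exit status 0 means healthy, non-zero means the check failed.
for name in ["node_exporter", "ceilometer_agent_compute",
             "openstack_network_exporter", "ovn_controller"]:
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```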
Dec 03 01:23:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Dec 03 01:23:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec 03 01:23:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 03 01:23:06 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 03 01:23:06 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 03 01:23:06 compute-0 sshd-session[228118]: Invalid user temp from 34.66.72.251 port 50770
Dec 03 01:23:06 compute-0 sshd-session[228118]: Received disconnect from 34.66.72.251 port 50770:11: Bye Bye [preauth]
Dec 03 01:23:06 compute-0 sshd-session[228118]: Disconnected from invalid user temp 34.66.72.251 port 50770 [preauth]
Dec 03 01:23:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 03 01:23:06 compute-0 ceph-mon[192821]: 3.7 scrub starts
Dec 03 01:23:06 compute-0 ceph-mon[192821]: 3.7 scrub ok
Dec 03 01:23:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 03 01:23:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 03 01:23:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 03 01:23:06 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 03 01:23:06 compute-0 sshd-session[228001]: Connection closed by invalid user vijay 193.32.162.157 port 58998 [preauth]
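(Note: interleaved with the Ceph traffic are repeated failed SSH logins — `Invalid user vijay from 193.32.162.157`, `Invalid user temp from 34.66.72.251`, and root preauth disconnects. A minimal sketch tallying such attempts per source address from a saved journal extract; the filename `journal.txt` is hypothetical, and the regexes cover only the patterns visible in this log:)

```python
import re
from collections import Counter

# Matches the sshd lines seen above: "Invalid user NAME from IP port ..."
# and "Connection closed by authenticating user root IP port ... [preauth]".
PATTERNS = [
    re.compile(r"Invalid user \S+ from (\d+\.\d+\.\d+\.\d+)"),
    re.compile(r"Connection closed by authenticating user \S+ (\d+\.\d+\.\d+\.\d+)"),
]

hits = Counter()
with open("journal.txt") as f:  # hypothetical: this journal saved to a file
    for line in f:
        for pat in PATTERNS:
            m = pat.search(line)
            if m:
                hits[m.group(1)] += 1

for ip, n in hits.most_common():
    print(f"{ip}: {n} failed attempt(s)")
```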
Dec 03 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 114 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=114 pruub=14.875950813s) [0] r=-1 lpr=114 pi=[70,114)/1 crt=51'584 mlcod 0'0 active pruub 206.620376587s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 114 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=114 pruub=14.875855446s) [0] r=-1 lpr=114 pi=[70,114)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 206.620376587s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:23:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=114) [0] r=0 lpr=114 pi=[70,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:23:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 03 01:23:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 03 01:23:07 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 03 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 115 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=0 lpr=115 pi=[70,115)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 115 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=0 lpr=115 pi=[70,115)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:23:07 compute-0 ceph-mon[192821]: pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Dec 03 01:23:07 compute-0 ceph-mon[192821]: 7.18 scrub starts
Dec 03 01:23:07 compute-0 ceph-mon[192821]: 7.18 scrub ok
Dec 03 01:23:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 03 01:23:07 compute-0 ceph-mon[192821]: osdmap e114: 3 total, 3 up, 3 in
Dec 03 01:23:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:23:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Dec 03 01:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 03 01:23:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 03 01:23:08 compute-0 ceph-mon[192821]: osdmap e115: 3 total, 3 up, 3 in
Dec 03 01:23:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 03 01:23:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 03 01:23:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 03 01:23:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 116 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=116 pruub=11.509423256s) [1] r=-1 lpr=116 pi=[75,116)/1 crt=51'584 mlcod 0'0 active pruub 204.445098877s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 116 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=116 pruub=11.508624077s) [1] r=-1 lpr=116 pi=[75,116)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 204.445098877s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:23:08 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:23:08 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 03 01:23:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 03 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 116 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=115/116 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[70,115)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:23:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 03 01:23:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 03 01:23:09 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117) [0] r=0 lpr=117 pi=[70,117)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:09 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117) [0] r=0 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:23:09 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 03 01:23:09 compute-0 podman[228122]: 2025-12-03 01:23:09.891219467 +0000 UTC m=+0.143905179 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:23:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[75,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[75,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 03 01:23:09 compute-0 ceph-mon[192821]: pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Dec 03 01:23:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 03 01:23:09 compute-0 ceph-mon[192821]: osdmap e116: 3 total, 3 up, 3 in
Dec 03 01:23:09 compute-0 ceph-mon[192821]: 7.e scrub starts
Dec 03 01:23:09 compute-0 ceph-mon[192821]: 7.e scrub ok
Dec 03 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=115/116 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117 pruub=15.444588661s) [0] async=[0] r=-1 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 51'584 active pruub 209.424819946s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=0 lpr=117 pi=[75,117)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=115/116 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117 pruub=15.444419861s) [0] r=-1 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 209.424819946s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=0 lpr=117 pi=[75,117)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 03 01:23:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Dec 03 01:23:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Dec 03 01:23:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 03 01:23:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 03 01:23:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 03 01:23:10 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 118 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117) [0] r=0 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:23:10 compute-0 ceph-mon[192821]: osdmap e117: 3 total, 3 up, 3 in
Dec 03 01:23:10 compute-0 ceph-mon[192821]: 3.8 deep-scrub starts
Dec 03 01:23:10 compute-0 ceph-mon[192821]: 3.8 deep-scrub ok
Dec 03 01:23:10 compute-0 ceph-mon[192821]: osdmap e118: 3 total, 3 up, 3 in
Dec 03 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 03 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 03 01:23:11 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 03 01:23:11 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 03 01:23:11 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 118 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[75,117)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:23:11 compute-0 ceph-mon[192821]: pgmap v249: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:11 compute-0 ceph-mon[192821]: 3.1d scrub starts
Dec 03 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 03 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 03 01:23:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:12 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Dec 03 01:23:12 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Dec 03 01:23:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 03 01:23:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 03 01:23:12 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 03 01:23:12 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119) [1] r=0 lpr=119 pi=[75,119)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:12 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119) [1] r=0 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 03 01:23:12 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119 pruub=14.921385765s) [1] async=[1] r=-1 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 51'584 active pruub 211.960662842s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 03 01:23:12 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119 pruub=14.921197891s) [1] r=-1 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 211.960662842s@ mbc={}] state<Start>: transitioning to Stray
Dec 03 01:23:12 compute-0 ceph-mon[192821]: 3.1d scrub ok
Dec 03 01:23:12 compute-0 ceph-mon[192821]: 6.4 scrub starts
Dec 03 01:23:12 compute-0 ceph-mon[192821]: 6.4 scrub ok
Dec 03 01:23:12 compute-0 ceph-mon[192821]: 7.2 scrub starts
Dec 03 01:23:12 compute-0 ceph-mon[192821]: 7.2 scrub ok
Dec 03 01:23:13 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 03 01:23:13 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 03 01:23:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Dec 03 01:23:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Dec 03 01:23:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 03 01:23:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 03 01:23:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 03 01:23:13 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 120 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=119/120 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119) [1] r=0 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 03 01:23:13 compute-0 ceph-mon[192821]: pgmap v251: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:13 compute-0 ceph-mon[192821]: 4.7 deep-scrub starts
Dec 03 01:23:13 compute-0 ceph-mon[192821]: 4.7 deep-scrub ok
Dec 03 01:23:13 compute-0 ceph-mon[192821]: osdmap e119: 3 total, 3 up, 3 in
Dec 03 01:23:13 compute-0 ceph-mon[192821]: osdmap e120: 3 total, 3 up, 3 in
Dec 03 01:23:14 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Dec 03 01:23:14 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Dec 03 01:23:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:14 compute-0 podman[228140]: 2025-12-03 01:23:14.860947491 +0000 UTC m=+0.121238976 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc.)
Dec 03 01:23:15 compute-0 ceph-mon[192821]: 2.1f scrub starts
Dec 03 01:23:15 compute-0 ceph-mon[192821]: 2.1f scrub ok
Dec 03 01:23:15 compute-0 ceph-mon[192821]: 4.5 deep-scrub starts
Dec 03 01:23:15 compute-0 ceph-mon[192821]: 4.5 deep-scrub ok
Dec 03 01:23:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:16 compute-0 ceph-mon[192821]: 7.1a deep-scrub starts
Dec 03 01:23:16 compute-0 ceph-mon[192821]: 7.1a deep-scrub ok
Dec 03 01:23:16 compute-0 ceph-mon[192821]: pgmap v254: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:16 compute-0 sudo[226556]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:23:17 compute-0 sudo[228309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fioggldtehhjlrlmrombsbsralgcencv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724996.486154-137-253667871817743/AnsiballZ_command.py'
Dec 03 01:23:17 compute-0 sudo[228309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:17 compute-0 python3.9[228311]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:23:18 compute-0 ceph-mon[192821]: pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 03 01:23:18 compute-0 sshd-session[228312]: Connection closed by 118.194.250.60 port 53168
Dec 03 01:23:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Dec 03 01:23:18 compute-0 sshd-session[228120]: Connection closed by authenticating user root 193.32.162.157 port 41560 [preauth]
Dec 03 01:23:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 03 01:23:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 03 01:23:18 compute-0 sudo[228309]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:18 compute-0 sshd-session[228448]: Connection closed by 118.194.250.60 port 53482 [preauth]
Dec 03 01:23:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 03 01:23:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 03 01:23:19 compute-0 ceph-mon[192821]: 7.c scrub starts
Dec 03 01:23:19 compute-0 ceph-mon[192821]: 7.c scrub ok
Dec 03 01:23:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Dec 03 01:23:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Dec 03 01:23:19 compute-0 sudo[228617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krifatglyensfmmqjuqqsmhxdeqifcxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764724998.9326177-145-258455597589645/AnsiballZ_selinux.py'
Dec 03 01:23:19 compute-0 sudo[228617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:19 compute-0 podman[228577]: 2025-12-03 01:23:19.878942879 +0000 UTC m=+0.132739223 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:23:20 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 03 01:23:20 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 03 01:23:20 compute-0 ceph-mon[192821]: pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Dec 03 01:23:20 compute-0 ceph-mon[192821]: 7.1f scrub starts
Dec 03 01:23:20 compute-0 ceph-mon[192821]: 7.1f scrub ok
Dec 03 01:23:20 compute-0 ceph-mon[192821]: 3.1e scrub starts
Dec 03 01:23:20 compute-0 ceph-mon[192821]: 3.1e scrub ok
Dec 03 01:23:20 compute-0 sshd-session[228627]: Unable to negotiate with 118.194.250.60 port 53910: no matching host key type found. Their offer: ssh-rsa [preauth]
Dec 03 01:23:20 compute-0 python3.9[228626]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 03 01:23:20 compute-0 sudo[228617]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec 03 01:23:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:20 compute-0 sshd-session[228527]: Invalid user mcserver from 103.146.202.174 port 44466
Dec 03 01:23:21 compute-0 ceph-mon[192821]: 7.1b deep-scrub starts
Dec 03 01:23:21 compute-0 ceph-mon[192821]: 7.1b deep-scrub ok
Dec 03 01:23:21 compute-0 sshd-session[228527]: Received disconnect from 103.146.202.174 port 44466:11: Bye Bye [preauth]
Dec 03 01:23:21 compute-0 sshd-session[228527]: Disconnected from invalid user mcserver 103.146.202.174 port 44466 [preauth]
Dec 03 01:23:21 compute-0 sudo[228779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paudbjbmjemgsweslvtxzbeavwbbzudf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725000.7891643-156-10220005598095/AnsiballZ_command.py'
Dec 03 01:23:21 compute-0 sudo[228779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:21 compute-0 python3.9[228781]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 03 01:23:21 compute-0 sudo[228779]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:22 compute-0 ceph-mon[192821]: pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec 03 01:23:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 1 objects/s recovering
Dec 03 01:23:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Dec 03 01:23:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Dec 03 01:23:22 compute-0 sudo[228931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvxuzaedhysihtvegyzwrcvwmhpdwdej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725002.0222363-164-261417740275064/AnsiballZ_file.py'
Dec 03 01:23:22 compute-0 sudo[228931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:22 compute-0 python3.9[228933]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:23:22 compute-0 sudo[228931]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Dec 03 01:23:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Dec 03 01:23:23 compute-0 ceph-mon[192821]: 10.3 deep-scrub starts
Dec 03 01:23:23 compute-0 ceph-mon[192821]: 10.3 deep-scrub ok
Dec 03 01:23:23 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Dec 03 01:23:23 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Dec 03 01:23:24 compute-0 sudo[229083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqiwdvafwqluqcbojudwbslnmzyonizn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725003.261958-172-38770481904613/AnsiballZ_mount.py'
Dec 03 01:23:24 compute-0 sudo[229083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:24 compute-0 ceph-mon[192821]: pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 1 objects/s recovering
Dec 03 01:23:24 compute-0 ceph-mon[192821]: 3.1f deep-scrub starts
Dec 03 01:23:24 compute-0 ceph-mon[192821]: 3.1f deep-scrub ok
Dec 03 01:23:24 compute-0 ceph-mon[192821]: 10.5 deep-scrub starts
Dec 03 01:23:24 compute-0 ceph-mon[192821]: 10.5 deep-scrub ok
Dec 03 01:23:24 compute-0 python3.9[229085]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 03 01:23:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Dec 03 01:23:24 compute-0 sudo[229083]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:24 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 03 01:23:24 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 03 01:23:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:25 compute-0 sudo[229235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdfanhannjrmgbhdmgyjowswzkgmxosj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725005.1481283-200-66722537532354/AnsiballZ_file.py'
Dec 03 01:23:25 compute-0 sudo[229235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:25 compute-0 python3.9[229237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:23:25 compute-0 sudo[229235]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:26 compute-0 ceph-mon[192821]: pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Dec 03 01:23:26 compute-0 ceph-mon[192821]: 7.f scrub starts
Dec 03 01:23:26 compute-0 ceph-mon[192821]: 7.f scrub ok
Dec 03 01:23:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Dec 03 01:23:26 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 03 01:23:26 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 03 01:23:26 compute-0 sudo[229387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfxwregailwzsngeudecjltcxmtiyeng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725006.2015846-208-136711068649691/AnsiballZ_stat.py'
Dec 03 01:23:26 compute-0 sudo[229387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:27 compute-0 python3.9[229389]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:23:27 compute-0 sudo[229387]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:27 compute-0 ceph-mon[192821]: pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Dec 03 01:23:27 compute-0 sudo[229465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-didztvehanfvojyeolinjguuduwxghxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725006.2015846-208-136711068649691/AnsiballZ_file.py'
Dec 03 01:23:27 compute-0 sudo[229465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:27 compute-0 python3.9[229467]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:23:27 compute-0 sudo[229465]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:27 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Dec 03 01:23:27 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:23:28
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'backups', 'vms', 'default.rgw.meta']
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:23:28 compute-0 ceph-mon[192821]: 3.6 scrub starts
Dec 03 01:23:28 compute-0 ceph-mon[192821]: 3.6 scrub ok
Dec 03 01:23:28 compute-0 ceph-mon[192821]: 10.a deep-scrub starts
Dec 03 01:23:28 compute-0 ceph-mon[192821]: 10.a deep-scrub ok
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:23:28 compute-0 sudo[229617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gswtozlcwyuznvpdkbzqfihzktaeazsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725008.3992605-229-225282784422486/AnsiballZ_stat.py'
Dec 03 01:23:28 compute-0 sudo[229617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 03 01:23:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 03 01:23:29 compute-0 python3.9[229619]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:23:29 compute-0 sudo[229617]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:29 compute-0 ceph-mon[192821]: pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:29 compute-0 ceph-mon[192821]: 10.c scrub starts
Dec 03 01:23:29 compute-0 ceph-mon[192821]: 10.c scrub ok
Dec 03 01:23:29 compute-0 podman[158098]: time="2025-12-03T01:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Dec 03 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 03 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 03 01:23:30 compute-0 sshd-session[228450]: Connection closed by authenticating user root 193.32.162.157 port 49848 [preauth]
Dec 03 01:23:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:30 compute-0 ceph-mon[192821]: 10.18 scrub starts
Dec 03 01:23:30 compute-0 ceph-mon[192821]: 10.18 scrub ok
Dec 03 01:23:30 compute-0 sudo[229772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjnynfylhyllufvehzwffjhnyycljkds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725009.8109632-242-4553248195356/AnsiballZ_getent.py'
Dec 03 01:23:30 compute-0 sudo[229772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:30 compute-0 python3.9[229774]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 03 01:23:30 compute-0 sudo[229772]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:30 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 03 01:23:30 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 03 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 03 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 03 01:23:31 compute-0 ceph-mon[192821]: pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:31 compute-0 ceph-mon[192821]: 10.1b scrub starts
Dec 03 01:23:31 compute-0 ceph-mon[192821]: 10.1b scrub ok
Dec 03 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:23:31 compute-0 sudo[229925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdpwumrsbjpaecnvfxfldtrxmorzyyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725011.1848376-252-253030079722052/AnsiballZ_getent.py'
Dec 03 01:23:31 compute-0 sudo[229925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 03 01:23:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 03 01:23:32 compute-0 python3.9[229927]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 03 01:23:32 compute-0 sudo[229925]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:32 compute-0 ceph-mon[192821]: 6.2 scrub starts
Dec 03 01:23:32 compute-0 ceph-mon[192821]: 6.2 scrub ok
Dec 03 01:23:32 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 03 01:23:32 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 03 01:23:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 03 01:23:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 03 01:23:33 compute-0 sudo[230079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwjlkwbubbjnhzbpcngjgulgkamzthxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725012.3819647-260-223599679838515/AnsiballZ_group.py'
Dec 03 01:23:33 compute-0 sudo[230079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:33 compute-0 python3.9[230081]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 03 01:23:33 compute-0 sudo[230079]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:33 compute-0 ceph-mon[192821]: 6.b scrub starts
Dec 03 01:23:33 compute-0 ceph-mon[192821]: 6.b scrub ok
Dec 03 01:23:33 compute-0 ceph-mon[192821]: pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:33 compute-0 ceph-mon[192821]: 10.d scrub starts
Dec 03 01:23:33 compute-0 ceph-mon[192821]: 10.d scrub ok
Dec 03 01:23:34 compute-0 sudo[230233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgeuzhrfmmgcysvghfmjvgcvincgzgsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725013.6980367-269-117447227934625/AnsiballZ_file.py'
Dec 03 01:23:34 compute-0 sudo[230233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:34 compute-0 sshd-session[230136]: Invalid user userroot from 173.249.50.59 port 46820
Dec 03 01:23:34 compute-0 ceph-mon[192821]: 10.1c scrub starts
Dec 03 01:23:34 compute-0 ceph-mon[192821]: 10.1c scrub ok
Dec 03 01:23:34 compute-0 sshd-session[230136]: Received disconnect from 173.249.50.59 port 46820:11: Bye Bye [preauth]
Dec 03 01:23:34 compute-0 sshd-session[230136]: Disconnected from invalid user userroot 173.249.50.59 port 46820 [preauth]
Dec 03 01:23:34 compute-0 python3.9[230235]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 03 01:23:34 compute-0 sudo[230233]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:35 compute-0 ceph-mon[192821]: pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:35 compute-0 sudo[230385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gztubgglyyewaxevfsewmbmgexylrnjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725014.963867-280-18219584266939/AnsiballZ_dnf.py'
Dec 03 01:23:35 compute-0 sudo[230385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:35 compute-0 python3.9[230387]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:23:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:36 compute-0 systemd[194622]: Created slice User Background Tasks Slice.
Dec 03 01:23:36 compute-0 systemd[194622]: Starting Cleanup of User's Temporary Files and Directories...
Dec 03 01:23:36 compute-0 systemd[194622]: Finished Cleanup of User's Temporary Files and Directories.
Dec 03 01:23:36 compute-0 podman[230395]: 2025-12-03 01:23:36.873343067 +0000 UTC m=+0.102647203 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 03 01:23:36 compute-0 podman[230390]: 2025-12-03 01:23:36.888899059 +0000 UTC m=+0.142579974 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:23:36 compute-0 podman[230391]: 2025-12-03 01:23:36.890194491 +0000 UTC m=+0.126159350 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec 03 01:23:36 compute-0 podman[230398]: 2025-12-03 01:23:36.915206905 +0000 UTC m=+0.139745044 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 03 01:23:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 03 01:23:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 03 01:23:37 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 03 01:23:37 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 03 01:23:37 compute-0 sudo[230385]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:37 compute-0 ceph-mon[192821]: pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:37 compute-0 ceph-mon[192821]: 10.1d scrub starts
Dec 03 01:23:37 compute-0 ceph-mon[192821]: 10.1d scrub ok
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:23:38 compute-0 sudo[230620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwgclcnosleljxxaodzjfckdiwozyzsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725017.5372477-288-169075166992550/AnsiballZ_file.py'
Dec 03 01:23:38 compute-0 sudo[230620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:38 compute-0 python3.9[230622]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:23:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:38 compute-0 sudo[230620]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:38 compute-0 ceph-mon[192821]: 6.e scrub starts
Dec 03 01:23:38 compute-0 ceph-mon[192821]: 6.e scrub ok
Dec 03 01:23:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 03 01:23:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 03 01:23:39 compute-0 sudo[230772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqaovloijxqhazgpkbpedimmewzhqhok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725018.6743536-296-256844594929435/AnsiballZ_stat.py'
Dec 03 01:23:39 compute-0 sudo[230772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:39 compute-0 python3.9[230774]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:23:39 compute-0 ceph-mon[192821]: pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:39 compute-0 sudo[230772]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:39 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Dec 03 01:23:39 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Dec 03 01:23:40 compute-0 sudo[230850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkpqwetsmxitzomztgalbospsvwbgxdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725018.6743536-296-256844594929435/AnsiballZ_file.py'
Dec 03 01:23:40 compute-0 sudo[230850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:40 compute-0 podman[230852]: 2025-12-03 01:23:40.174892749 +0000 UTC m=+0.125151686 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:23:40 compute-0 python3.9[230853]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:23:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:40 compute-0 sudo[230850]: pam_unix(sudo:session): session closed for user root
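The stat/file pair at 01:23:39-01:23:40 is Ansible's usual idempotency pattern for a templated file: checksum the destination first, then enforce owner, group, mode 0644, and SELinux type etc_t on /etc/modules-load.d/99-edpm.conf without rewriting its content. A minimal sketch of the checksum half (sha1, as checksum_algorithm=sha1 requests; the path is copied from the log):

    # Sketch: compute the sha1 checksum that ansible's stat module reports
    # when invoked with get_checksum=True, checksum_algorithm=sha1.
    import hashlib

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha1_of("/etc/modules-load.d/99-edpm.conf"))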
Dec 03 01:23:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:40 compute-0 ceph-mon[192821]: 4.9 scrub starts
Dec 03 01:23:40 compute-0 ceph-mon[192821]: 4.9 scrub ok
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.968 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.969 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.973 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
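Everything from 01:23:40.968 to here is one complete ceilometer polling cycle: every pollster is registered against a single shared ThreadPoolExecutor (sized below the pollster count, hence the note at the top of the cycle), the local_instances discovery runs once and returns no VMs on this host, so each meter is skipped and then marked finished. A minimal sketch of that dispatch pattern, assuming nothing about ceilometer's real classes:

    # Sketch: more tasks than workers on one shared pool, as the manager
    # logs describe; with max_workers=1 the pollsters run sequentially.
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter: str, instances: list) -> str:
        # Mirrors the log: with no discovered resources the meter is skipped.
        if not instances:
            return f"skip {meter}: no resources found this cycle"
        return f"polled {meter} on {len(instances)} instances"

    discovered: list = []  # local_instances discovery returned no VMs
    with ThreadPoolExecutor(max_workers=1) as pool:
        futures = [pool.submit(poll, m, discovered)
                   for m in ("cpu", "memory.usage", "disk.device.usage")]
        for f in futures:
            print(f.result())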
Dec 03 01:23:41 compute-0 sudo[231020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lahflfqktcbjyeivcnzqpxybhdxioyjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725020.6489358-309-186515479237400/AnsiballZ_stat.py'
Dec 03 01:23:41 compute-0 sudo[231020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:41 compute-0 python3.9[231022]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:23:41 compute-0 ceph-mon[192821]: 4.8 deep-scrub starts
Dec 03 01:23:41 compute-0 ceph-mon[192821]: 4.8 deep-scrub ok
Dec 03 01:23:41 compute-0 ceph-mon[192821]: pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:41 compute-0 sudo[231020]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:41 compute-0 sudo[231098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkylkkiufwjkzacqjmnaifmyyxeaeabq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725020.6489358-309-186515479237400/AnsiballZ_file.py'
Dec 03 01:23:41 compute-0 sudo[231098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:42 compute-0 python3.9[231100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:23:42 compute-0 sudo[231098]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:42 compute-0 sshd-session[229748]: Connection closed by authenticating user root 193.32.162.157 port 53376 [preauth]
Dec 03 01:23:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 03 01:23:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 03 01:23:43 compute-0 sudo[231251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcpnjvzcvhdyexohljllmywwauddhnnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725022.616321-324-51433407423338/AnsiballZ_dnf.py'
Dec 03 01:23:43 compute-0 sudo[231251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:43 compute-0 python3.9[231253]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:23:43 compute-0 ceph-mon[192821]: pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 03 01:23:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 03 01:23:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:44 compute-0 ceph-mon[192821]: 6.1 scrub starts
Dec 03 01:23:44 compute-0 ceph-mon[192821]: 6.1 scrub ok
Dec 03 01:23:44 compute-0 ceph-mon[192821]: 10.1e scrub starts
Dec 03 01:23:44 compute-0 ceph-mon[192821]: 10.1e scrub ok
Dec 03 01:23:44 compute-0 sudo[231251]: pam_unix(sudo:session): session closed for user root
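The dnf task at 01:23:43 ensures tuned and tuned-profiles-cpu-partitioning are present; with state=present the module is a no-op when both packages are already installed. A rough equivalent of that presence check via rpm (illustrative only; the real module drives the dnf Python API rather than shelling out):

    # Sketch: the presence check state=present implies, done with rpm -q;
    # exit code 0 means the package is already installed.
    import subprocess

    for pkg in ("tuned", "tuned-profiles-cpu-partitioning"):
        rc = subprocess.run(["rpm", "-q", pkg], capture_output=True).returncode
        print(pkg, "already installed" if rc == 0 else "would be installed")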
Dec 03 01:23:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 03 01:23:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 03 01:23:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:45 compute-0 ceph-mon[192821]: pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:45 compute-0 podman[231379]: 2025-12-03 01:23:45.873478602 +0000 UTC m=+0.124122330 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, container_name=kepler, name=ubi9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4)
Dec 03 01:23:46 compute-0 python3.9[231421]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:23:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:46 compute-0 ceph-mon[192821]: 10.1f scrub starts
Dec 03 01:23:46 compute-0 ceph-mon[192821]: 10.1f scrub ok
Dec 03 01:23:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Dec 03 01:23:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Dec 03 01:23:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 03 01:23:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 03 01:23:47 compute-0 python3.9[231576]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 03 01:23:47 compute-0 ceph-mon[192821]: pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 03 01:23:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 03 01:23:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 03 01:23:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 03 01:23:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 03 01:23:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 03 01:23:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:48 compute-0 python3.9[231726]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:23:48 compute-0 ceph-mon[192821]: 6.17 deep-scrub starts
Dec 03 01:23:48 compute-0 ceph-mon[192821]: 6.17 deep-scrub ok
Dec 03 01:23:48 compute-0 ceph-mon[192821]: 8.15 scrub starts
Dec 03 01:23:48 compute-0 ceph-mon[192821]: 8.15 scrub ok
Dec 03 01:23:48 compute-0 ceph-mon[192821]: 10.4 scrub starts
Dec 03 01:23:48 compute-0 ceph-mon[192821]: 10.4 scrub ok
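The scrub chatter above and below follows a fixed shape: each ceph-osd logs "<pgid> scrub starts" then "<pgid> scrub ok" (deep-scrub additionally checksums object data rather than only comparing metadata), and the mon re-emits the same events into the cluster log a moment later. The pgid is the pool ID followed by a hex PG number, so 4.8 and 10.1e belong to pools 4 and 10. A minimal sketch, assuming a journal dump saved in this file's format (journal.txt is an illustrative name), that pairs starts/ok events to flag scrubs that never completed:

    # Sketch: pair "(deep-)scrub starts" with "... ok" per placement group
    # to spot scrubs that never finished in the captured window.
    import re

    pattern = re.compile(r"(\d+\.[0-9a-f]+) (deep-scrub|scrub) (starts|ok)")
    in_flight = set()
    with open("journal.txt") as f:
        for line in f:
            m = pattern.search(line)
            if not m:
                continue
            pg = (m.group(1), m.group(2))
            if m.group(3) == "starts":
                in_flight.add(pg)
            else:
                in_flight.discard(pg)
    print("unfinished scrubs:", in_flight or "none")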
Dec 03 01:23:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 03 01:23:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 03 01:23:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 03 01:23:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 03 01:23:49 compute-0 sshd-session[231751]: Invalid user foundry from 80.253.31.232 port 56270
Dec 03 01:23:49 compute-0 ceph-mon[192821]: 4.14 scrub starts
Dec 03 01:23:49 compute-0 ceph-mon[192821]: 4.14 scrub ok
Dec 03 01:23:49 compute-0 ceph-mon[192821]: 11.15 scrub starts
Dec 03 01:23:49 compute-0 ceph-mon[192821]: 11.15 scrub ok
Dec 03 01:23:49 compute-0 ceph-mon[192821]: pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:49 compute-0 ceph-mon[192821]: 10.9 scrub starts
Dec 03 01:23:49 compute-0 ceph-mon[192821]: 10.9 scrub ok
Dec 03 01:23:49 compute-0 sshd-session[231751]: Received disconnect from 80.253.31.232 port 56270:11: Bye Bye [preauth]
Dec 03 01:23:49 compute-0 sshd-session[231751]: Disconnected from invalid user foundry 80.253.31.232 port 56270 [preauth]
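The sshd-session lines from 80.253.31.232 (invalid user foundry) and 193.32.162.157 (root attempts) are routine Internet brute-force noise against the node, rejected at the preauth stage and interleaved with the deployment. A small sketch for tallying such attempts per source address from a saved journal (journal.txt is an illustrative name):

    # Sketch: tally "Invalid user" probes per source IP from a journal
    # dump in this file's format.
    import collections
    import re

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    per_ip = collections.Counter()
    with open("journal.txt") as f:
        for line in f:
            m = pattern.search(line)
            if m:
                per_ip[m.group(2)] += 1
    for ip, count in per_ip.most_common():
        print(ip, count)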
Dec 03 01:23:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 03 01:23:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 03 01:23:49 compute-0 sudo[231879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwvgvykjxohhtamttaralsqyktjyovpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725028.9978106-365-100048105466655/AnsiballZ_systemd.py'
Dec 03 01:23:49 compute-0 sudo[231879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:50 compute-0 python3.9[231881]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:23:50 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 03 01:23:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:50 compute-0 podman[231883]: 2025-12-03 01:23:50.379299947 +0000 UTC m=+0.124660604 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:23:50 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 03 01:23:50 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 03 01:23:50 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 03 01:23:50 compute-0 ceph-mon[192821]: 4.10 scrub starts
Dec 03 01:23:50 compute-0 ceph-mon[192821]: 4.10 scrub ok
Dec 03 01:23:50 compute-0 ceph-mon[192821]: 10.8 scrub starts
Dec 03 01:23:50 compute-0 ceph-mon[192821]: 10.8 scrub ok
Dec 03 01:23:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Dec 03 01:23:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Dec 03 01:23:50 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 03 01:23:50 compute-0 sudo[231879]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:51 compute-0 sshd-session[231125]: Invalid user transfer from 193.32.162.157 port 35664
Dec 03 01:23:51 compute-0 ceph-mon[192821]: pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:51 compute-0 ceph-mon[192821]: 10.1 deep-scrub starts
Dec 03 01:23:51 compute-0 ceph-mon[192821]: 10.1 deep-scrub ok
Dec 03 01:23:51 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 03 01:23:51 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 03 01:23:52 compute-0 python3.9[232066]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 03 01:23:52 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 03 01:23:52 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 03 01:23:52 compute-0 sudo[232069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:52 compute-0 sudo[232069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:52 compute-0 sudo[232069]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:52 compute-0 sudo[232116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:23:52 compute-0 sudo[232116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:52 compute-0 sudo[232116]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:52 compute-0 sudo[232141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:52 compute-0 sudo[232141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:52 compute-0 sudo[232141]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:52 compute-0 sudo[232166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:23:52 compute-0 sudo[232166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:52 compute-0 ceph-mon[192821]: 10.15 scrub starts
Dec 03 01:23:52 compute-0 ceph-mon[192821]: 10.15 scrub ok
Dec 03 01:23:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 03 01:23:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 03 01:23:53 compute-0 sudo[232166]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:23:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ef044f53-83e6-4493-a979-c18aaf0720e1 does not exist
Dec 03 01:23:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f127b0c4-f378-4b12-8718-44e035b60329 does not exist
Dec 03 01:23:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 54816d78-661a-4bf4-a89f-704629f58347 does not exist
Dec 03 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:23:53 compute-0 sudo[232223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:53 compute-0 sudo[232223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:53 compute-0 sudo[232223]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:53 compute-0 sudo[232248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:23:53 compute-0 sudo[232248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:53 compute-0 sudo[232248]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Dec 03 01:23:53 compute-0 ceph-mon[192821]: 11.d scrub starts
Dec 03 01:23:53 compute-0 ceph-mon[192821]: 11.d scrub ok
Dec 03 01:23:53 compute-0 ceph-mon[192821]: pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:23:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Dec 03 01:23:53 compute-0 sudo[232273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:53 compute-0 sudo[232273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:53 compute-0 sudo[232273]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:53 compute-0 sudo[232298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:23:53 compute-0 sudo[232298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:54 compute-0 sshd-session[231125]: Connection closed by invalid user transfer 193.32.162.157 port 35664 [preauth]
Dec 03 01:23:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.419839981 +0000 UTC m=+0.088771532 container create a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.384960704 +0000 UTC m=+0.053892355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:23:54 compute-0 systemd[1]: Started libpod-conmon-a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553.scope.
Dec 03 01:23:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.570917572 +0000 UTC m=+0.239849203 container init a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.594220025 +0000 UTC m=+0.263151566 container start a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.598699545 +0000 UTC m=+0.267631186 container attach a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:23:54 compute-0 adoring_tu[232440]: 167 167
Dec 03 01:23:54 compute-0 systemd[1]: libpod-a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553.scope: Deactivated successfully.
Dec 03 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.604931018 +0000 UTC m=+0.273862649 container died a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f63e05110fad165aad105e48689c650f0e3d2ed12c98f27dc7d1fa5620554f-merged.mount: Deactivated successfully.
Dec 03 01:23:54 compute-0 ceph-mon[192821]: 8.2 scrub starts
Dec 03 01:23:54 compute-0 ceph-mon[192821]: 8.2 scrub ok
Dec 03 01:23:54 compute-0 ceph-mon[192821]: 10.7 deep-scrub starts
Dec 03 01:23:54 compute-0 ceph-mon[192821]: 10.7 deep-scrub ok
Dec 03 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.673670836 +0000 UTC m=+0.342602397 container remove a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:23:54 compute-0 systemd[1]: libpod-conmon-a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553.scope: Deactivated successfully.
Dec 03 01:23:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 03 01:23:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 03 01:23:54 compute-0 rsyslogd[188612]: imjournal from <compute-0:ceph-osd>: begin to drop messages due to rate-limiting
Dec 03 01:23:54 compute-0 sudo[232526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbnyniitilxxddaqiejogxijoaysefox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725034.2820263-422-49423261780874/AnsiballZ_systemd.py'
Dec 03 01:23:54 compute-0 sudo[232526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:54 compute-0 podman[232524]: 2025-12-03 01:23:54.932420752 +0000 UTC m=+0.093704642 container create 0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:23:54 compute-0 podman[232524]: 2025-12-03 01:23:54.898593132 +0000 UTC m=+0.059877082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:23:55 compute-0 systemd[1]: Started libpod-conmon-0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6.scope.
Dec 03 01:23:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590850dd4acc82d2ec88e74bca0c0738f86bfa8d91910e3297c40871f4089669/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590850dd4acc82d2ec88e74bca0c0738f86bfa8d91910e3297c40871f4089669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590850dd4acc82d2ec88e74bca0c0738f86bfa8d91910e3297c40871f4089669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590850dd4acc82d2ec88e74bca0c0738f86bfa8d91910e3297c40871f4089669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590850dd4acc82d2ec88e74bca0c0738f86bfa8d91910e3297c40871f4089669/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:55 compute-0 podman[232524]: 2025-12-03 01:23:55.102847479 +0000 UTC m=+0.264131369 container init 0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:23:55 compute-0 podman[232524]: 2025-12-03 01:23:55.126196443 +0000 UTC m=+0.287480333 container start 0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:23:55 compute-0 podman[232524]: 2025-12-03 01:23:55.132259981 +0000 UTC m=+0.293543881 container attach 0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:23:55 compute-0 python3.9[232533]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:23:55 compute-0 sudo[232526]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:23:55 compute-0 ceph-mon[192821]: pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:55 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 03 01:23:55 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 03 01:23:56 compute-0 sudo[232713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowigauomyxcqzgplvfufigufjzztijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725035.586912-422-212262310905300/AnsiballZ_systemd.py'
Dec 03 01:23:56 compute-0 sudo[232713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:23:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:56 compute-0 vigilant_brattain[232543]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:23:56 compute-0 vigilant_brattain[232543]: --> relative data size: 1.0
Dec 03 01:23:56 compute-0 vigilant_brattain[232543]: --> All data devices are unavailable
Dec 03 01:23:56 compute-0 systemd[1]: libpod-0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6.scope: Deactivated successfully.
Dec 03 01:23:56 compute-0 systemd[1]: libpod-0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6.scope: Consumed 1.172s CPU time.
Dec 03 01:23:56 compute-0 podman[232524]: 2025-12-03 01:23:56.378222958 +0000 UTC m=+1.539506848 container died 0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-590850dd4acc82d2ec88e74bca0c0738f86bfa8d91910e3297c40871f4089669-merged.mount: Deactivated successfully.
Dec 03 01:23:56 compute-0 python3.9[232715]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:23:56 compute-0 podman[232524]: 2025-12-03 01:23:56.512601809 +0000 UTC m=+1.673885699 container remove 0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:23:56 compute-0 systemd[1]: libpod-conmon-0428af1fc87cc5a31ba9940fe3e20ecde084d2777950a3343958802167cc9ed6.scope: Deactivated successfully.
Dec 03 01:23:56 compute-0 sudo[232298]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:56 compute-0 sudo[232713]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:56 compute-0 ceph-mon[192821]: 6.1d scrub starts
Dec 03 01:23:56 compute-0 ceph-mon[192821]: 6.1d scrub ok
Dec 03 01:23:56 compute-0 ceph-mon[192821]: 10.e scrub starts
Dec 03 01:23:56 compute-0 ceph-mon[192821]: 10.e scrub ok
Dec 03 01:23:56 compute-0 sudo[232738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:56 compute-0 sudo[232738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:56 compute-0 sudo[232738]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:56 compute-0 sudo[232787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:23:56 compute-0 sudo[232787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:56 compute-0 sudo[232787]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:56 compute-0 sudo[232813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:56 compute-0 sudo[232813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:56 compute-0 sudo[232813]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:57 compute-0 sshd-session[224587]: Connection closed by 192.168.122.30 port 54440
Dec 03 01:23:57 compute-0 sudo[232838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:23:57 compute-0 sudo[232838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:57 compute-0 sshd-session[224584]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:23:57 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 03 01:23:57 compute-0 systemd[1]: session-41.scope: Consumed 1min 24.681s CPU time.
Dec 03 01:23:57 compute-0 systemd-logind[800]: Session 41 logged out. Waiting for processes to exit.
Dec 03 01:23:57 compute-0 systemd-logind[800]: Removed session 41.
Dec 03 01:23:57 compute-0 podman[232902]: 2025-12-03 01:23:57.65087609 +0000 UTC m=+0.078069289 container create 84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:23:57 compute-0 podman[232902]: 2025-12-03 01:23:57.61788113 +0000 UTC m=+0.045074369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:23:57 compute-0 ceph-mon[192821]: pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:57 compute-0 systemd[1]: Started libpod-conmon-84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691.scope.
Dec 03 01:23:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:23:57 compute-0 podman[232902]: 2025-12-03 01:23:57.809366124 +0000 UTC m=+0.236559323 container init 84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:23:57 compute-0 podman[232902]: 2025-12-03 01:23:57.830715158 +0000 UTC m=+0.257908357 container start 84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:23:57 compute-0 podman[232902]: 2025-12-03 01:23:57.837757301 +0000 UTC m=+0.264950490 container attach 84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:23:57 compute-0 friendly_newton[232918]: 167 167
Dec 03 01:23:57 compute-0 systemd[1]: libpod-84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691.scope: Deactivated successfully.
Dec 03 01:23:57 compute-0 podman[232902]: 2025-12-03 01:23:57.843613845 +0000 UTC m=+0.270807014 container died 84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec 03 01:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d20ed7d3bb00a44040048c5610e305d895c22842245ce009ecace83cc561ed-merged.mount: Deactivated successfully.
Dec 03 01:23:57 compute-0 podman[232902]: 2025-12-03 01:23:57.899682722 +0000 UTC m=+0.326875941 container remove 84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_newton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Dec 03 01:23:57 compute-0 systemd[1]: libpod-conmon-84b8fe075d8d0f94ea4624973ffc1415fd22029e2aa9e3e0a87bad4bcd087691.scope: Deactivated successfully.
Dec 03 01:23:58 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 03 01:23:58 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 03 01:23:58 compute-0 podman[232940]: 2025-12-03 01:23:58.166725392 +0000 UTC m=+0.081269807 container create 2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:23:58 compute-0 podman[232940]: 2025-12-03 01:23:58.132217084 +0000 UTC m=+0.046761559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:23:58 compute-0 systemd[1]: Started libpod-conmon-2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5.scope.
Dec 03 01:23:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53825bbffb372808c71cf47f775b0b596ce3ae8fbc0c7308229f74c19e329bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53825bbffb372808c71cf47f775b0b596ce3ae8fbc0c7308229f74c19e329bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53825bbffb372808c71cf47f775b0b596ce3ae8fbc0c7308229f74c19e329bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53825bbffb372808c71cf47f775b0b596ce3ae8fbc0c7308229f74c19e329bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:23:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:58 compute-0 podman[232940]: 2025-12-03 01:23:58.363308781 +0000 UTC m=+0.277853196 container init 2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:23:58 compute-0 podman[232940]: 2025-12-03 01:23:58.374470485 +0000 UTC m=+0.289014880 container start 2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:23:58 compute-0 podman[232940]: 2025-12-03 01:23:58.380434332 +0000 UTC m=+0.294978747 container attach 2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gould, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:23:58 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 03 01:23:58 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 03 01:23:58 compute-0 ceph-mon[192821]: 8.d scrub starts
Dec 03 01:23:58 compute-0 ceph-mon[192821]: 8.d scrub ok
Dec 03 01:23:59 compute-0 peaceful_gould[232957]: {
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:     "0": [
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:         {
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "devices": [
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "/dev/loop3"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             ],
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_name": "ceph_lv0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_size": "21470642176",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "name": "ceph_lv0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "tags": {
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cluster_name": "ceph",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.crush_device_class": "",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.encrypted": "0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osd_id": "0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.type": "block",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.vdo": "0"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             },
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "type": "block",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "vg_name": "ceph_vg0"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:         }
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:     ],
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:     "1": [
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:         {
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "devices": [
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "/dev/loop4"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             ],
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_name": "ceph_lv1",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_size": "21470642176",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "name": "ceph_lv1",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "tags": {
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cluster_name": "ceph",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.crush_device_class": "",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.encrypted": "0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osd_id": "1",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.type": "block",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.vdo": "0"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             },
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "type": "block",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "vg_name": "ceph_vg1"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:         }
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:     ],
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:     "2": [
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:         {
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "devices": [
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "/dev/loop5"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             ],
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_name": "ceph_lv2",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_size": "21470642176",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "name": "ceph_lv2",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "tags": {
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.cluster_name": "ceph",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.crush_device_class": "",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.encrypted": "0",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osd_id": "2",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.type": "block",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:                 "ceph.vdo": "0"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             },
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "type": "block",
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:             "vg_name": "ceph_vg2"
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:         }
Dec 03 01:23:59 compute-0 peaceful_gould[232957]:     ]
Dec 03 01:23:59 compute-0 peaceful_gould[232957]: }
Dec 03 01:23:59 compute-0 systemd[1]: libpod-2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5.scope: Deactivated successfully.
Dec 03 01:23:59 compute-0 podman[232940]: 2025-12-03 01:23:59.234710306 +0000 UTC m=+1.149254681 container died 2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d53825bbffb372808c71cf47f775b0b596ce3ae8fbc0c7308229f74c19e329bc-merged.mount: Deactivated successfully.
Dec 03 01:23:59 compute-0 podman[232940]: 2025-12-03 01:23:59.332475868 +0000 UTC m=+1.247020253 container remove 2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gould, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:23:59 compute-0 systemd[1]: libpod-conmon-2bee1f3c9c3455b0f8aa4f07bc742281eee684aede368856f254ea6203a147e5.scope: Deactivated successfully.
Dec 03 01:23:59 compute-0 sudo[232838]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:59 compute-0 sudo[232978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:59 compute-0 sudo[232978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:59 compute-0 sudo[232978]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:59 compute-0 sudo[233003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:23:59 compute-0 sudo[233003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:59 compute-0 sudo[233003]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:59 compute-0 ceph-mon[192821]: pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:23:59 compute-0 ceph-mon[192821]: 10.16 scrub starts
Dec 03 01:23:59 compute-0 ceph-mon[192821]: 10.16 scrub ok
Dec 03 01:23:59 compute-0 podman[158098]: time="2025-12-03T01:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:23:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:23:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6815 "" "Go-http-client/1.1"
Dec 03 01:23:59 compute-0 sudo[233028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:23:59 compute-0 sudo[233028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:23:59 compute-0 sudo[233028]: pam_unix(sudo:session): session closed for user root
Dec 03 01:23:59 compute-0 sudo[233053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:23:59 compute-0 sudo[233053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:24:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:00 compute-0 podman[233115]: 2025-12-03 01:24:00.45287137 +0000 UTC m=+0.086415064 container create b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:24:00 compute-0 podman[233115]: 2025-12-03 01:24:00.419291275 +0000 UTC m=+0.052835039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:24:00 compute-0 systemd[1]: Started libpod-conmon-b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb.scope.
Dec 03 01:24:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:24:00 compute-0 podman[233115]: 2025-12-03 01:24:00.58883705 +0000 UTC m=+0.222380824 container init b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:24:00 compute-0 podman[233115]: 2025-12-03 01:24:00.609677792 +0000 UTC m=+0.243221506 container start b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:24:00 compute-0 podman[233115]: 2025-12-03 01:24:00.616776016 +0000 UTC m=+0.250319790 container attach b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:24:00 compute-0 sad_gates[233131]: 167 167
Dec 03 01:24:00 compute-0 systemd[1]: libpod-b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb.scope: Deactivated successfully.
Dec 03 01:24:00 compute-0 podman[233115]: 2025-12-03 01:24:00.624001524 +0000 UTC m=+0.257545238 container died b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-29ca31bc3a00951933cbb4aaec5f9621e092360c4c303726e6f03c28691362ff-merged.mount: Deactivated successfully.
Dec 03 01:24:00 compute-0 podman[233115]: 2025-12-03 01:24:00.706406408 +0000 UTC m=+0.339950132 container remove b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 01:24:00 compute-0 systemd[1]: libpod-conmon-b11e027a5fb35f63047562ee10bcb28cbfa60fa1c2c266051f76395b9592b5fb.scope: Deactivated successfully.
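[editor's note] The create, init, start, attach, died, remove sequence for `sad_gates`, whose only output is the `167 167` line, is consistent with cephadm running a one-shot container to probe the ceph UID/GID inside the pinned image. A minimal sketch of that one-shot pattern, assuming podman on PATH; the stat-based probe command is an assumption (the log shows only the output), while the image digest is taken verbatim from the podman lines above:

```python
import subprocess

# Image digest copied from the container create/start lines above.
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# One-shot pattern matching the create/start/attach/died/remove sequence:
# run a single command, capture stdout, auto-remove the container.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
uid, gid = out.split()
print(uid, gid)  # "167 167" in the sad_gates line above: the ceph user/group
```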
Dec 03 01:24:01 compute-0 podman[233154]: 2025-12-03 01:24:01.00814587 +0000 UTC m=+0.091008646 container create 63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:24:01 compute-0 podman[233154]: 2025-12-03 01:24:00.974900494 +0000 UTC m=+0.057763290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:24:01 compute-0 systemd[1]: Started libpod-conmon-63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666.scope.
Dec 03 01:24:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c134517eef989ab5c98eb6dcea7335120df95754a4bb0b2e9f6a09cffd8a251/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c134517eef989ab5c98eb6dcea7335120df95754a4bb0b2e9f6a09cffd8a251/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c134517eef989ab5c98eb6dcea7335120df95754a4bb0b2e9f6a09cffd8a251/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c134517eef989ab5c98eb6dcea7335120df95754a4bb0b2e9f6a09cffd8a251/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:24:01 compute-0 podman[233154]: 2025-12-03 01:24:01.184155704 +0000 UTC m=+0.267018530 container init 63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:24:01 compute-0 podman[233154]: 2025-12-03 01:24:01.216516249 +0000 UTC m=+0.299379035 container start 63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:24:01 compute-0 podman[233154]: 2025-12-03 01:24:01.261593426 +0000 UTC m=+0.344456202 container attach 63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:24:01 compute-0 openstack_network_exporter[160250]: ERROR   01:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:24:01 compute-0 openstack_network_exporter[160250]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:24:01 compute-0 openstack_network_exporter[160250]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:24:01 compute-0 openstack_network_exporter[160250]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:24:01 compute-0 openstack_network_exporter[160250]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:24:01 compute-0 ceph-mon[192821]: pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:02 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 03 01:24:02 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 03 01:24:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:02 compute-0 festive_bouman[233170]: {
Dec 03 01:24:02 compute-0 festive_bouman[233170]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "osd_id": 2,
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "type": "bluestore"
Dec 03 01:24:02 compute-0 festive_bouman[233170]:     },
Dec 03 01:24:02 compute-0 festive_bouman[233170]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "osd_id": 1,
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "type": "bluestore"
Dec 03 01:24:02 compute-0 festive_bouman[233170]:     },
Dec 03 01:24:02 compute-0 festive_bouman[233170]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "osd_id": 0,
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:24:02 compute-0 festive_bouman[233170]:         "type": "bluestore"
Dec 03 01:24:02 compute-0 festive_bouman[233170]:     }
Dec 03 01:24:02 compute-0 festive_bouman[233170]: }
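[editor's note] The `festive_bouman` output above is the result of the `ceph-volume ... raw list --format json` run launched by sudo[233053] at 01:23:59: a JSON map of osd_uuid to bluestore metadata for the three OSDs on this node. A short sketch that reassembles one entry (copied verbatim from the log) and inverts the map into an osd_id-to-device listing; variable names are illustrative:

```python
import json

# One of the three entries printed by festive_bouman above, verbatim.
raw_list = json.loads("""
{
    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
        "osd_id": 2,
        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
        "type": "bluestore"
    }
}
""")

# Invert osd_uuid -> metadata into an osd_id -> device listing.
for osd_uuid, meta in sorted(raw_list.items(),
                             key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{meta['osd_id']}: {meta['device']} "
          f"({meta['type']}, fsid {meta['ceph_fsid']})")
```

This is the inventory the mgr then persists via the `config-key set, key=mgr/cephadm/host.compute-0.devices.0` command visible a few lines below.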
Dec 03 01:24:02 compute-0 systemd[1]: libpod-63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666.scope: Deactivated successfully.
Dec 03 01:24:02 compute-0 systemd[1]: libpod-63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666.scope: Consumed 1.278s CPU time.
Dec 03 01:24:02 compute-0 podman[233154]: 2025-12-03 01:24:02.497239089 +0000 UTC m=+1.580101855 container died 63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 01:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c134517eef989ab5c98eb6dcea7335120df95754a4bb0b2e9f6a09cffd8a251-merged.mount: Deactivated successfully.
Dec 03 01:24:02 compute-0 sshd-session[233203]: Accepted publickey for zuul from 192.168.122.30 port 42856 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:24:02 compute-0 podman[233154]: 2025-12-03 01:24:02.604940124 +0000 UTC m=+1.687802880 container remove 63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:24:02 compute-0 systemd-logind[800]: New session 42 of user zuul.
Dec 03 01:24:02 compute-0 systemd[1]: libpod-conmon-63013aef9a949dbba06113159be0cf176f37b42e67ef471ff8578761482d8666.scope: Deactivated successfully.
Dec 03 01:24:02 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 03 01:24:02 compute-0 sudo[233053]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:02 compute-0 sshd-session[233203]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:24:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:24:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:24:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:24:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:24:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b4e91c0b-4eed-4e05-9e6b-aa915af9121f does not exist
Dec 03 01:24:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6a1b54b7-0ac7-4739-bf8a-d1a685bfc544 does not exist
Dec 03 01:24:02 compute-0 ceph-mon[192821]: 11.9 scrub starts
Dec 03 01:24:02 compute-0 ceph-mon[192821]: 11.9 scrub ok
Dec 03 01:24:02 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:24:02 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:24:02 compute-0 sudo[233218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:24:02 compute-0 sudo[233218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:24:02 compute-0 sudo[233218]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:02 compute-0 sudo[233267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:24:02 compute-0 sudo[233267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:24:02 compute-0 sudo[233267]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:03 compute-0 ceph-mon[192821]: pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 03 01:24:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 03 01:24:04 compute-0 python3.9[233417]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:24:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:04 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.17 deep-scrub starts
Dec 03 01:24:04 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.17 deep-scrub ok
Dec 03 01:24:04 compute-0 ceph-mon[192821]: 11.8 scrub starts
Dec 03 01:24:04 compute-0 ceph-mon[192821]: 11.8 scrub ok
Dec 03 01:24:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:05 compute-0 sudo[233573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhocrtjsjxurmzrxiqnsqakgifduvfib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725044.9672897-36-151352296851570/AnsiballZ_getent.py'
Dec 03 01:24:05 compute-0 sudo[233573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:05 compute-0 ceph-mon[192821]: pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:05 compute-0 ceph-mon[192821]: 11.17 deep-scrub starts
Dec 03 01:24:05 compute-0 ceph-mon[192821]: 11.17 deep-scrub ok
Dec 03 01:24:05 compute-0 python3.9[233575]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 03 01:24:05 compute-0 sudo[233573]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 03 01:24:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 03 01:24:06 compute-0 sshd-session[233545]: Received disconnect from 146.190.144.138 port 45612:11: Bye Bye [preauth]
Dec 03 01:24:06 compute-0 sshd-session[233545]: Disconnected from authenticating user root 146.190.144.138 port 45612 [preauth]
Dec 03 01:24:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:06 compute-0 sshd-session[232342]: Connection closed by authenticating user root 193.32.162.157 port 58830 [preauth]
Dec 03 01:24:06 compute-0 ceph-mon[192821]: 11.18 scrub starts
Dec 03 01:24:06 compute-0 ceph-mon[192821]: 11.18 scrub ok
Dec 03 01:24:06 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 03 01:24:06 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 03 01:24:07 compute-0 sudo[233772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsofriryluzezrtyrcpvipfbdnxdzssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725046.4857876-48-272315012234091/AnsiballZ_setup.py'
Dec 03 01:24:07 compute-0 sudo[233772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:07 compute-0 podman[233702]: 2025-12-03 01:24:07.089356363 +0000 UTC m=+0.115507829 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:24:07 compute-0 podman[233701]: 2025-12-03 01:24:07.092358947 +0000 UTC m=+0.123455494 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:24:07 compute-0 podman[233703]: 2025-12-03 01:24:07.101814129 +0000 UTC m=+0.125567286 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 03 01:24:07 compute-0 podman[233705]: 2025-12-03 01:24:07.121663957 +0000 UTC m=+0.133071180 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 01:24:07 compute-0 python3.9[233793]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:24:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 03 01:24:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 03 01:24:07 compute-0 sudo[233772]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:07 compute-0 ceph-mon[192821]: pgmap v280: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:07 compute-0 ceph-mon[192821]: 6.1c scrub starts
Dec 03 01:24:07 compute-0 ceph-mon[192821]: 6.1c scrub ok
Dec 03 01:24:07 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 03 01:24:07 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 03 01:24:08 compute-0 sudo[233890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqhnitgyflnpaekojlhpjzuqyfcncyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725046.4857876-48-272315012234091/AnsiballZ_dnf.py'
Dec 03 01:24:08 compute-0 sudo[233890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:08 compute-0 python3.9[233892]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 03 01:24:08 compute-0 ceph-mon[192821]: 8.14 scrub starts
Dec 03 01:24:08 compute-0 ceph-mon[192821]: 8.14 scrub ok
Dec 03 01:24:08 compute-0 ceph-mon[192821]: 11.b scrub starts
Dec 03 01:24:08 compute-0 ceph-mon[192821]: 11.b scrub ok
Dec 03 01:24:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 03 01:24:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 03 01:24:08 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Dec 03 01:24:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Dec 03 01:24:09 compute-0 sshd-session[233895]: Invalid user zhangsan from 34.66.72.251 port 33276
Dec 03 01:24:09 compute-0 sshd-session[233895]: Received disconnect from 34.66.72.251 port 33276:11: Bye Bye [preauth]
Dec 03 01:24:09 compute-0 sshd-session[233895]: Disconnected from invalid user zhangsan 34.66.72.251 port 33276 [preauth]
Dec 03 01:24:09 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 03 01:24:09 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 03 01:24:09 compute-0 sudo[233890]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:09 compute-0 ceph-mon[192821]: pgmap v281: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:09 compute-0 ceph-mon[192821]: 4.12 scrub starts
Dec 03 01:24:09 compute-0 ceph-mon[192821]: 4.12 scrub ok
Dec 03 01:24:09 compute-0 ceph-mon[192821]: 11.1a deep-scrub starts
Dec 03 01:24:09 compute-0 ceph-mon[192821]: 11.1a deep-scrub ok
Dec 03 01:24:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 03 01:24:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 03 01:24:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:10 compute-0 ceph-mon[192821]: 11.1 scrub starts
Dec 03 01:24:10 compute-0 ceph-mon[192821]: 11.1 scrub ok
Dec 03 01:24:10 compute-0 ceph-mon[192821]: 9.2 scrub starts
Dec 03 01:24:10 compute-0 ceph-mon[192821]: 9.2 scrub ok
Dec 03 01:24:10 compute-0 sudo[234063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebhiodhzhgybwqlhwnioxsbxoaukpudc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725050.2104282-62-35613318040600/AnsiballZ_dnf.py'
Dec 03 01:24:10 compute-0 sudo[234063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:10 compute-0 podman[234020]: 2025-12-03 01:24:10.909196726 +0000 UTC m=+0.160882843 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:24:10 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 03 01:24:10 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 03 01:24:10 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 03 01:24:10 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 03 01:24:11 compute-0 python3.9[234066]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:24:11 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 03 01:24:11 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 03 01:24:11 compute-0 ceph-mon[192821]: pgmap v282: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:11 compute-0 ceph-mon[192821]: 8.1 scrub starts
Dec 03 01:24:11 compute-0 ceph-mon[192821]: 8.1 scrub ok
Dec 03 01:24:11 compute-0 ceph-mon[192821]: 11.2 scrub starts
Dec 03 01:24:11 compute-0 ceph-mon[192821]: 11.2 scrub ok
Dec 03 01:24:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:12 compute-0 sudo[234063]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:12 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 03 01:24:12 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 03 01:24:12 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 03 01:24:12 compute-0 ceph-mon[192821]: 8.10 scrub starts
Dec 03 01:24:12 compute-0 ceph-mon[192821]: 8.10 scrub ok
Dec 03 01:24:12 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 03 01:24:13 compute-0 sudo[234217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edwiindvpapdsdqeheejvgnnqbjiebzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725052.7763627-70-256448893452324/AnsiballZ_systemd.py'
Dec 03 01:24:13 compute-0 sudo[234217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 03 01:24:13 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 03 01:24:13 compute-0 ceph-mon[192821]: pgmap v283: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:13 compute-0 ceph-mon[192821]: 11.e scrub starts
Dec 03 01:24:13 compute-0 ceph-mon[192821]: 11.e scrub ok
Dec 03 01:24:13 compute-0 ceph-mon[192821]: 11.1b scrub starts
Dec 03 01:24:13 compute-0 ceph-mon[192821]: 11.1b scrub ok
Dec 03 01:24:13 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 03 01:24:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 03 01:24:13 compute-0 python3.9[234219]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:24:14 compute-0 sudo[234217]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:14 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 03 01:24:14 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 03 01:24:14 compute-0 ceph-mon[192821]: 9.4 scrub starts
Dec 03 01:24:14 compute-0 ceph-mon[192821]: 8.4 scrub starts
Dec 03 01:24:14 compute-0 ceph-mon[192821]: 8.4 scrub ok
Dec 03 01:24:14 compute-0 ceph-mon[192821]: 9.4 scrub ok
Dec 03 01:24:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:15 compute-0 python3.9[234372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:24:15 compute-0 ceph-mon[192821]: pgmap v284: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:15 compute-0 ceph-mon[192821]: 11.1c scrub starts
Dec 03 01:24:15 compute-0 ceph-mon[192821]: 11.1c scrub ok
Dec 03 01:24:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:16 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 03 01:24:16 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 03 01:24:16 compute-0 sudo[234538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofuoouhdtuibkfbobqauxnqenxxqkmic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725055.951853-88-90235266225150/AnsiballZ_sefcontext.py'
Dec 03 01:24:16 compute-0 sudo[234538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:16 compute-0 podman[234496]: 2025-12-03 01:24:16.746827614 +0000 UTC m=+0.145738650 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git)
Dec 03 01:24:16 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 03 01:24:16 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 03 01:24:16 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 03 01:24:16 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 03 01:24:17 compute-0 python3.9[234543]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 03 01:24:17 compute-0 sudo[234538]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:17 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 03 01:24:17 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 03 01:24:17 compute-0 sshd-session[233631]: Connection closed by authenticating user root 193.32.162.157 port 52278 [preauth]
Dec 03 01:24:17 compute-0 ceph-mon[192821]: pgmap v285: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:17 compute-0 ceph-mon[192821]: 11.f scrub starts
Dec 03 01:24:17 compute-0 ceph-mon[192821]: 11.f scrub ok
Dec 03 01:24:17 compute-0 ceph-mon[192821]: 8.3 scrub starts
Dec 03 01:24:17 compute-0 ceph-mon[192821]: 11.3 scrub starts
Dec 03 01:24:17 compute-0 ceph-mon[192821]: 8.3 scrub ok
Dec 03 01:24:17 compute-0 ceph-mon[192821]: 11.3 scrub ok
Dec 03 01:24:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:18 compute-0 python3.9[234695]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:24:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 03 01:24:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 03 01:24:18 compute-0 ceph-mon[192821]: 10.17 scrub starts
Dec 03 01:24:18 compute-0 ceph-mon[192821]: 10.17 scrub ok
Dec 03 01:24:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 03 01:24:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 03 01:24:19 compute-0 sudo[234852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrivntvqwgrbmnvztstbrqfvoyrndbva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725059.084401-106-276403691213480/AnsiballZ_dnf.py'
Dec 03 01:24:19 compute-0 sudo[234852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:19 compute-0 python3.9[234854]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:24:20 compute-0 ceph-mon[192821]: pgmap v286: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:20 compute-0 ceph-mon[192821]: 8.e scrub starts
Dec 03 01:24:20 compute-0 ceph-mon[192821]: 8.e scrub ok
Dec 03 01:24:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:20 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Dec 03 01:24:20 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Dec 03 01:24:20 compute-0 podman[234857]: 2025-12-03 01:24:20.877173305 +0000 UTC m=+0.124231093 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:24:20 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Dec 03 01:24:20 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Dec 03 01:24:20 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 03 01:24:20 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 03 01:24:21 compute-0 ceph-mon[192821]: 8.c scrub starts
Dec 03 01:24:21 compute-0 ceph-mon[192821]: 8.c scrub ok
Dec 03 01:24:21 compute-0 sudo[234852]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 03 01:24:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 03 01:24:22 compute-0 ceph-mon[192821]: pgmap v287: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:22 compute-0 ceph-mon[192821]: 11.14 deep-scrub starts
Dec 03 01:24:22 compute-0 ceph-mon[192821]: 11.14 deep-scrub ok
Dec 03 01:24:22 compute-0 ceph-mon[192821]: 8.5 deep-scrub starts
Dec 03 01:24:22 compute-0 ceph-mon[192821]: 8.5 deep-scrub ok
Dec 03 01:24:22 compute-0 ceph-mon[192821]: 11.11 scrub starts
Dec 03 01:24:22 compute-0 ceph-mon[192821]: 11.11 scrub ok
Dec 03 01:24:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:22 compute-0 sudo[235031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbkdkmoebcaweygqirlutgnnlowtvwqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725061.7000177-114-144791886803025/AnsiballZ_command.py'
Dec 03 01:24:22 compute-0 sudo[235031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:22 compute-0 python3.9[235033]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:24:22 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 03 01:24:22 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 03 01:24:23 compute-0 ceph-mon[192821]: 8.f scrub starts
Dec 03 01:24:23 compute-0 ceph-mon[192821]: 8.f scrub ok
Dec 03 01:24:23 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 03 01:24:23 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 03 01:24:23 compute-0 sudo[235031]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:24 compute-0 ceph-mon[192821]: pgmap v288: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:24 compute-0 ceph-mon[192821]: 8.7 scrub starts
Dec 03 01:24:24 compute-0 ceph-mon[192821]: 8.7 scrub ok
Dec 03 01:24:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:24 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Dec 03 01:24:24 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Dec 03 01:24:24 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Dec 03 01:24:24 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Dec 03 01:24:25 compute-0 sudo[235318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvumezkwqpaouyaiqiryddpspmsjlftf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725064.066578-122-148394119232722/AnsiballZ_file.py'
Dec 03 01:24:25 compute-0 sudo[235318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:25 compute-0 ceph-mon[192821]: 8.9 scrub starts
Dec 03 01:24:25 compute-0 ceph-mon[192821]: 8.9 scrub ok
Dec 03 01:24:25 compute-0 ceph-mon[192821]: 8.1c deep-scrub starts
Dec 03 01:24:25 compute-0 ceph-mon[192821]: 8.1c deep-scrub ok
Dec 03 01:24:25 compute-0 python3.9[235320]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 03 01:24:25 compute-0 sudo[235318]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:26 compute-0 ceph-mon[192821]: pgmap v289: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:26 compute-0 ceph-mon[192821]: 11.6 deep-scrub starts
Dec 03 01:24:26 compute-0 ceph-mon[192821]: 11.6 deep-scrub ok
Dec 03 01:24:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:26 compute-0 python3.9[235470]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:24:26 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 03 01:24:26 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 03 01:24:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 03 01:24:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 03 01:24:27 compute-0 ceph-mon[192821]: 11.1f scrub starts
Dec 03 01:24:27 compute-0 ceph-mon[192821]: 11.1f scrub ok
Dec 03 01:24:27 compute-0 sudo[235622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istsbifelplfctgjrzxhgzohycgbecdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725067.0569594-138-15275972930900/AnsiballZ_dnf.py'
Dec 03 01:24:27 compute-0 sudo[235622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:27 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 03 01:24:27 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 03 01:24:27 compute-0 python3.9[235624]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:24:28 compute-0 ceph-mon[192821]: pgmap v290: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:28 compute-0 ceph-mon[192821]: 8.8 scrub starts
Dec 03 01:24:28 compute-0 ceph-mon[192821]: 8.8 scrub ok
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:24:28
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:24:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:24:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 03 01:24:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 03 01:24:29 compute-0 sudo[235622]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:29 compute-0 ceph-mon[192821]: 8.b scrub starts
Dec 03 01:24:29 compute-0 ceph-mon[192821]: 8.b scrub ok
Dec 03 01:24:29 compute-0 podman[158098]: time="2025-12-03T01:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:24:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:24:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6821 "" "Go-http-client/1.1"
Dec 03 01:24:29 compute-0 sshd-session[234668]: Connection closed by authenticating user root 193.32.162.157 port 58222 [preauth]
Dec 03 01:24:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 03 01:24:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 03 01:24:30 compute-0 sudo[235776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pouxzlsuespgsioydxrssowwawksunui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725069.510594-147-188658480429696/AnsiballZ_dnf.py'
Dec 03 01:24:30 compute-0 sudo[235776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:30 compute-0 ceph-mon[192821]: pgmap v291: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:30 compute-0 ceph-mon[192821]: 8.18 scrub starts
Dec 03 01:24:30 compute-0 ceph-mon[192821]: 8.18 scrub ok
Dec 03 01:24:30 compute-0 ceph-mon[192821]: 8.12 scrub starts
Dec 03 01:24:30 compute-0 ceph-mon[192821]: 8.12 scrub ok
Dec 03 01:24:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:30 compute-0 python3.9[235778]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:24:30 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 03 01:24:30 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 03 01:24:30 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 03 01:24:30 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 03 01:24:31 compute-0 openstack_network_exporter[160250]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:24:31 compute-0 openstack_network_exporter[160250]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:24:31 compute-0 openstack_network_exporter[160250]: ERROR   01:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:24:31 compute-0 openstack_network_exporter[160250]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:24:31 compute-0 openstack_network_exporter[160250]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:24:31 compute-0 sudo[235776]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:32 compute-0 ceph-mon[192821]: pgmap v292: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:32 compute-0 ceph-mon[192821]: 11.10 scrub starts
Dec 03 01:24:32 compute-0 ceph-mon[192821]: 11.10 scrub ok
Dec 03 01:24:32 compute-0 ceph-mon[192821]: 9.a scrub starts
Dec 03 01:24:32 compute-0 ceph-mon[192821]: 9.a scrub ok
Dec 03 01:24:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:32 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 03 01:24:32 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 03 01:24:32 compute-0 sudo[235930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beatiqmheqeblohezvklbpyzsykfdnri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725072.1910305-159-64987687528562/AnsiballZ_stat.py'
Dec 03 01:24:32 compute-0 sudo[235930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 03 01:24:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 03 01:24:32 compute-0 python3.9[235932]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:24:33 compute-0 sudo[235930]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:33 compute-0 ceph-mon[192821]: 8.11 scrub starts
Dec 03 01:24:33 compute-0 ceph-mon[192821]: 8.11 scrub ok
Dec 03 01:24:34 compute-0 sudo[236084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldqcexstirtqavmskuobkozbeehuqib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725073.3612294-167-130100206500243/AnsiballZ_slurp.py'
Dec 03 01:24:34 compute-0 sudo[236084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:34 compute-0 ceph-mon[192821]: pgmap v293: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:34 compute-0 ceph-mon[192821]: 8.1f scrub starts
Dec 03 01:24:34 compute-0 ceph-mon[192821]: 8.1f scrub ok
Dec 03 01:24:34 compute-0 python3.9[236086]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 03 01:24:34 compute-0 sudo[236084]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 03 01:24:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 03 01:24:34 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 03 01:24:34 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 03 01:24:35 compute-0 ceph-mon[192821]: 11.12 scrub starts
Dec 03 01:24:35 compute-0 ceph-mon[192821]: 11.12 scrub ok
Dec 03 01:24:35 compute-0 sshd-session[233217]: Connection closed by 192.168.122.30 port 42856
Dec 03 01:24:35 compute-0 sshd-session[233203]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:24:35 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 03 01:24:35 compute-0 systemd[1]: session-42.scope: Consumed 26.496s CPU time.
Dec 03 01:24:35 compute-0 systemd-logind[800]: Session 42 logged out. Waiting for processes to exit.
Dec 03 01:24:35 compute-0 systemd-logind[800]: Removed session 42.
Dec 03 01:24:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 03 01:24:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 03 01:24:36 compute-0 ceph-mon[192821]: pgmap v294: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:36 compute-0 ceph-mon[192821]: 8.1d scrub starts
Dec 03 01:24:36 compute-0 ceph-mon[192821]: 8.1d scrub ok
Dec 03 01:24:36 compute-0 ceph-mon[192821]: 8.1b scrub starts
Dec 03 01:24:36 compute-0 ceph-mon[192821]: 8.1b scrub ok
Dec 03 01:24:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 03 01:24:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 03 01:24:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 03 01:24:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 03 01:24:37 compute-0 ceph-mon[192821]: pgmap v295: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:37 compute-0 ceph-mon[192821]: 8.a scrub starts
Dec 03 01:24:37 compute-0 ceph-mon[192821]: 8.a scrub ok
Dec 03 01:24:37 compute-0 ceph-mon[192821]: 11.1e scrub starts
Dec 03 01:24:37 compute-0 ceph-mon[192821]: 11.1e scrub ok
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 sshd-session[236111]: Invalid user temp from 103.146.202.174 port 44126
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:24:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:24:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 03 01:24:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 03 01:24:37 compute-0 podman[236113]: 2025-12-03 01:24:37.841590476 +0000 UTC m=+0.101183374 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:24:37 compute-0 podman[236116]: 2025-12-03 01:24:37.852916518 +0000 UTC m=+0.095619156 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 01:24:37 compute-0 podman[236114]: 2025-12-03 01:24:37.863763116 +0000 UTC m=+0.110527830 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6)
Dec 03 01:24:37 compute-0 podman[236121]: 2025-12-03 01:24:37.902470365 +0000 UTC m=+0.131665060 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:24:37 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 03 01:24:37 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 03 01:24:38 compute-0 sshd-session[236111]: Received disconnect from 103.146.202.174 port 44126:11: Bye Bye [preauth]
Dec 03 01:24:38 compute-0 sshd-session[236111]: Disconnected from invalid user temp 103.146.202.174 port 44126 [preauth]
Dec 03 01:24:38 compute-0 ceph-mon[192821]: 9.e scrub starts
Dec 03 01:24:38 compute-0 ceph-mon[192821]: 9.e scrub ok
Dec 03 01:24:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:38 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 03 01:24:38 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 03 01:24:39 compute-0 ceph-mon[192821]: 8.13 scrub starts
Dec 03 01:24:39 compute-0 ceph-mon[192821]: 8.13 scrub ok
Dec 03 01:24:39 compute-0 ceph-mon[192821]: pgmap v296: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:39 compute-0 ceph-mon[192821]: 9.6 scrub starts
Dec 03 01:24:39 compute-0 ceph-mon[192821]: 9.6 scrub ok
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.288807) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725079288979, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7183, "num_deletes": 251, "total_data_size": 8662783, "memory_usage": 8937280, "flush_reason": "Manual Compaction"}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725079351841, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7026799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 140, "largest_seqno": 7320, "table_properties": {"data_size": 7000643, "index_size": 16953, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 75111, "raw_average_key_size": 23, "raw_value_size": 6938600, "raw_average_value_size": 2146, "num_data_blocks": 744, "num_entries": 3232, "num_filter_entries": 3232, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724660, "oldest_key_time": 1764724660, "file_creation_time": 1764725079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 63102 microseconds, and 28120 cpu microseconds.
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.351920) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7026799 bytes OK
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.351942) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.354384) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.354404) EVENT_LOG_v1 {"time_micros": 1764725079354397, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.354439) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8631583, prev total WAL file size 8631583, number of live WAL files 2.
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.357774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(6862KB) 13(52KB) 8(1944B)]
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725079357965, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7082896, "oldest_snapshot_seqno": -1}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3047 keys, 7038772 bytes, temperature: kUnknown
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725079431737, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7038772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7013032, "index_size": 16990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73156, "raw_average_key_size": 24, "raw_value_size": 6952624, "raw_average_value_size": 2281, "num_data_blocks": 747, "num_entries": 3047, "num_filter_entries": 3047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.432078) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7038772 bytes
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.435270) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.9 rd, 95.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(6.8, 0.0 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3337, records dropped: 290 output_compression: NoCompression
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.435332) EVENT_LOG_v1 {"time_micros": 1764725079435289, "job": 4, "event": "compaction_finished", "compaction_time_micros": 73855, "compaction_time_cpu_micros": 35875, "output_level": 6, "num_output_files": 1, "total_output_size": 7038772, "num_input_records": 3337, "num_output_records": 3047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725079438042, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725079438151, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725079438208, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 03 01:24:39 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:24:39.356866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:24:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 03 01:24:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 03 01:24:39 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 03 01:24:39 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 03 01:24:40 compute-0 ceph-mon[192821]: 9.f scrub starts
Dec 03 01:24:40 compute-0 ceph-mon[192821]: 9.f scrub ok
Dec 03 01:24:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.16 deep-scrub starts
Dec 03 01:24:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.16 deep-scrub ok
Dec 03 01:24:40 compute-0 sshd-session[236198]: Accepted publickey for zuul from 192.168.122.30 port 54420 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:24:41 compute-0 systemd-logind[800]: New session 43 of user zuul.
Dec 03 01:24:41 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 03 01:24:41 compute-0 sshd-session[236198]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:24:41 compute-0 podman[236200]: 2025-12-03 01:24:41.148899106 +0000 UTC m=+0.150468484 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 01:24:41 compute-0 ceph-mon[192821]: 11.19 scrub starts
Dec 03 01:24:41 compute-0 ceph-mon[192821]: 11.19 scrub ok
Dec 03 01:24:41 compute-0 ceph-mon[192821]: pgmap v297: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:41 compute-0 ceph-mon[192821]: 8.16 deep-scrub starts
Dec 03 01:24:41 compute-0 ceph-mon[192821]: 8.16 deep-scrub ok
Dec 03 01:24:41 compute-0 sshd-session[235769]: Connection closed by authenticating user root 193.32.162.157 port 53166 [preauth]
Dec 03 01:24:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:42 compute-0 python3.9[236372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:24:42 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 03 01:24:42 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 03 01:24:43 compute-0 ceph-mon[192821]: pgmap v298: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:43 compute-0 ceph-mon[192821]: 8.1a scrub starts
Dec 03 01:24:43 compute-0 ceph-mon[192821]: 8.1a scrub ok
Dec 03 01:24:43 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 03 01:24:43 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 03 01:24:44 compute-0 python3.9[236529]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:24:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:44 compute-0 ceph-mon[192821]: 9.7 scrub starts
Dec 03 01:24:44 compute-0 ceph-mon[192821]: 9.7 scrub ok
Dec 03 01:24:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:45 compute-0 sshd-session[236507]: Received disconnect from 173.249.50.59 port 45084:11: Bye Bye [preauth]
Dec 03 01:24:45 compute-0 sshd-session[236507]: Disconnected from authenticating user root 173.249.50.59 port 45084 [preauth]
Dec 03 01:24:45 compute-0 ceph-mon[192821]: pgmap v299: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:45 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 03 01:24:45 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 03 01:24:45 compute-0 python3.9[236730]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:24:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:46 compute-0 ceph-mon[192821]: 11.4 scrub starts
Dec 03 01:24:46 compute-0 ceph-mon[192821]: 11.4 scrub ok
Dec 03 01:24:46 compute-0 sshd-session[236213]: Connection closed by 192.168.122.30 port 54420
Dec 03 01:24:46 compute-0 sshd-session[236198]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:24:46 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 03 01:24:46 compute-0 systemd[1]: session-43.scope: Consumed 4.202s CPU time.
Dec 03 01:24:46 compute-0 systemd-logind[800]: Session 43 logged out. Waiting for processes to exit.
Dec 03 01:24:46 compute-0 systemd-logind[800]: Removed session 43.
Dec 03 01:24:47 compute-0 ceph-mon[192821]: pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 03 01:24:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 03 01:24:47 compute-0 podman[236756]: 2025-12-03 01:24:47.877357146 +0000 UTC m=+0.126539535 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler)
Dec 03 01:24:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:48 compute-0 ceph-mon[192821]: 9.17 scrub starts
Dec 03 01:24:48 compute-0 ceph-mon[192821]: 9.17 scrub ok
Dec 03 01:24:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 03 01:24:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 03 01:24:49 compute-0 ceph-mon[192821]: pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:49 compute-0 ceph-mon[192821]: 9.10 scrub starts
Dec 03 01:24:49 compute-0 ceph-mon[192821]: 9.10 scrub ok
Dec 03 01:24:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:50 compute-0 sshd-session[236274]: Invalid user butter from 193.32.162.157 port 49758
Dec 03 01:24:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 03 01:24:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 03 01:24:51 compute-0 ceph-mon[192821]: pgmap v302: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:51 compute-0 ceph-mon[192821]: 8.6 scrub starts
Dec 03 01:24:51 compute-0 ceph-mon[192821]: 8.6 scrub ok
Dec 03 01:24:51 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 03 01:24:51 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 03 01:24:51 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 03 01:24:51 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 03 01:24:51 compute-0 podman[236777]: 2025-12-03 01:24:51.874644945 +0000 UTC m=+0.123212380 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:24:52 compute-0 sshd-session[236801]: Accepted publickey for zuul from 192.168.122.30 port 54638 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:24:52 compute-0 systemd-logind[800]: New session 44 of user zuul.
Dec 03 01:24:52 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 03 01:24:52 compute-0 sshd-session[236801]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:24:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:52 compute-0 ceph-mon[192821]: 8.17 scrub starts
Dec 03 01:24:52 compute-0 ceph-mon[192821]: 8.17 scrub ok
Dec 03 01:24:52 compute-0 ceph-mon[192821]: 9.11 scrub starts
Dec 03 01:24:52 compute-0 ceph-mon[192821]: 9.11 scrub ok
Dec 03 01:24:53 compute-0 sshd-session[236274]: Connection closed by invalid user butter 193.32.162.157 port 49758 [preauth]
Dec 03 01:24:53 compute-0 ceph-mon[192821]: pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:53 compute-0 python3.9[236955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:24:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Dec 03 01:24:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 03 01:24:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Dec 03 01:24:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 03 01:24:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:54 compute-0 ceph-mon[192821]: 9.b deep-scrub starts
Dec 03 01:24:54 compute-0 ceph-mon[192821]: 9.b deep-scrub ok
Dec 03 01:24:54 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 03 01:24:54 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 03 01:24:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 03 01:24:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 03 01:24:55 compute-0 python3.9[237109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:24:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:24:55 compute-0 ceph-mon[192821]: 8.19 scrub starts
Dec 03 01:24:55 compute-0 ceph-mon[192821]: 8.19 scrub ok
Dec 03 01:24:55 compute-0 ceph-mon[192821]: pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:55 compute-0 ceph-mon[192821]: 9.8 scrub starts
Dec 03 01:24:55 compute-0 ceph-mon[192821]: 9.8 scrub ok
Dec 03 01:24:55 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 03 01:24:55 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 03 01:24:55 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Dec 03 01:24:55 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Dec 03 01:24:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:56 compute-0 sudo[237264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdipqkfhvdjzhlavozmpqimcnklxkly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725095.8335028-40-223588287577707/AnsiballZ_setup.py'
Dec 03 01:24:56 compute-0 sudo[237264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:56 compute-0 ceph-mon[192821]: 9.12 scrub starts
Dec 03 01:24:56 compute-0 ceph-mon[192821]: 9.12 scrub ok
Dec 03 01:24:56 compute-0 ceph-mon[192821]: 9.18 scrub starts
Dec 03 01:24:56 compute-0 ceph-mon[192821]: 9.18 scrub ok
Dec 03 01:24:56 compute-0 ceph-mon[192821]: 9.d deep-scrub starts
Dec 03 01:24:56 compute-0 ceph-mon[192821]: 9.d deep-scrub ok
Dec 03 01:24:56 compute-0 python3.9[237266]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:24:57 compute-0 sudo[237264]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:57 compute-0 ceph-mon[192821]: pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:57 compute-0 sudo[237348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meoshuqwladzfqmziyaubtuwiylgzdul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725095.8335028-40-223588287577707/AnsiballZ_dnf.py'
Dec 03 01:24:57 compute-0 sudo[237348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:24:57 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 03 01:24:57 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 03 01:24:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Dec 03 01:24:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Dec 03 01:24:57 compute-0 python3.9[237350]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
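[editor's note] The dnf task above amounts to installing podman if absent. A hedged equivalent as a direct shell-out (the module itself drives the dnf Python API and layers idempotence and locking on top):

    # Rough equivalent of the dnf state=present task for podman;
    # dnf exits 0 with "Nothing to do" when the package is already present.
    import subprocess

    subprocess.run(["dnf", "-y", "install", "podman"], check=True)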
Dec 03 01:24:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:24:58 compute-0 ceph-mon[192821]: 9.c scrub starts
Dec 03 01:24:58 compute-0 ceph-mon[192821]: 9.c scrub ok
Dec 03 01:24:59 compute-0 sudo[237348]: pam_unix(sudo:session): session closed for user root
Dec 03 01:24:59 compute-0 ceph-mon[192821]: 9.14 deep-scrub starts
Dec 03 01:24:59 compute-0 ceph-mon[192821]: 9.14 deep-scrub ok
Dec 03 01:24:59 compute-0 ceph-mon[192821]: pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:24:59 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 03 01:24:59 compute-0 podman[158098]: time="2025-12-03T01:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:24:59 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 03 01:24:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:24:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6820 "" "Go-http-client/1.1"
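[editor's note] The two GETs above are the libpod REST API served on podman's unix socket. A minimal stdlib client sketch for the containers/json endpoint; the socket path is an assumption (rootful default):

    # Query the libpod API over a unix socket with http.client.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")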
Dec 03 01:25:00 compute-0 sudo[237501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liwaprtukzsotdkbikealozqdtaacxre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725099.4556122-52-78672326231877/AnsiballZ_setup.py'
Dec 03 01:25:00 compute-0 sudo[237501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:00 compute-0 python3.9[237503]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:25:00 compute-0 ceph-mon[192821]: 9.13 scrub starts
Dec 03 01:25:00 compute-0 ceph-mon[192821]: 9.13 scrub ok
Dec 03 01:25:00 compute-0 sudo[237501]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:00 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 03 01:25:00 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 03 01:25:01 compute-0 openstack_network_exporter[160250]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:25:01 compute-0 openstack_network_exporter[160250]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:25:01 compute-0 openstack_network_exporter[160250]: ERROR   01:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:25:01 compute-0 openstack_network_exporter[160250]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:25:01 compute-0 openstack_network_exporter[160250]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:25:01 compute-0 ceph-mon[192821]: pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:01 compute-0 sudo[237706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqtinqhckqprfjyaqaotahxrufxthqmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725101.1589956-63-262683496599849/AnsiballZ_file.py'
Dec 03 01:25:01 compute-0 sudo[237706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:02 compute-0 sshd-session[237603]: Received disconnect from 80.253.31.232 port 60528:11: Bye Bye [preauth]
Dec 03 01:25:02 compute-0 sshd-session[237603]: Disconnected from authenticating user root 80.253.31.232 port 60528 [preauth]
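[editor's note] Entries like the two above (and the "invalid user butter" and root attempts elsewhere in this window) are routine ssh scanner noise. A small sketch for tallying offenders from journal output; the regex is an assumption about this log's shape:

    # Count sshd preauth failures per source IP:
    #   journalctl -u sshd | python3 tally_preauth.py
    import collections
    import re
    import sys

    pat = re.compile(r"sshd-session\[\d+\]:.*?(\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]")
    hits = collections.Counter(
        m.group(1) for line in sys.stdin if (m := pat.search(line)))
    for ip, count in hits.most_common():
        print(count, ip)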
Dec 03 01:25:02 compute-0 python3.9[237708]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:25:02 compute-0 sudo[237706]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:02 compute-0 ceph-mon[192821]: 8.1e scrub starts
Dec 03 01:25:02 compute-0 ceph-mon[192821]: 8.1e scrub ok
Dec 03 01:25:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Dec 03 01:25:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Dec 03 01:25:03 compute-0 sudo[237830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:03 compute-0 sudo[237830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:03 compute-0 sudo[237830]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:03 compute-0 sudo[237887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqydetycaalzlcrwtysorywqmcqbkykj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725102.4776478-71-151781772928696/AnsiballZ_command.py'
Dec 03 01:25:03 compute-0 sudo[237887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:03 compute-0 sudo[237883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:25:03 compute-0 sudo[237883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:03 compute-0 sudo[237883]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:03 compute-0 sudo[237911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:03 compute-0 sudo[237911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:03 compute-0 sudo[237911]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:03 compute-0 python3.9[237898]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:25:03 compute-0 sudo[237936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:25:03 compute-0 sudo[237936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:03 compute-0 sudo[237887]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 03 01:25:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 03 01:25:03 compute-0 ceph-mon[192821]: pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:03 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 03 01:25:03 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 03 01:25:03 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 03 01:25:03 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 03 01:25:04 compute-0 sudo[237936]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:25:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:25:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:25:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:25:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:25:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:25:04 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a8d6c9df-e3d4-4d60-9c43-da3103607072 does not exist
Dec 03 01:25:04 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4a8d4b0e-53f2-4e66-9e13-6fcbbdd47ba4 does not exist
Dec 03 01:25:04 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cef62853-3f2b-4fb4-ab28-f363eeb7c256 does not exist
Dec 03 01:25:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:25:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:25:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:25:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:25:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:25:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
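[editor's note] The mon_command dispatches above are the cephadm mgr module refreshing its view of the cluster. Their CLI equivalents, as a sketch assuming a working ceph CLI and admin keyring on the node:

    # The same queries the mgr dispatched to the mon, via the ceph CLI.
    import subprocess

    def ceph(*args):
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")
    destroyed = ceph("osd", "tree", "destroyed", "--format", "json")
    print(minimal_conf)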
Dec 03 01:25:04 compute-0 sudo[238079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:04 compute-0 sudo[238079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:04 compute-0 sudo[238079]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:04 compute-0 sudo[238125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:25:04 compute-0 sudo[238125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:04 compute-0 sudo[238125]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:04 compute-0 sudo[238165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:04 compute-0 sudo[238165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:04 compute-0 sudo[238165]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:04 compute-0 sudo[238232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnqhfmlapexafmsfumtxwnhinmonhgrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725103.83758-79-115025515669450/AnsiballZ_stat.py'
Dec 03 01:25:04 compute-0 sudo[238232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:04 compute-0 sudo[238224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:25:04 compute-0 sudo[238224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:04 compute-0 sshd-session[236934]: Connection closed by authenticating user root 193.32.162.157 port 34206 [preauth]
Dec 03 01:25:04 compute-0 ceph-mon[192821]: 9.1a deep-scrub starts
Dec 03 01:25:04 compute-0 ceph-mon[192821]: 9.1a deep-scrub ok
Dec 03 01:25:04 compute-0 ceph-mon[192821]: 9.19 scrub starts
Dec 03 01:25:04 compute-0 ceph-mon[192821]: 9.19 scrub ok
Dec 03 01:25:04 compute-0 ceph-mon[192821]: 9.1 scrub starts
Dec 03 01:25:04 compute-0 ceph-mon[192821]: 9.1 scrub ok
Dec 03 01:25:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:25:04 compute-0 python3.9[238249]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:25:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:25:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:25:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:25:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:25:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:25:04 compute-0 sudo[238232]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:05 compute-0 podman[238343]: 2025-12-03 01:25:05.159910573 +0000 UTC m=+0.065408118 container create 8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:25:05 compute-0 systemd[1]: Started libpod-conmon-8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2.scope.
Dec 03 01:25:05 compute-0 sudo[238382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptpaugtfgldwlipmvptpagitcvyrjzhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725103.83758-79-115025515669450/AnsiballZ_file.py'
Dec 03 01:25:05 compute-0 sudo[238382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:05 compute-0 podman[238343]: 2025-12-03 01:25:05.139443722 +0000 UTC m=+0.044941287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:25:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:25:05 compute-0 podman[238343]: 2025-12-03 01:25:05.288350191 +0000 UTC m=+0.193847816 container init 8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:25:05 compute-0 podman[238343]: 2025-12-03 01:25:05.303761238 +0000 UTC m=+0.209258823 container start 8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shtern, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:25:05 compute-0 podman[238343]: 2025-12-03 01:25:05.310250172 +0000 UTC m=+0.215747757 container attach 8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:25:05 compute-0 beautiful_shtern[238388]: 167 167
Dec 03 01:25:05 compute-0 systemd[1]: libpod-8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2.scope: Deactivated successfully.
Dec 03 01:25:05 compute-0 podman[238343]: 2025-12-03 01:25:05.315316816 +0000 UTC m=+0.220814401 container died 8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:25:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc4e11f14c5affe2713c5e578bd7b86c6b6146f5329ff30f68722b37ec812b1-merged.mount: Deactivated successfully.
Dec 03 01:25:05 compute-0 podman[238343]: 2025-12-03 01:25:05.406398753 +0000 UTC m=+0.311896338 container remove 8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:25:05 compute-0 systemd[1]: libpod-conmon-8ab605dc08f6e2395714328a35eef2866f47161051519088f6a728eaa80a6fa2.scope: Deactivated successfully.
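[editor's note] Create, init, start, attach, died, remove above is the full lifecycle of one short-lived cephadm helper container. A sketch that watches such lifecycles live through podman's event stream; the JSON field names are assumptions about podman's event format:

    # Follow container lifecycle events as they happen.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))  # e.g. "died beautiful_shtern"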
Dec 03 01:25:05 compute-0 python3.9[238390]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:25:05 compute-0 sudo[238382]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:05 compute-0 podman[238429]: 2025-12-03 01:25:05.673776766 +0000 UTC m=+0.082397641 container create a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sanderson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:25:05 compute-0 podman[238429]: 2025-12-03 01:25:05.640937113 +0000 UTC m=+0.049558078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:25:05 compute-0 systemd[1]: Started libpod-conmon-a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f.scope.
Dec 03 01:25:05 compute-0 ceph-mon[192821]: 11.5 scrub starts
Dec 03 01:25:05 compute-0 ceph-mon[192821]: 11.5 scrub ok
Dec 03 01:25:05 compute-0 ceph-mon[192821]: pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f646f300520ead085f2c0a29cb3955f59559f5029e5494a745d4e7940276944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f646f300520ead085f2c0a29cb3955f59559f5029e5494a745d4e7940276944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f646f300520ead085f2c0a29cb3955f59559f5029e5494a745d4e7940276944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f646f300520ead085f2c0a29cb3955f59559f5029e5494a745d4e7940276944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f646f300520ead085f2c0a29cb3955f59559f5029e5494a745d4e7940276944/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
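[editor's note] The 0x7fffffff in these xfs warnings is the signed 32-bit time_t ceiling; the filesystem was created without the xfs bigtime feature. A one-liner confirming the date the kernel is warning about:

    # 2**31 - 1 seconds after the epoch: where 32-bit xfs timestamps end.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00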
Dec 03 01:25:05 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 03 01:25:05 compute-0 podman[238429]: 2025-12-03 01:25:05.858702067 +0000 UTC m=+0.267323032 container init a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:25:05 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 03 01:25:05 compute-0 podman[238429]: 2025-12-03 01:25:05.890468579 +0000 UTC m=+0.299089484 container start a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:25:05 compute-0 podman[238429]: 2025-12-03 01:25:05.897709725 +0000 UTC m=+0.306330690 container attach a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:25:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:06 compute-0 sudo[238580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cibksgzlrdmwzskytmbkvxkcptxkjhfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725105.8076897-91-24389908991131/AnsiballZ_stat.py'
Dec 03 01:25:06 compute-0 sudo[238580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:06 compute-0 python3.9[238582]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:25:06 compute-0 sudo[238580]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:06 compute-0 ceph-mon[192821]: 9.1d scrub starts
Dec 03 01:25:06 compute-0 ceph-mon[192821]: 9.1d scrub ok
Dec 03 01:25:07 compute-0 suspicious_sanderson[238468]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:25:07 compute-0 suspicious_sanderson[238468]: --> relative data size: 1.0
Dec 03 01:25:07 compute-0 suspicious_sanderson[238468]: --> All data devices are unavailable
Dec 03 01:25:07 compute-0 sudo[238682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-libelqzzzxjtldzrkrfjancngmnitfay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725105.8076897-91-24389908991131/AnsiballZ_file.py'
Dec 03 01:25:07 compute-0 sudo[238682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:07 compute-0 systemd[1]: libpod-a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f.scope: Deactivated successfully.
Dec 03 01:25:07 compute-0 systemd[1]: libpod-a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f.scope: Consumed 1.226s CPU time.
Dec 03 01:25:07 compute-0 podman[238429]: 2025-12-03 01:25:07.200400847 +0000 UTC m=+1.609021742 container died a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f646f300520ead085f2c0a29cb3955f59559f5029e5494a745d4e7940276944-merged.mount: Deactivated successfully.
Dec 03 01:25:07 compute-0 podman[238429]: 2025-12-03 01:25:07.317503683 +0000 UTC m=+1.726124558 container remove a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:25:07 compute-0 systemd[1]: libpod-conmon-a2acdd3fb3ba498971927642dee0e9bb30bf50e3935c6154c051b35b1931264f.scope: Deactivated successfully.
Dec 03 01:25:07 compute-0 sudo[238224]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:07 compute-0 python3.9[238684]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:25:07 compute-0 sudo[238696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:07 compute-0 sudo[238682]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:07 compute-0 sudo[238696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:07 compute-0 sudo[238696]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:07 compute-0 sudo[238721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:25:07 compute-0 sudo[238721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:07 compute-0 sudo[238721]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:07 compute-0 ceph-mon[192821]: pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:07 compute-0 sudo[238770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:07 compute-0 sudo[238770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:07 compute-0 sudo[238770]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 03 01:25:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 03 01:25:07 compute-0 sudo[238816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:25:07 compute-0 sudo[238816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
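[editor's note] The cephadm call above wraps `ceph-volume lvm list --format json` in a container. A sketch of consuming that inventory directly, assuming ceph-volume on the host and the usual osd-id-keyed JSON layout:

    # Map each OSD id to its LVM data devices.
    import json
    import subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    for osd_id, devices in json.loads(out).items():
        print("osd." + osd_id, [d.get("lv_path") for d in devices])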
Dec 03 01:25:08 compute-0 podman[238869]: 2025-12-03 01:25:08.081290303 +0000 UTC m=+0.094106064 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:25:08 compute-0 podman[238872]: 2025-12-03 01:25:08.09739514 +0000 UTC m=+0.107382360 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, architecture=x86_64, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 01:25:08 compute-0 podman[238873]: 2025-12-03 01:25:08.13401784 +0000 UTC m=+0.137389822 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:25:08 compute-0 podman[238874]: 2025-12-03 01:25:08.14246422 +0000 UTC m=+0.144090953 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:25:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:08 compute-0 sudo[239076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qosryocmszjfimdibfbvslzzgzaskzxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725107.8371222-104-170242132719125/AnsiballZ_ini_file.py'
Dec 03 01:25:08 compute-0 sudo[239076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:08 compute-0 podman[239048]: 2025-12-03 01:25:08.468079827 +0000 UTC m=+0.073575381 container create b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:25:08 compute-0 systemd[1]: Started libpod-conmon-b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8.scope.
Dec 03 01:25:08 compute-0 podman[239048]: 2025-12-03 01:25:08.439410743 +0000 UTC m=+0.044906337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:25:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:25:08 compute-0 podman[239048]: 2025-12-03 01:25:08.625368203 +0000 UTC m=+0.230863847 container init b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:25:08 compute-0 python3.9[239080]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:25:08 compute-0 podman[239048]: 2025-12-03 01:25:08.643007244 +0000 UTC m=+0.248502798 container start b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:25:08 compute-0 cool_ishizaka[239083]: 167 167
Dec 03 01:25:08 compute-0 systemd[1]: libpod-b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8.scope: Deactivated successfully.
Dec 03 01:25:08 compute-0 podman[239048]: 2025-12-03 01:25:08.667129939 +0000 UTC m=+0.272625523 container attach b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:25:08 compute-0 podman[239048]: 2025-12-03 01:25:08.668850108 +0000 UTC m=+0.274345692 container died b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:25:08 compute-0 sudo[239076]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d93cfbe97dc99572d384bc062e79777021c7a07a58a14b2e194642a16593c4-merged.mount: Deactivated successfully.
Dec 03 01:25:08 compute-0 podman[239048]: 2025-12-03 01:25:08.733472323 +0000 UTC m=+0.338967877 container remove b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:25:08 compute-0 systemd[1]: libpod-conmon-b52cb712d336caf0dc7f8db4f94608c028c0cd5cef716c409410659f7e7155b8.scope: Deactivated successfully.
Dec 03 01:25:08 compute-0 ceph-mon[192821]: 9.1b scrub starts
Dec 03 01:25:08 compute-0 ceph-mon[192821]: 9.1b scrub ok
Dec 03 01:25:08 compute-0 podman[239153]: 2025-12-03 01:25:08.985114649 +0000 UTC m=+0.076292877 container create d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:25:09 compute-0 podman[239153]: 2025-12-03 01:25:08.94921832 +0000 UTC m=+0.040396598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:25:09 compute-0 systemd[1]: Started libpod-conmon-d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f.scope.
Dec 03 01:25:09 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b390fe9f3049e1bcd568a07666d6072291c9cda84e22ce5696888eb9c26162f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b390fe9f3049e1bcd568a07666d6072291c9cda84e22ce5696888eb9c26162f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b390fe9f3049e1bcd568a07666d6072291c9cda84e22ce5696888eb9c26162f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b390fe9f3049e1bcd568a07666d6072291c9cda84e22ce5696888eb9c26162f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:09 compute-0 podman[239153]: 2025-12-03 01:25:09.124210619 +0000 UTC m=+0.215388937 container init d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:25:09 compute-0 podman[239153]: 2025-12-03 01:25:09.15381669 +0000 UTC m=+0.244994888 container start d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shamir, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:25:09 compute-0 podman[239153]: 2025-12-03 01:25:09.160436798 +0000 UTC m=+0.251615036 container attach d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:25:09 compute-0 sudo[239278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrhcyilvbypsjobeafilqyepjvjljkac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725108.8918834-104-29202710142469/AnsiballZ_ini_file.py'
Dec 03 01:25:09 compute-0 sudo[239278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:09 compute-0 python3.9[239280]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:25:09 compute-0 sudo[239278]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:09 compute-0 sshd-session[239261]: Received disconnect from 34.66.72.251 port 38638:11: Bye Bye [preauth]
Dec 03 01:25:09 compute-0 sshd-session[239261]: Disconnected from authenticating user root 34.66.72.251 port 38638 [preauth]
Dec 03 01:25:09 compute-0 ceph-mon[192821]: pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 03 01:25:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 03 01:25:09 compute-0 serene_shamir[239202]: {
Dec 03 01:25:09 compute-0 serene_shamir[239202]:     "0": [
Dec 03 01:25:09 compute-0 serene_shamir[239202]:         {
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "devices": [
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "/dev/loop3"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             ],
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_name": "ceph_lv0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_size": "21470642176",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "name": "ceph_lv0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "tags": {
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cluster_name": "ceph",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.crush_device_class": "",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.encrypted": "0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osd_id": "0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.type": "block",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.vdo": "0"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             },
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "type": "block",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "vg_name": "ceph_vg0"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:         }
Dec 03 01:25:09 compute-0 serene_shamir[239202]:     ],
Dec 03 01:25:09 compute-0 serene_shamir[239202]:     "1": [
Dec 03 01:25:09 compute-0 serene_shamir[239202]:         {
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "devices": [
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "/dev/loop4"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             ],
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_name": "ceph_lv1",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_size": "21470642176",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "name": "ceph_lv1",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "tags": {
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cluster_name": "ceph",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.crush_device_class": "",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.encrypted": "0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osd_id": "1",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.type": "block",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.vdo": "0"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             },
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "type": "block",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "vg_name": "ceph_vg1"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:         }
Dec 03 01:25:09 compute-0 serene_shamir[239202]:     ],
Dec 03 01:25:09 compute-0 serene_shamir[239202]:     "2": [
Dec 03 01:25:09 compute-0 serene_shamir[239202]:         {
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "devices": [
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "/dev/loop5"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             ],
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_name": "ceph_lv2",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_size": "21470642176",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "name": "ceph_lv2",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "tags": {
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.cluster_name": "ceph",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.crush_device_class": "",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.encrypted": "0",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osd_id": "2",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.type": "block",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:                 "ceph.vdo": "0"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             },
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "type": "block",
Dec 03 01:25:09 compute-0 serene_shamir[239202]:             "vg_name": "ceph_vg2"
Dec 03 01:25:09 compute-0 serene_shamir[239202]:         }
Dec 03 01:25:09 compute-0 serene_shamir[239202]:     ]
Dec 03 01:25:09 compute-0 serene_shamir[239202]: }
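[editor's note] The JSON block above is the shape of a ceph-volume "lvm list --format json" report: one key per OSD id, each holding the LV records (ceph_lv0..2 on /dev/loop3..5) with their ceph.* tags. A minimal Python sketch for flattening such a report into osd_id/device pairs; the cephadm invocation is an assumption, since the command line that launched serene_shamir is not shown in this excerpt:

    import json
    import subprocess

    # Assumed invocation: the excerpt only shows the container's stdout, so
    # this command line is patterned on the "raw list" cephadm call that
    # appears further down in this log.
    report = subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LVs.
    for osd_id, lvs in sorted(json.loads(report).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])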
Dec 03 01:25:09 compute-0 systemd[1]: libpod-d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f.scope: Deactivated successfully.
Dec 03 01:25:09 compute-0 podman[239153]: 2025-12-03 01:25:09.991845308 +0000 UTC m=+1.083023516 container died d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b390fe9f3049e1bcd568a07666d6072291c9cda84e22ce5696888eb9c26162f8-merged.mount: Deactivated successfully.
Dec 03 01:25:10 compute-0 podman[239153]: 2025-12-03 01:25:10.09228143 +0000 UTC m=+1.183459638 container remove d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 01:25:10 compute-0 systemd[1]: libpod-conmon-d4a8f21937964dc6e104dfb5fa24a8daeba64d668140f6280a1a90c4e242793f.scope: Deactivated successfully.
Dec 03 01:25:10 compute-0 sudo[238816]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:10 compute-0 sudo[239385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:10 compute-0 sudo[239385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:10 compute-0 sudo[239385]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:10 compute-0 sudo[239433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:25:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:10 compute-0 sudo[239433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:10 compute-0 sudo[239433]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:10 compute-0 sudo[239512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkkxdhcqehsjxjcogtshjpgnrvdpojxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725109.9429703-104-114427299963063/AnsiballZ_ini_file.py'
Dec 03 01:25:10 compute-0 sudo[239512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:10 compute-0 sudo[239480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:10 compute-0 sudo[239480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:10 compute-0 sudo[239480]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:10 compute-0 sudo[239523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:25:10 compute-0 sudo[239523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:10 compute-0 python3.9[239521]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:25:10 compute-0 sudo[239512]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:10 compute-0 ceph-mon[192821]: 11.7 scrub starts
Dec 03 01:25:10 compute-0 ceph-mon[192821]: 11.7 scrub ok
Dec 03 01:25:11 compute-0 podman[239662]: 2025-12-03 01:25:11.163012736 +0000 UTC m=+0.071546203 container create 15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:25:11 compute-0 systemd[1]: Started libpod-conmon-15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e.scope.
Dec 03 01:25:11 compute-0 podman[239662]: 2025-12-03 01:25:11.127297701 +0000 UTC m=+0.035831258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:25:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:25:11 compute-0 podman[239662]: 2025-12-03 01:25:11.246068944 +0000 UTC m=+0.154602431 container init 15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:25:11 compute-0 podman[239662]: 2025-12-03 01:25:11.257337864 +0000 UTC m=+0.165871331 container start 15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:25:11 compute-0 podman[239662]: 2025-12-03 01:25:11.262073289 +0000 UTC m=+0.170606756 container attach 15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:25:11 compute-0 fervent_ride[239708]: 167 167
Dec 03 01:25:11 compute-0 systemd[1]: libpod-15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e.scope: Deactivated successfully.
Dec 03 01:25:11 compute-0 conmon[239708]: conmon 15001f62360717227100 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e.scope/container/memory.events
Dec 03 01:25:11 compute-0 podman[239662]: 2025-12-03 01:25:11.266011331 +0000 UTC m=+0.174544818 container died 15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9301db2408947f57a9733cd051a5e0aecd023aec773feceade1bb1fed3a262-merged.mount: Deactivated successfully.
Dec 03 01:25:11 compute-0 podman[239699]: 2025-12-03 01:25:11.303011561 +0000 UTC m=+0.090287715 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:25:11 compute-0 podman[239662]: 2025-12-03 01:25:11.32445531 +0000 UTC m=+0.232988787 container remove 15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:25:11 compute-0 systemd[1]: libpod-conmon-15001f6236071722710085d5ab156196e3921f659a621cd33f7bd3b94fccfb8e.scope: Deactivated successfully.
Dec 03 01:25:11 compute-0 sudo[239786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kehnqilotrscwvhdlajgnhwnmecpmlmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725110.9406598-104-217633814965197/AnsiballZ_ini_file.py'
Dec 03 01:25:11 compute-0 sudo[239786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:11 compute-0 python3.9[239788]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
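[editor's note] Taken together, the three ini_file tasks above (events_logger at 01:25:09, runtime at 01:25:10, network_backend at 01:25:11) pin /etc/containers/containers.conf to journald container events, the crun runtime, and the netavark network backend. A sketch of the equivalent edit with Python's configparser, assuming these tasks are the only writers and targeting a scratch path rather than the live file:

    import configparser

    conf = configparser.ConfigParser()
    conf.read("/etc/containers/containers.conf")  # a missing file is tolerated

    # Mirror the three ansible-community.general.ini_file invocations above;
    # the quotes are part of the values, which keeps the file valid TOML.
    for section, option, value in [
        ("engine", "events_logger", '"journald"'),
        ("engine", "runtime", '"crun"'),
        ("network", "network_backend", '"netavark"'),
    ]:
        if not conf.has_section(section):
            conf.add_section(section)
        conf.set(section, option, value)

    with open("containers.conf.sketch", "w") as fh:
        conf.write(fh)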
Dec 03 01:25:11 compute-0 podman[239794]: 2025-12-03 01:25:11.568082639 +0000 UTC m=+0.087964519 container create c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lamport, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:25:11 compute-0 sudo[239786]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:11 compute-0 podman[239794]: 2025-12-03 01:25:11.536182793 +0000 UTC m=+0.056064703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:25:11 compute-0 systemd[1]: Started libpod-conmon-c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d.scope.
Dec 03 01:25:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27592728e14b94aa26bf0c2634e471e3b5af274c2e80d7d1e94b51ce94c0cb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27592728e14b94aa26bf0c2634e471e3b5af274c2e80d7d1e94b51ce94c0cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27592728e14b94aa26bf0c2634e471e3b5af274c2e80d7d1e94b51ce94c0cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27592728e14b94aa26bf0c2634e471e3b5af274c2e80d7d1e94b51ce94c0cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:25:11 compute-0 podman[239794]: 2025-12-03 01:25:11.730582363 +0000 UTC m=+0.250464243 container init c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:25:11 compute-0 podman[239794]: 2025-12-03 01:25:11.758370293 +0000 UTC m=+0.278252143 container start c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:25:11 compute-0 podman[239794]: 2025-12-03 01:25:11.762992284 +0000 UTC m=+0.282874174 container attach c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:25:11 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 03 01:25:11 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 03 01:25:11 compute-0 ceph-mon[192821]: pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:12 compute-0 sudo[239964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbzlwiinhoqxprxkokrqaymsfbiowfwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725111.9859722-135-7025710660316/AnsiballZ_dnf.py'
Dec 03 01:25:12 compute-0 sudo[239964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:12 compute-0 python3.9[239967]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:25:12 compute-0 ceph-mon[192821]: 9.5 scrub starts
Dec 03 01:25:12 compute-0 ceph-mon[192821]: 9.5 scrub ok
Dec 03 01:25:12 compute-0 loving_lamport[239820]: {
Dec 03 01:25:12 compute-0 loving_lamport[239820]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "osd_id": 2,
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "type": "bluestore"
Dec 03 01:25:12 compute-0 loving_lamport[239820]:     },
Dec 03 01:25:12 compute-0 loving_lamport[239820]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "osd_id": 1,
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "type": "bluestore"
Dec 03 01:25:12 compute-0 loving_lamport[239820]:     },
Dec 03 01:25:12 compute-0 loving_lamport[239820]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "osd_id": 0,
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:25:12 compute-0 loving_lamport[239820]:         "type": "bluestore"
Dec 03 01:25:12 compute-0 loving_lamport[239820]:     }
Dec 03 01:25:12 compute-0 loving_lamport[239820]: }
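[editor's note] This second block is the output of the "ceph-volume ... raw list --format json" call issued through cephadm at 01:25:10: the same three bluestore OSDs, now keyed by osd_uuid and resolved to /dev/mapper devices under the shared cluster fsid. A short cross-check of the two reports in Python, assuming each JSON block has been saved to a file (the file names are placeholders, not from the log):

    import json

    # Placeholder paths; in this job both reports arrived on container stdout.
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)   # the serene_shamir block above
    with open("raw_list.json") as fh:
        raw = json.load(fh)   # the loving_lamport block above

    # Every LVM-managed OSD should appear in the raw listing under its
    # osd_fsid, with a matching integer osd_id and type "bluestore".
    for osd_id, lvs in lvm.items():
        for lv in lvs:
            entry = raw[lv["tags"]["ceph.osd_fsid"]]
            assert entry["osd_id"] == int(osd_id)
            assert entry["type"] == "bluestore"
    print("lvm and raw reports agree on", len(raw), "OSDs")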
Dec 03 01:25:12 compute-0 systemd[1]: libpod-c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d.scope: Deactivated successfully.
Dec 03 01:25:12 compute-0 systemd[1]: libpod-c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d.scope: Consumed 1.183s CPU time.
Dec 03 01:25:12 compute-0 conmon[239820]: conmon c7b5a13887f31030d9d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d.scope/container/memory.events
Dec 03 01:25:12 compute-0 podman[239794]: 2025-12-03 01:25:12.957007451 +0000 UTC m=+1.476889321 container died c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b27592728e14b94aa26bf0c2634e471e3b5af274c2e80d7d1e94b51ce94c0cb6-merged.mount: Deactivated successfully.
Dec 03 01:25:13 compute-0 podman[239794]: 2025-12-03 01:25:13.070663079 +0000 UTC m=+1.590544929 container remove c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:25:13 compute-0 systemd[1]: libpod-conmon-c7b5a13887f31030d9d84a703d36247473fdbaad744484f70163f4de1f12250d.scope: Deactivated successfully.
Dec 03 01:25:13 compute-0 sudo[239523]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:25:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:25:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:25:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:25:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ffda87b4-5e8e-4e61-a7c3-3a83a4ef98bc does not exist
Dec 03 01:25:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 27152f65-31d3-448d-b108-fbec084432e1 does not exist
Dec 03 01:25:13 compute-0 sudo[240008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:25:13 compute-0 sudo[240008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:13 compute-0 sudo[240008]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:13 compute-0 sudo[240033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:25:13 compute-0 sudo[240033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:25:13 compute-0 sudo[240033]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:14 compute-0 sudo[239964]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:14 compute-0 ceph-mon[192821]: pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:25:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:25:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:14 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 03 01:25:14 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 03 01:25:15 compute-0 sudo[240207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxreberckkskhcwqvdakkoimsjnqttou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725114.7010772-146-109536268439529/AnsiballZ_setup.py'
Dec 03 01:25:15 compute-0 sudo[240207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:15 compute-0 python3.9[240209]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:25:15 compute-0 sudo[240207]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:16 compute-0 ceph-mon[192821]: pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:16 compute-0 ceph-mon[192821]: 9.9 scrub starts
Dec 03 01:25:16 compute-0 ceph-mon[192821]: 9.9 scrub ok
Dec 03 01:25:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:16 compute-0 sudo[240361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vabrhpwftvjjgienjbbbpelgnyqhoovi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725115.970142-154-52869924735468/AnsiballZ_stat.py'
Dec 03 01:25:16 compute-0 sudo[240361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:16 compute-0 sshd-session[238283]: Connection closed by authenticating user root 193.32.162.157 port 41830 [preauth]
Dec 03 01:25:16 compute-0 python3.9[240363]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:25:16 compute-0 sudo[240361]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:17 compute-0 sudo[240514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwcdthqlvurvctwmpcvszkumppokkkmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725117.1687076-163-138367192496695/AnsiballZ_stat.py'
Dec 03 01:25:17 compute-0 sudo[240514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:17 compute-0 python3.9[240516]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:25:17 compute-0 sudo[240514]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:18 compute-0 ceph-mon[192821]: pgmap v315: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 03 01:25:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 03 01:25:18 compute-0 podman[240624]: 2025-12-03 01:25:18.88780454 +0000 UTC m=+0.133091370 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, version=9.4, architecture=x86_64, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30)
Dec 03 01:25:18 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 03 01:25:18 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 03 01:25:18 compute-0 sudo[240686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmiafbzmmmqifucooztzjkyczozgbyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725118.373903-173-150590013538220/AnsiballZ_command.py'
Dec 03 01:25:18 compute-0 sudo[240686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:19 compute-0 python3.9[240689]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:25:19 compute-0 sudo[240686]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 03 01:25:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 03 01:25:20 compute-0 ceph-mon[192821]: pgmap v316: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:20 compute-0 ceph-mon[192821]: 9.3 scrub starts
Dec 03 01:25:20 compute-0 ceph-mon[192821]: 9.3 scrub ok
Dec 03 01:25:20 compute-0 ceph-mon[192821]: 11.a scrub starts
Dec 03 01:25:20 compute-0 ceph-mon[192821]: 11.a scrub ok
Dec 03 01:25:20 compute-0 sudo[240841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbfrpmpioqoskecgtotosseaxhmedhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725119.5730934-183-182720377410801/AnsiballZ_service_facts.py'
Dec 03 01:25:20 compute-0 sudo[240841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:20 compute-0 python3.9[240843]: ansible-service_facts Invoked
Dec 03 01:25:20 compute-0 network[240860]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:25:20 compute-0 network[240861]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:25:20 compute-0 network[240862]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:25:20 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 03 01:25:20 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 03 01:25:21 compute-0 ceph-mon[192821]: 9.16 scrub starts
Dec 03 01:25:21 compute-0 ceph-mon[192821]: 9.16 scrub ok
Dec 03 01:25:22 compute-0 podman[240876]: 2025-12-03 01:25:22.075999607 +0000 UTC m=+0.126114202 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:25:22 compute-0 ceph-mon[192821]: pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:22 compute-0 ceph-mon[192821]: 9.1c scrub starts
Dec 03 01:25:22 compute-0 ceph-mon[192821]: 9.1c scrub ok
Dec 03 01:25:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 03 01:25:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 03 01:25:23 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 03 01:25:23 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 03 01:25:24 compute-0 ceph-mon[192821]: pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:24 compute-0 ceph-mon[192821]: 9.1e scrub starts
Dec 03 01:25:24 compute-0 ceph-mon[192821]: 9.1e scrub ok
Dec 03 01:25:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:27 compute-0 sudo[240841]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:27 compute-0 sshd-session[240364]: Invalid user asterisk from 193.32.162.157 port 49620
Dec 03 01:25:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:27 compute-0 ceph-mon[192821]: 11.c scrub starts
Dec 03 01:25:27 compute-0 ceph-mon[192821]: 11.c scrub ok
Dec 03 01:25:27 compute-0 ceph-mon[192821]: pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:27 compute-0 ceph-mon[192821]: pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:28 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 03 01:25:28 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:25:28
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'volumes', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:25:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:25:28 compute-0 sudo[241178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzqdkgeadwgirerciveddrvuanvkyvfy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764725127.948319-198-9716184274593/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764725127.948319-198-9716184274593/args'
Dec 03 01:25:28 compute-0 sudo[241178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:28 compute-0 sudo[241178]: pam_unix(sudo:session): session closed for user root
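
The sudo records above show Ansible's become wrapper at work: the controller prefixes the real module command (the AnsiballZ_*.py payload) with an echo of a random BECOME-SUCCESS marker, then treats everything after that marker on stdout as module output, which is how it knows privilege escalation succeeded. A condensed sketch of the same technique (simplified; Ansible's marker is a 32-character random string and its error handling is richer):

    import secrets
    import subprocess

    # Random marker, echoed before the wrapped command ('id -u' stands in
    # for the AnsiballZ_*.py payload).
    marker = "BECOME-SUCCESS-" + secrets.token_hex(16)
    wrapped = f"echo {marker} ; id -u"
    out = subprocess.run(["sudo", "/bin/sh", "-c", wrapped],
                         capture_output=True, text=True).stdout
    # Everything after the marker is the wrapped command's real output.
    payload = out.split(marker, 1)[1].lstrip() if marker in out else out
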
Dec 03 01:25:29 compute-0 ceph-mon[192821]: 11.13 scrub starts
Dec 03 01:25:29 compute-0 ceph-mon[192821]: 11.13 scrub ok
Dec 03 01:25:29 compute-0 ceph-mon[192821]: pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:29 compute-0 sshd-session[240364]: Connection closed by invalid user asterisk 193.32.162.157 port 49620 [preauth]
Dec 03 01:25:29 compute-0 podman[158098]: time="2025-12-03T01:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:25:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:25:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6828 "" "Go-http-client/1.1"
Dec 03 01:25:30 compute-0 sudo[241346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yncvxrvbkqomghiowzamlbkfpnizjiak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725129.5118248-209-155073526681045/AnsiballZ_dnf.py'
Dec 03 01:25:30 compute-0 sudo[241346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:30 compute-0 python3.9[241348]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
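
The invocation above is the ansible dnf module ensuring chrony is present. On EL9 the module's work reduces to the dnf Python API; a rough, hedged equivalent of name=['chrony'] state=present (a no-op when the package is already installed):

    import dnf

    base = dnf.Base()
    base.read_all_repos()
    base.fill_sack()
    base.install("chrony")                 # mark for install
    if base.resolve():                     # True only if there is work to do
        base.download_packages(base.transaction.install_set)
        base.do_transaction()
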
Dec 03 01:25:31 compute-0 openstack_network_exporter[160250]: ERROR   01:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:25:31 compute-0 openstack_network_exporter[160250]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:25:31 compute-0 openstack_network_exporter[160250]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:25:31 compute-0 openstack_network_exporter[160250]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:25:31 compute-0 openstack_network_exporter[160250]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
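
These exporter errors mean the daemons' appctl control sockets cannot be found, typically because the daemon is not running on this node (ovn-northd lives on the control plane, and there is no userspace netdev datapath here), or because the rundir differs from where the exporter looks. The exporter locates daemons by globbing for <daemon>.<pid>.ctl under its mounted run directories; a minimal check mirroring that, with paths assumed from the exporter's volume mounts (host /var/run/openvswitch and /var/lib/openvswitch/ovn mapped to /run/openvswitch and /run/ovn):

    import glob

    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")
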
Dec 03 01:25:31 compute-0 ceph-mon[192821]: pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:31 compute-0 sudo[241346]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 03 01:25:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 03 01:25:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:32 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 03 01:25:32 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 03 01:25:33 compute-0 sudo[241500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubfrvgbotwamgagpfbblrqqsxzhotodv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725132.1715415-222-151783056548803/AnsiballZ_package_facts.py'
Dec 03 01:25:33 compute-0 sudo[241500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:33 compute-0 python3.9[241502]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 03 01:25:33 compute-0 ceph-mon[192821]: 11.16 scrub starts
Dec 03 01:25:33 compute-0 ceph-mon[192821]: 11.16 scrub ok
Dec 03 01:25:33 compute-0 ceph-mon[192821]: pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:33 compute-0 sudo[241500]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:34 compute-0 ceph-mon[192821]: 11.1d scrub starts
Dec 03 01:25:34 compute-0 ceph-mon[192821]: 11.1d scrub ok
Dec 03 01:25:35 compute-0 sudo[241652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhndaeevbqcaswqolydaulnhmaoczgla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725134.404053-232-54890242631622/AnsiballZ_stat.py'
Dec 03 01:25:35 compute-0 sudo[241652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:35 compute-0 python3.9[241654]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:25:35 compute-0 sudo[241652]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:35 compute-0 ceph-mon[192821]: pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:35 compute-0 sudo[241730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iarssgbcalobmhwmhsmmosrzvwxuvedh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725134.404053-232-54890242631622/AnsiballZ_file.py'
Dec 03 01:25:35 compute-0 sudo[241730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:35 compute-0 python3.9[241732]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:25:35 compute-0 sudo[241730]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:36 compute-0 sudo[241882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mffyhijdhdtadcbxyspnafsgrydnlctr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725136.3237312-244-124054044527086/AnsiballZ_stat.py'
Dec 03 01:25:36 compute-0 sudo[241882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 03 01:25:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 03 01:25:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:37 compute-0 python3.9[241884]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:25:37 compute-0 sudo[241882]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:37 compute-0 ceph-mon[192821]: pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:37 compute-0 sudo[241960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmungwwfjgzhaiarckvgbatkmnnyqao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725136.3237312-244-124054044527086/AnsiballZ_file.py'
Dec 03 01:25:37 compute-0 sudo[241960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:25:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
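
The autoscaler numbers above are reproducible: for each pool, raw pg target = usage_ratio × bias × cluster PG budget, and the raw value is then quantized to a power of two subject to the pool's minimum, which is why near-empty pools still sit at 32. A budget of 300 PGs reproduces every logged target exactly; that figure is an assumption consistent with the default mon_target_pg_per_osd=100 across 3 OSDs:

    # raw_target = usage_ratio * bias * budget; budget=300 is an assumption
    # (mon_target_pg_per_osd=100 * 3 OSDs) that matches the logged values.
    budget = 300
    pools = [
        (".mgr",               7.185749983720779e-06,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0),
    ]
    for name, usage_ratio, bias in pools:
        print(name, usage_ratio * bias * budget)
    # .mgr               -> 0.0021557249951162337 (quantized to 1)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (quantized to 16)
    # default.rgw.log    -> 0.0006486252197694863 (quantized to 32)
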
Dec 03 01:25:37 compute-0 python3.9[241962]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:25:37 compute-0 sudo[241960]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:38 compute-0 ceph-mon[192821]: 10.10 scrub starts
Dec 03 01:25:38 compute-0 ceph-mon[192821]: 10.10 scrub ok
Dec 03 01:25:38 compute-0 podman[242040]: 2025-12-03 01:25:38.882230686 +0000 UTC m=+0.116577831 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:25:38 compute-0 podman[242041]: 2025-12-03 01:25:38.890746104 +0000 UTC m=+0.109630057 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 01:25:38 compute-0 podman[242039]: 2025-12-03 01:25:38.91169921 +0000 UTC m=+0.138446523 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:25:38 compute-0 podman[242042]: 2025-12-03 01:25:38.924140708 +0000 UTC m=+0.136476438 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:25:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 03 01:25:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 03 01:25:39 compute-0 sudo[242194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbsccxdlcrdrsqfaluuvxmceklulmpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725138.5245333-262-79481323304917/AnsiballZ_lineinfile.py'
Dec 03 01:25:39 compute-0 sudo[242194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:39 compute-0 sshd-session[239995]: ssh_dispatch_run_fatal: Connection from 45.78.219.140 port 48882: Connection timed out [preauth]
Dec 03 01:25:39 compute-0 python3.9[242196]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:25:39 compute-0 sudo[242194]: pam_unix(sudo:session): session closed for user root
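
The lineinfile call above enforces PEERNTP=no in /etc/sysconfig/network so that DHCP-supplied NTP servers do not override the chrony configuration the role just installed. Its core behavior is simple; a simplified sketch (create=True handled, backup and atomic-write details omitted):

    import re

    path, wanted = "/etc/sysconfig/network", "PEERNTP=no"
    pattern = re.compile(r"^PEERNTP=")
    try:
        lines = open(path).read().splitlines()
    except FileNotFoundError:
        lines = []                                   # create=True
    if any(pattern.match(line) for line in lines):
        lines = [wanted if pattern.match(line) else line for line in lines]
    else:
        lines.append(wanted)                         # append at EOF by default
    open(path, "w").write("\n".join(lines) + "\n")
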
Dec 03 01:25:39 compute-0 ceph-mon[192821]: pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:40 compute-0 ceph-mon[192821]: 10.b scrub starts
Dec 03 01:25:40 compute-0 ceph-mon[192821]: 10.b scrub ok
Dec 03 01:25:40 compute-0 sudo[242346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikneiqqyoausxdivgxisrpmbyqqpyfya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725140.3488545-277-74429857172833/AnsiballZ_setup.py'
Dec 03 01:25:40 compute-0 sudo[242346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.969 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.970 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:25:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:25:41 compute-0 sshd-session[241233]: Connection closed by authenticating user root 193.32.162.157 port 41716 [preauth]
Dec 03 01:25:41 compute-0 python3.9[242348]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:25:41 compute-0 ceph-mon[192821]: pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:41 compute-0 sudo[242346]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:41 compute-0 podman[242359]: 2025-12-03 01:25:41.91989427 +0000 UTC m=+0.165845019 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:25:41 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 03 01:25:41 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 03 01:25:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:42 compute-0 sudo[242451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyiukjsrojhleplkjwwybbrznspsijeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725140.3488545-277-74429857172833/AnsiballZ_systemd.py'
Dec 03 01:25:42 compute-0 sudo[242451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:42 compute-0 python3.9[242453]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:25:42 compute-0 sudo[242451]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:43 compute-0 ceph-mon[192821]: 10.19 scrub starts
Dec 03 01:25:43 compute-0 ceph-mon[192821]: 10.19 scrub ok
Dec 03 01:25:43 compute-0 ceph-mon[192821]: pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:43 compute-0 sshd-session[236804]: Connection closed by 192.168.122.30 port 54638
Dec 03 01:25:43 compute-0 sshd-session[236801]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:25:43 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 03 01:25:43 compute-0 systemd[1]: session-44.scope: Consumed 38.797s CPU time.
Dec 03 01:25:43 compute-0 systemd-logind[800]: Session 44 logged out. Waiting for processes to exit.
Dec 03 01:25:43 compute-0 systemd-logind[800]: Removed session 44.
Dec 03 01:25:43 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 03 01:25:43 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 03 01:25:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:44 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 03 01:25:44 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 03 01:25:45 compute-0 ceph-mon[192821]: 10.12 scrub starts
Dec 03 01:25:45 compute-0 ceph-mon[192821]: 10.12 scrub ok
Dec 03 01:25:45 compute-0 ceph-mon[192821]: pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 03 01:25:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 03 01:25:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:46 compute-0 ceph-mon[192821]: 10.1a scrub starts
Dec 03 01:25:46 compute-0 ceph-mon[192821]: 10.1a scrub ok
Dec 03 01:25:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 03 01:25:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 03 01:25:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:47 compute-0 ceph-mon[192821]: 10.13 scrub starts
Dec 03 01:25:47 compute-0 ceph-mon[192821]: 10.13 scrub ok
Dec 03 01:25:47 compute-0 ceph-mon[192821]: pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 03 01:25:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 03 01:25:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:48 compute-0 ceph-mon[192821]: 10.6 scrub starts
Dec 03 01:25:48 compute-0 ceph-mon[192821]: 10.6 scrub ok
Dec 03 01:25:48 compute-0 sshd-session[242481]: Accepted publickey for zuul from 192.168.122.30 port 34438 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:25:48 compute-0 systemd-logind[800]: New session 45 of user zuul.
Dec 03 01:25:49 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 03 01:25:49 compute-0 sshd-session[242481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:25:49 compute-0 podman[242483]: 2025-12-03 01:25:49.113286864 +0000 UTC m=+0.137998840 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., vcs-type=git, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 01:25:49 compute-0 ceph-mon[192821]: 10.11 scrub starts
Dec 03 01:25:49 compute-0 ceph-mon[192821]: 10.11 scrub ok
Dec 03 01:25:49 compute-0 ceph-mon[192821]: pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:50 compute-0 sudo[242656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yijfznjifpqridxqjyhpmwklhmbucrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725149.1822944-22-273117723286097/AnsiballZ_file.py'
Dec 03 01:25:50 compute-0 sudo[242656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:50 compute-0 python3.9[242658]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:25:50 compute-0 sudo[242656]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:50 compute-0 sshd-session[242358]: Invalid user david from 193.32.162.157 port 54696
Dec 03 01:25:51 compute-0 sudo[242808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnkxeraeabpmgafzmtapfsekbczqahsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725150.6639352-34-96925933480296/AnsiballZ_stat.py'
Dec 03 01:25:51 compute-0 sudo[242808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:51 compute-0 sshd-session[242628]: Invalid user myuser from 103.146.202.174 port 43792
Dec 03 01:25:51 compute-0 sshd-session[242602]: Invalid user kyt from 173.249.50.59 port 43368
Dec 03 01:25:51 compute-0 sshd-session[242602]: Received disconnect from 173.249.50.59 port 43368:11: Bye Bye [preauth]
Dec 03 01:25:51 compute-0 sshd-session[242602]: Disconnected from invalid user kyt 173.249.50.59 port 43368 [preauth]
Dec 03 01:25:51 compute-0 python3.9[242810]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:25:51 compute-0 sshd-session[242628]: Received disconnect from 103.146.202.174 port 43792:11: Bye Bye [preauth]
Dec 03 01:25:51 compute-0 sshd-session[242628]: Disconnected from invalid user myuser 103.146.202.174 port 43792 [preauth]
Dec 03 01:25:51 compute-0 sudo[242808]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:51 compute-0 ceph-mon[192821]: pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:51 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 03 01:25:52 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 03 01:25:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:52 compute-0 sudo[242886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aikswstyovdjblaxyorvbvkndshjfaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725150.6639352-34-96925933480296/AnsiballZ_file.py'
Dec 03 01:25:52 compute-0 sudo[242886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:25:52 compute-0 podman[242888]: 2025-12-03 01:25:52.322910236 +0000 UTC m=+0.137872926 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:25:52 compute-0 python3.9[242889]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:25:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:52 compute-0 sudo[242886]: pam_unix(sudo:session): session closed for user root
Dec 03 01:25:52 compute-0 sshd-session[242493]: Connection closed by 192.168.122.30 port 34438
Dec 03 01:25:52 compute-0 sshd-session[242481]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:25:52 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 03 01:25:52 compute-0 systemd[1]: session-45.scope: Consumed 2.923s CPU time.
Dec 03 01:25:52 compute-0 systemd-logind[800]: Session 45 logged out. Waiting for processes to exit.
Dec 03 01:25:52 compute-0 systemd-logind[800]: Removed session 45.
Dec 03 01:25:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Dec 03 01:25:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Dec 03 01:25:53 compute-0 sshd-session[242358]: Connection closed by invalid user david 193.32.162.157 port 54696 [preauth]
Dec 03 01:25:53 compute-0 ceph-mon[192821]: 10.f scrub starts
Dec 03 01:25:53 compute-0 ceph-mon[192821]: 10.f scrub ok
Dec 03 01:25:53 compute-0 ceph-mon[192821]: pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 03 01:25:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 03 01:25:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:54 compute-0 ceph-mon[192821]: 10.2 deep-scrub starts
Dec 03 01:25:54 compute-0 ceph-mon[192821]: 10.2 deep-scrub ok
Dec 03 01:25:55 compute-0 ceph-mon[192821]: 10.14 scrub starts
Dec 03 01:25:55 compute-0 ceph-mon[192821]: 10.14 scrub ok
Dec 03 01:25:55 compute-0 ceph-mon[192821]: pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:25:57 compute-0 ceph-mon[192821]: pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:57 compute-0 sshd-session[242938]: Accepted publickey for zuul from 192.168.122.30 port 34446 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:25:58 compute-0 systemd-logind[800]: New session 46 of user zuul.
Dec 03 01:25:58 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 03 01:25:58 compute-0 sshd-session[242938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:25:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:59 compute-0 python3.9[243091]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:25:59 compute-0 ceph-mon[192821]: pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:25:59 compute-0 podman[158098]: time="2025-12-03T01:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:25:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:25:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6833 "" "Go-http-client/1.1"
Dec 03 01:26:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:00 compute-0 sudo[243245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzrfikfvdsdiiionfqapeculajitfxml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725160.1071725-33-168055438167922/AnsiballZ_file.py'
Dec 03 01:26:00 compute-0 sudo[243245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:01 compute-0 python3.9[243247]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:01 compute-0 sudo[243245]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:01 compute-0 sshd-session[243248]: Invalid user userroot from 146.190.144.138 port 47840
Dec 03 01:26:01 compute-0 openstack_network_exporter[160250]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:26:01 compute-0 openstack_network_exporter[160250]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:26:01 compute-0 openstack_network_exporter[160250]: ERROR   01:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:26:01 compute-0 openstack_network_exporter[160250]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:26:01 compute-0 openstack_network_exporter[160250]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:26:01 compute-0 sshd-session[243248]: Received disconnect from 146.190.144.138 port 47840:11: Bye Bye [preauth]
Dec 03 01:26:01 compute-0 sshd-session[243248]: Disconnected from invalid user userroot 146.190.144.138 port 47840 [preauth]
Dec 03 01:26:01 compute-0 ceph-mon[192821]: pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:02 compute-0 sudo[243422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwefqpdbjosyptxndbkjgllhysggwmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725161.4441288-41-279612000691307/AnsiballZ_stat.py'
Dec 03 01:26:02 compute-0 sudo[243422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:02 compute-0 python3.9[243424]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:02 compute-0 sudo[243422]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:03 compute-0 sudo[243500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwqjnselhpngfuuwznaswpymvmnjrjln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725161.4441288-41-279612000691307/AnsiballZ_file.py'
Dec 03 01:26:03 compute-0 sudo[243500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:03 compute-0 python3.9[243502]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ysh_p92e recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:03 compute-0 sudo[243500]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:03 compute-0 ceph-mon[192821]: pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:04 compute-0 sudo[243652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkxoijrbvgacwjzytdltxxgnecjzxuvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725163.706662-61-210066079582825/AnsiballZ_stat.py'
Dec 03 01:26:04 compute-0 sudo[243652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:04 compute-0 python3.9[243654]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:04 compute-0 sudo[243652]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:04 compute-0 sshd-session[242936]: Connection closed by authenticating user root 193.32.162.157 port 59848 [preauth]
Dec 03 01:26:05 compute-0 sudo[243731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeaafgjaxcllkuyzumdqkvlfrsdrearm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725163.706662-61-210066079582825/AnsiballZ_file.py'
Dec 03 01:26:05 compute-0 sudo[243731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:05 compute-0 python3.9[243733]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.8fa2ysim recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:05 compute-0 sudo[243731]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:05 compute-0 ceph-mon[192821]: pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:06 compute-0 sudo[243883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecvhpvedcbirmtpmburpazthrmldeemp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725165.6028302-74-45154003253324/AnsiballZ_file.py'
Dec 03 01:26:06 compute-0 sudo[243883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:06 compute-0 python3.9[243885]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:26:06 compute-0 sudo[243883]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:07 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 03 01:26:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:07 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 03 01:26:07 compute-0 sudo[244035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shocgymnnepsysrfdeydukvchdklipfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725166.6900508-82-237209838798956/AnsiballZ_stat.py'
Dec 03 01:26:07 compute-0 sudo[244035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:07 compute-0 python3.9[244038]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:07 compute-0 sudo[244035]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:07 compute-0 ceph-mon[192821]: pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:08 compute-0 sudo[244114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbljppcwpvvedhgdoavrztzbrfdronqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725166.6900508-82-237209838798956/AnsiballZ_file.py'
Dec 03 01:26:08 compute-0 sudo[244114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 03 01:26:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 03 01:26:08 compute-0 python3.9[244116]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:26:08 compute-0 sudo[244114]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:08 compute-0 ceph-mon[192821]: 9.15 scrub starts
Dec 03 01:26:08 compute-0 ceph-mon[192821]: 9.15 scrub ok
Dec 03 01:26:09 compute-0 sshd-session[244216]: Invalid user frontend from 34.66.72.251 port 54924
Dec 03 01:26:09 compute-0 rsyslogd[188612]: imjournal: 1750 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 03 01:26:09 compute-0 sshd-session[244216]: Received disconnect from 34.66.72.251 port 54924:11: Bye Bye [preauth]
Dec 03 01:26:09 compute-0 sshd-session[244216]: Disconnected from invalid user frontend 34.66.72.251 port 54924 [preauth]
Dec 03 01:26:09 compute-0 sudo[244325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obypstvghalawgjbhnvmtsozstulyafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725168.5778954-82-165154101827534/AnsiballZ_stat.py'
Dec 03 01:26:09 compute-0 podman[244242]: 2025-12-03 01:26:09.163100619 +0000 UTC m=+0.113419923 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:26:09 compute-0 sudo[244325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:09 compute-0 podman[244243]: 2025-12-03 01:26:09.16888378 +0000 UTC m=+0.115240173 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 01:26:09 compute-0 podman[244244]: 2025-12-03 01:26:09.179400864 +0000 UTC m=+0.118279718 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:26:09 compute-0 podman[244245]: 2025-12-03 01:26:09.204146106 +0000 UTC m=+0.140884910 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:26:09 compute-0 python3.9[244348]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:09 compute-0 sudo[244325]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:09 compute-0 sudo[244431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsvuhnmxledkwmomdqmrgdhbmfuvyejk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725168.5778954-82-165154101827534/AnsiballZ_file.py'
Dec 03 01:26:09 compute-0 sudo[244431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:09 compute-0 ceph-mon[192821]: 9.1f scrub starts
Dec 03 01:26:09 compute-0 ceph-mon[192821]: 9.1f scrub ok
Dec 03 01:26:09 compute-0 ceph-mon[192821]: pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:10 compute-0 python3.9[244433]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:26:10 compute-0 sudo[244431]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:10 compute-0 sudo[244583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qurhcnmjhyqiuaobxkmotfovektjynui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725170.3653102-105-77554594129322/AnsiballZ_file.py'
Dec 03 01:26:10 compute-0 sudo[244583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:11 compute-0 python3.9[244585]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:11 compute-0 sudo[244583]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:11 compute-0 ceph-mon[192821]: pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:12 compute-0 sudo[244735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgozwkpdvxdxtatyvxhnocsofmnctkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725171.4405475-113-88030526375242/AnsiballZ_stat.py'
Dec 03 01:26:12 compute-0 sudo[244735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:12 compute-0 podman[244737]: 2025-12-03 01:26:12.14667532 +0000 UTC m=+0.113591927 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec 03 01:26:12 compute-0 python3.9[244738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:12 compute-0 sudo[244735]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:12 compute-0 sudo[244832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnozczrdnsxtqjyelfidgumdwfodpbpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725171.4405475-113-88030526375242/AnsiballZ_file.py'
Dec 03 01:26:12 compute-0 sudo[244832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:12 compute-0 python3.9[244834]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:13 compute-0 sudo[244832]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:13 compute-0 sudo[244888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:13 compute-0 sudo[244888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:13 compute-0 sudo[244888]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:13 compute-0 sudo[244936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:26:13 compute-0 sudo[244936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:13 compute-0 sudo[244936]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:13 compute-0 sudo[244984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:13 compute-0 sudo[244984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:13 compute-0 sudo[244984]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:13 compute-0 ceph-mon[192821]: pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:13 compute-0 sudo[245033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:26:13 compute-0 sudo[245033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:13 compute-0 sudo[245084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umevjtblxlirjocfdunoygpcrhvvaskj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725173.352197-125-57738225636436/AnsiballZ_stat.py'
Dec 03 01:26:13 compute-0 sudo[245084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:14 compute-0 python3.9[245086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:14 compute-0 sudo[245084]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:14 compute-0 sudo[245033]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:26:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a6aff1a-18dd-4e25-a770-d2a0162934d4 does not exist
Dec 03 01:26:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8fb424ba-0dd5-4144-9a48-14c3d3e2f647 does not exist
Dec 03 01:26:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 20a07829-5a21-4bd3-8243-bf40b0f87793 does not exist
Dec 03 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:26:14 compute-0 sudo[245198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaexbfttuogkhrbzhcrxulrcpmoevbpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725173.352197-125-57738225636436/AnsiballZ_file.py'
Dec 03 01:26:14 compute-0 sudo[245198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:14 compute-0 sudo[245190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:14 compute-0 sudo[245190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:14 compute-0 sudo[245190]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:26:14 compute-0 sudo[245220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:26:14 compute-0 sudo[245220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:14 compute-0 python3.9[245213]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:14 compute-0 sudo[245220]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:14 compute-0 sudo[245198]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:15 compute-0 sudo[245245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:15 compute-0 sudo[245245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:15 compute-0 sudo[245245]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:15 compute-0 sudo[245293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:26:15 compute-0 sudo[245293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
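The sudo COMMAND above shows the pattern cephadm uses throughout this section: a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ is run with --image and --timeout, and everything after "--" is passed through to ceph-volume inside a one-shot container (the vigorous_bartik/dreamy_merkle containers that follow). A sketch of how that argv decomposes, with the fsid, image digest, and LV paths copied from the logged command; the function itself is only an illustration, not cephadm code:

# Values copied verbatim from the logged command line.
FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
LVS = [
    "/dev/ceph_vg0/ceph_lv0",
    "/dev/ceph_vg1/ceph_lv1",
    "/dev/ceph_vg2/ceph_lv2",
]

def lvm_batch_argv(cephadm_path: str) -> list[str]:
    # cephadm's own options come first; the "--" separator hands the
    # remainder ("lvm batch ...") to ceph-volume inside the container.
    return [
        "/bin/python3", cephadm_path,
        "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
        "--image", IMAGE,
        "--timeout", "895",
        "ceph-volume", "--fsid", FSID, "--config-json", "-",
        "--", "lvm", "batch", "--no-auto", *LVS, "--yes", "--no-systemd",
    ]

Joining the returned list with spaces reproduces the COMMAND string recorded by sudo above.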
Dec 03 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.758367817 +0000 UTC m=+0.087783176 container create 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.722897625 +0000 UTC m=+0.052313074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:26:15 compute-0 systemd[1]: Started libpod-conmon-7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846.scope.
Dec 03 01:26:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:26:15 compute-0 ceph-mon[192821]: pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.912018003 +0000 UTC m=+0.241433402 container init 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.929137772 +0000 UTC m=+0.258553151 container start 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.936067586 +0000 UTC m=+0.265483015 container attach 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:26:15 compute-0 vigorous_bartik[245443]: 167 167
Dec 03 01:26:15 compute-0 systemd[1]: libpod-7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846.scope: Deactivated successfully.
Dec 03 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.942513186 +0000 UTC m=+0.271928575 container died 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1638cfae26e73462d871b7c5fab0252684ffedc2e77d4acf81eba37ff3cf1c6c-merged.mount: Deactivated successfully.
Dec 03 01:26:16 compute-0 podman[245408]: 2025-12-03 01:26:16.042691597 +0000 UTC m=+0.372106986 container remove 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:26:16 compute-0 systemd[1]: libpod-conmon-7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846.scope: Deactivated successfully.
Dec 03 01:26:16 compute-0 sudo[245520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyteyeqrjbaeryvxbcsielcngcjcpcbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725175.2672696-137-32606195692336/AnsiballZ_systemd.py'
Dec 03 01:26:16 compute-0 sudo[245520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.325355922 +0000 UTC m=+0.081025527 container create d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.293667096 +0000 UTC m=+0.049336761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:26:16 compute-0 systemd[1]: Started libpod-conmon-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope.
Dec 03 01:26:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:16 compute-0 sshd-session[245419]: Invalid user usuario2 from 80.253.31.232 port 43386
Dec 03 01:26:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.497989979 +0000 UTC m=+0.253659564 container init d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec 03 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.522446523 +0000 UTC m=+0.278116128 container start d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.530144889 +0000 UTC m=+0.285814494 container attach d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:26:16 compute-0 sshd-session[245419]: Received disconnect from 80.253.31.232 port 43386:11: Bye Bye [preauth]
Dec 03 01:26:16 compute-0 sshd-session[245419]: Disconnected from invalid user usuario2 80.253.31.232 port 43386 [preauth]
Dec 03 01:26:16 compute-0 sshd-session[243703]: Connection closed by authenticating user root 193.32.162.157 port 34792 [preauth]
Dec 03 01:26:16 compute-0 python3.9[245531]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:26:16 compute-0 systemd[1]: Reloading.
Dec 03 01:26:16 compute-0 systemd-rc-local-generator[245571]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:26:16 compute-0 systemd-sysv-generator[245576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:26:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:17 compute-0 sudo[245520]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:17 compute-0 dreamy_merkle[245541]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:26:17 compute-0 dreamy_merkle[245541]: --> relative data size: 1.0
Dec 03 01:26:17 compute-0 dreamy_merkle[245541]: --> All data devices are unavailable
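The "All data devices are unavailable" line is ceph-volume declining to re-prepare any of the three LVs passed to lvm batch: each already carries ceph.* LVM tags from an earlier prepare, which the lvm list report further below confirms (osd_id 0-2 are all assigned). A minimal sketch of how those comma-separated lv_tags strings decode into key/value pairs; the helper name is ours, and the example tag string is an abridged copy of the ceph_lv0 entry logged below:

def parse_lv_tags(lv_tags: str) -> dict[str, str]:
    """Split a ceph-volume lv_tags string ("k=v,k=v,...") into a dict.

    Values never contain commas in these logs, so a plain split suffices
    for this sketch.
    """
    tags = {}
    for item in lv_tags.split(","):
        key, _, value = item.partition("=")
        tags[key] = value
    return tags

# Abridged from the ceph_lv0 entry in the lvm list output below.
example = (
    "ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
    "ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,"
    "ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,"
    "ceph.osd_id=0,ceph.type=block"
)
tags = parse_lv_tags(example)
assert tags["ceph.osd_id"] == "0"  # an LV with an osd_id is already consumed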
Dec 03 01:26:17 compute-0 systemd[1]: libpod-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope: Deactivated successfully.
Dec 03 01:26:17 compute-0 systemd[1]: libpod-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope: Consumed 1.243s CPU time.
Dec 03 01:26:17 compute-0 podman[245523]: 2025-12-03 01:26:17.834205945 +0000 UTC m=+1.589875520 container died d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df-merged.mount: Deactivated successfully.
Dec 03 01:26:17 compute-0 ceph-mon[192821]: pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:17 compute-0 podman[245523]: 2025-12-03 01:26:17.925076236 +0000 UTC m=+1.680745841 container remove d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:26:17 compute-0 sudo[245771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqqgolghbwcjndrblbvehcphdbgswyop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725177.4195764-145-232886993565615/AnsiballZ_stat.py'
Dec 03 01:26:17 compute-0 sudo[245771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:17 compute-0 systemd[1]: libpod-conmon-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope: Deactivated successfully.
Dec 03 01:26:17 compute-0 sudo[245293]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:18 compute-0 sudo[245774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:18 compute-0 sudo[245774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:18 compute-0 sudo[245774]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:18 compute-0 python3.9[245773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:18 compute-0 sudo[245771]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:18 compute-0 sudo[245800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:26:18 compute-0 sudo[245800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:18 compute-0 sudo[245800]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:18 compute-0 sudo[245835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:18 compute-0 sudo[245835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:18 compute-0 sudo[245835]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:18 compute-0 sudo[245887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:26:18 compute-0 sudo[245887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:18 compute-0 sudo[245949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuevxncqprzonkrdjtbyuqbsjlmfmzoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725177.4195764-145-232886993565615/AnsiballZ_file.py'
Dec 03 01:26:18 compute-0 sudo[245949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:18 compute-0 python3.9[245951]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:18 compute-0 sudo[245949]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.146204413 +0000 UTC m=+0.092487757 container create aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.113119638 +0000 UTC m=+0.059403032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:26:19 compute-0 systemd[1]: Started libpod-conmon-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope.
Dec 03 01:26:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.314992003 +0000 UTC m=+0.261275387 container init aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.328780608 +0000 UTC m=+0.275063942 container start aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:26:19 compute-0 focused_ramanujan[246067]: 167 167
Dec 03 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.334705594 +0000 UTC m=+0.280988978 container attach aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:26:19 compute-0 systemd[1]: libpod-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope: Deactivated successfully.
Dec 03 01:26:19 compute-0 conmon[246067]: conmon aa348ac29a4fbf3ffc77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope/container/memory.events
Dec 03 01:26:19 compute-0 podman[246051]: 2025-12-03 01:26:19.382025347 +0000 UTC m=+0.159404448 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=edpm, release-0.7.12=, vcs-type=git, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 01:26:19 compute-0 podman[246105]: 2025-12-03 01:26:19.408629091 +0000 UTC m=+0.052787357 container died aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b7f30fbd89e55700eb5ee25c17253f7c90b4f8988a9b1f75cd830b1f176910b-merged.mount: Deactivated successfully.
Dec 03 01:26:19 compute-0 podman[246105]: 2025-12-03 01:26:19.527367732 +0000 UTC m=+0.171525978 container remove aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:26:19 compute-0 systemd[1]: libpod-conmon-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope: Deactivated successfully.
Dec 03 01:26:19 compute-0 sudo[246209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qboveowfjyedxammjyngbqolshdjicdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725179.1891415-157-203619410686444/AnsiballZ_stat.py'
Dec 03 01:26:19 compute-0 sudo[246209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:19 compute-0 podman[246197]: 2025-12-03 01:26:19.826051894 +0000 UTC m=+0.077847268 container create ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:26:19 compute-0 podman[246197]: 2025-12-03 01:26:19.795013796 +0000 UTC m=+0.046809260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:26:19 compute-0 systemd[1]: Started libpod-conmon-ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155.scope.
Dec 03 01:26:19 compute-0 ceph-mon[192821]: pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:19 compute-0 python3.9[246217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.010691047 +0000 UTC m=+0.262486511 container init ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.032169038 +0000 UTC m=+0.283964442 container start ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.039787391 +0000 UTC m=+0.291582855 container attach ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:26:20 compute-0 sudo[246209]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:20 compute-0 sudo[246302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khbfovxjkbtaaksweuecffvxazrlkvcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725179.1891415-157-203619410686444/AnsiballZ_file.py'
Dec 03 01:26:20 compute-0 sudo[246302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:20 compute-0 python3.9[246304]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:20 compute-0 sudo[246302]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]: {
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:     "0": [
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:         {
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "devices": [
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "/dev/loop3"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             ],
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_name": "ceph_lv0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_size": "21470642176",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "name": "ceph_lv0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "tags": {
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cluster_name": "ceph",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.crush_device_class": "",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.encrypted": "0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osd_id": "0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:26:20 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.type": "block",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.vdo": "0"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             },
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "type": "block",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "vg_name": "ceph_vg0"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:         }
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:     ],
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:     "1": [
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:         {
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "devices": [
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "/dev/loop4"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             ],
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_name": "ceph_lv1",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_size": "21470642176",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "name": "ceph_lv1",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "tags": {
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cluster_name": "ceph",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.crush_device_class": "",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.encrypted": "0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osd_id": "1",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.type": "block",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.vdo": "0"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             },
Dec 03 01:26:20 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "type": "block",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "vg_name": "ceph_vg1"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:         }
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:     ],
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:     "2": [
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:         {
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "devices": [
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "/dev/loop5"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             ],
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_name": "ceph_lv2",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_size": "21470642176",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "name": "ceph_lv2",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "tags": {
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.cluster_name": "ceph",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.crush_device_class": "",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.encrypted": "0",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osd_id": "2",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.type": "block",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:                 "ceph.vdo": "0"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             },
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "type": "block",
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:             "vg_name": "ceph_vg2"
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:         }
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]:     ]
Dec 03 01:26:20 compute-0 thirsty_taussig[246222]: }
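Reassembled from the per-line thirsty_taussig output above, the JSON report maps each OSD id to exactly one logical volume with its backing loop device and fsid. A short sketch that summarizes such a report, assuming the reassembled JSON has been saved to lvm_list.json (a filename of our choosing):

import json

# lvm_list.json holds the `ceph-volume lvm list --format json` output
# reassembled from the container log lines above (the path is illustrative).
with open("lvm_list.json") as fh:
    report = json.load(fh)

# Top-level keys are OSD ids as strings ("0", "1", "2").
for osd_id, volumes in sorted(report.items(), key=lambda kv: int(kv[0])):
    for vol in volumes:
        print(
            f"osd.{osd_id}: {vol['lv_path']} "
            f"(osd_fsid={vol['tags']['ceph.osd_fsid']}, "
            f"devices={','.join(vol['devices'])})"
        )

# Against the report above this prints, e.g.:
# osd.0: /dev/ceph_vg0/ceph_lv0 (osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c, devices=/dev/loop3)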
Dec 03 01:26:20 compute-0 systemd[1]: libpod-ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155.scope: Deactivated successfully.
Dec 03 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.928736379 +0000 UTC m=+1.180531813 container died ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 01:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f-merged.mount: Deactivated successfully.
Dec 03 01:26:21 compute-0 podman[246197]: 2025-12-03 01:26:21.022367468 +0000 UTC m=+1.274162882 container remove ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:26:21 compute-0 systemd[1]: libpod-conmon-ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155.scope: Deactivated successfully.
Dec 03 01:26:21 compute-0 sudo[245887]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:21 compute-0 sudo[246372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:21 compute-0 sudo[246372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:21 compute-0 sudo[246372]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:21 compute-0 sudo[246424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:26:21 compute-0 sudo[246424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:21 compute-0 sudo[246424]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:21 compute-0 sudo[246471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:21 compute-0 sudo[246471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:21 compute-0 sudo[246471]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:21 compute-0 sudo[246567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhdsengrqgdhwxstwcffvewcazmrkia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725181.0552514-169-77902457179374/AnsiballZ_systemd.py'
Dec 03 01:26:21 compute-0 sudo[246522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:26:21 compute-0 sudo[246567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:21 compute-0 sudo[246522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:21 compute-0 ceph-mon[192821]: pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:21 compute-0 python3.9[246570]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:26:21 compute-0 systemd[1]: Reloading.
Dec 03 01:26:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:22 compute-0 systemd-rc-local-generator[246641]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:26:22 compute-0 systemd-sysv-generator[246646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.159017142 +0000 UTC m=+0.077307313 container create f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.127206902 +0000 UTC m=+0.045497113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:26:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:22 compute-0 systemd[1]: Started libpod-conmon-f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a.scope.
Dec 03 01:26:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.497450846 +0000 UTC m=+0.415741097 container init f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.51441432 +0000 UTC m=+0.432704521 container start f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:26:22 compute-0 systemd[1]: Starting Create netns directory...
Dec 03 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.52047933 +0000 UTC m=+0.438769531 container attach f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:26:22 compute-0 xenodochial_cerf[246664]: 167 167
Dec 03 01:26:22 compute-0 systemd[1]: libpod-f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a.scope: Deactivated successfully.
Dec 03 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.52800125 +0000 UTC m=+0.446291421 container died f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:26:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 03 01:26:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 03 01:26:22 compute-0 systemd[1]: Finished Create netns directory.
Dec 03 01:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1fc553474b9911d72b93ceb0797c3f30ecebf8b3bb67976fbb715e186bf4cd0-merged.mount: Deactivated successfully.
Dec 03 01:26:22 compute-0 podman[246663]: 2025-12-03 01:26:22.59093695 +0000 UTC m=+0.143508284 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.597752571 +0000 UTC m=+0.516042742 container remove f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:26:22 compute-0 systemd[1]: libpod-conmon-f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a.scope: Deactivated successfully.
Dec 03 01:26:22 compute-0 sudo[246567]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:22 compute-0 podman[246735]: 2025-12-03 01:26:22.832447594 +0000 UTC m=+0.078145386 container create 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:26:22 compute-0 podman[246735]: 2025-12-03 01:26:22.794066411 +0000 UTC m=+0.039764213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:26:22 compute-0 systemd[1]: Started libpod-conmon-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope.
Dec 03 01:26:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:26:23 compute-0 podman[246735]: 2025-12-03 01:26:23.021400518 +0000 UTC m=+0.267098350 container init 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:26:23 compute-0 podman[246735]: 2025-12-03 01:26:23.038487356 +0000 UTC m=+0.284185148 container start 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:26:23 compute-0 podman[246735]: 2025-12-03 01:26:23.045254535 +0000 UTC m=+0.290952307 container attach 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:26:23 compute-0 python3.9[246880]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:26:23 compute-0 ceph-mon[192821]: pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:23 compute-0 network[246908]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:26:23 compute-0 network[246909]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:26:23 compute-0 network[246910]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:26:24 compute-0 quirky_wilson[246773]: {
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "osd_id": 2,
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "type": "bluestore"
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:     },
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "osd_id": 1,
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "type": "bluestore"
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:     },
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "osd_id": 0,
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:         "type": "bluestore"
Dec 03 01:26:24 compute-0 quirky_wilson[246773]:     }
Dec 03 01:26:24 compute-0 quirky_wilson[246773]: }
Dec 03 01:26:24 compute-0 podman[246933]: 2025-12-03 01:26:24.389752092 +0000 UTC m=+0.054380411 container died 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:26:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:24 compute-0 systemd[1]: libpod-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope: Deactivated successfully.
Dec 03 01:26:24 compute-0 systemd[1]: libpod-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope: Consumed 1.257s CPU time.
Dec 03 01:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03-merged.mount: Deactivated successfully.
Dec 03 01:26:24 compute-0 podman[246933]: 2025-12-03 01:26:24.960251156 +0000 UTC m=+0.624879405 container remove 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:26:24 compute-0 systemd[1]: libpod-conmon-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope: Deactivated successfully.
Dec 03 01:26:25 compute-0 sudo[246522]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:26:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:26:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:26:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:26:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e9c18877-f89c-4991-927d-a5d177ee540d does not exist
Dec 03 01:26:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e4a8a479-2831-48a0-b3af-0f2bc9bb80fe does not exist
Dec 03 01:26:25 compute-0 sudo[246953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:26:25 compute-0 sudo[246953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:25 compute-0 sudo[246953]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:25 compute-0 sudo[246981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:26:25 compute-0 sudo[246981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:26:25 compute-0 sudo[246981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:26 compute-0 ceph-mon[192821]: pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:26:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:26:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:28 compute-0 ceph-mon[192821]: pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:26:28
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:28 compute-0 sshd-session[188710]: Received disconnect from 38.102.83.18 port 57172:11: disconnected by user
Dec 03 01:26:28 compute-0 sshd-session[188710]: Disconnected from user zuul 38.102.83.18 port 57172
Dec 03 01:26:28 compute-0 sshd-session[188707]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:26:28 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:26:28 compute-0 systemd[1]: session-24.scope: Consumed 2min 52.148s CPU time.
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:26:28 compute-0 systemd-logind[800]: Session 24 logged out. Waiting for processes to exit.
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:26:28 compute-0 systemd-logind[800]: Removed session 24.
Dec 03 01:26:28 compute-0 sshd-session[245548]: Connection closed by authenticating user root 193.32.162.157 port 55182 [preauth]
Dec 03 01:26:29 compute-0 ceph-mon[192821]: pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:29 compute-0 podman[158098]: time="2025-12-03T01:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6840 "" "Go-http-client/1.1"
Dec 03 01:26:30 compute-0 sudo[247262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uigdtaocwgyjycjikoydpgbhphrtpbqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725189.6030679-195-96314623802327/AnsiballZ_stat.py'
Dec 03 01:26:30 compute-0 sudo[247262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:30 compute-0 python3.9[247264]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:30 compute-0 sudo[247262]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:30 compute-0 sudo[247341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dojbqcozfwrjgcgnxrgyirrltsuqxggd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725189.6030679-195-96314623802327/AnsiballZ_file.py'
Dec 03 01:26:30 compute-0 sudo[247341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:31 compute-0 python3.9[247343]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:31 compute-0 sudo[247341]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:26:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:26:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:26:31 compute-0 ceph-mon[192821]: pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:32 compute-0 sudo[247493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sarkqtksukdqoinaphygafwyskrhhbfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725191.5169106-208-116584802172617/AnsiballZ_file.py'
Dec 03 01:26:32 compute-0 sudo[247493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:32 compute-0 python3.9[247495]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:32 compute-0 sudo[247493]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:33 compute-0 sudo[247645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyphnwcjraewrhrvklmnlpcvylfbwfvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725192.6984835-216-29823792356429/AnsiballZ_stat.py'
Dec 03 01:26:33 compute-0 sudo[247645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:33 compute-0 ceph-mon[192821]: pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:33 compute-0 python3.9[247647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:33 compute-0 sudo[247645]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:34 compute-0 sudo[247723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzflvexeoghdbhuwzvtywujkncvzrae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725192.6984835-216-29823792356429/AnsiballZ_file.py'
Dec 03 01:26:34 compute-0 sudo[247723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:34 compute-0 python3.9[247725]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:34 compute-0 sudo[247723]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:35 compute-0 ceph-mon[192821]: pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:35 compute-0 sudo[247875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulqzlhxoxblunflwglctxnuwnqhksng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725194.720131-231-61798053031499/AnsiballZ_timezone.py'
Dec 03 01:26:35 compute-0 sudo[247875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:35 compute-0 python3.9[247877]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 03 01:26:35 compute-0 systemd[1]: Starting Time & Date Service...
Dec 03 01:26:35 compute-0 systemd[1]: Started Time & Date Service.
Dec 03 01:26:36 compute-0 sudo[247875]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:37 compute-0 sudo[248031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ralderjhfboyynztadwdvbadoagmfgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725196.4489136-240-7978916285890/AnsiballZ_file.py'
Dec 03 01:26:37 compute-0 sudo[248031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:37 compute-0 python3.9[248033]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:37 compute-0 sudo[248031]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:37 compute-0 ceph-mon[192821]: pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:26:38 compute-0 sudo[248183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mojvedyaleikneiokiyfvbafpgmujpmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725197.5869017-248-38751512411683/AnsiballZ_stat.py'
Dec 03 01:26:38 compute-0 sudo[248183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:38 compute-0 python3.9[248185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:38 compute-0 sudo[248183]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:38 compute-0 sudo[248261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbndlnwcdvpmdvseinwojkkrokmcdyyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725197.5869017-248-38751512411683/AnsiballZ_file.py'
Dec 03 01:26:38 compute-0 sudo[248261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:39 compute-0 python3.9[248263]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:39 compute-0 sudo[248261]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:39 compute-0 ceph-mon[192821]: pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:39 compute-0 sudo[248472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcmdjbfswhnlkvrtkpntwfbjfocpxcbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725199.3347073-260-125521702917581/AnsiballZ_stat.py'
Dec 03 01:26:39 compute-0 podman[248386]: 2025-12-03 01:26:39.86415654 +0000 UTC m=+0.102493997 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:26:39 compute-0 sudo[248472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:39 compute-0 podman[248388]: 2025-12-03 01:26:39.871953208 +0000 UTC m=+0.112179472 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, vcs-type=git)
Dec 03 01:26:39 compute-0 podman[248389]: 2025-12-03 01:26:39.885866454 +0000 UTC m=+0.123589772 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 01:26:39 compute-0 podman[248390]: 2025-12-03 01:26:39.905848136 +0000 UTC m=+0.134872268 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 03 01:26:40 compute-0 python3.9[248491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:40 compute-0 sudo[248472]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:40 compute-0 sshd-session[247107]: Connection closed by authenticating user root 193.32.162.157 port 35066 [preauth]
Dec 03 01:26:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:40 compute-0 sudo[248574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzjwqrkfihpwpydgwyrrhrdgxygbjcre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725199.3347073-260-125521702917581/AnsiballZ_file.py'
Dec 03 01:26:40 compute-0 sudo[248574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:40 compute-0 python3.9[248576]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.maty7npy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:40 compute-0 sudo[248574]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:41 compute-0 ceph-mon[192821]: pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:41 compute-0 sudo[248726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qualvakvsqcfctclyxdxqxkzowdzyfwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725201.0665314-272-250543449715878/AnsiballZ_stat.py'
Dec 03 01:26:41 compute-0 sudo[248726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:41 compute-0 python3.9[248728]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:41 compute-0 sudo[248726]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:42 compute-0 sudo[248820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dltnumakbjgerkczbirxgzzmxfczvwtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725201.0665314-272-250543449715878/AnsiballZ_file.py'
Dec 03 01:26:42 compute-0 sudo[248820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:42 compute-0 podman[248778]: 2025-12-03 01:26:42.384174099 +0000 UTC m=+0.144137541 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:26:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:42 compute-0 python3.9[248825]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:42 compute-0 sudo[248820]: pam_unix(sudo:session): session closed for user root
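The stat/file pairs around each /etc/nftables fragment are the two halves of Ansible's copy/template flow: the controller first stats the destination to compare SHA-1 checksums, then pushes content only on mismatch and otherwise, as here, just enforces owner/group/mode. A sketch of the same check run ad hoc, assuming the zuul inventory names this host compute-0:

    ansible compute-0 -b -m ansible.builtin.stat \
      -a 'path=/etc/nftables/iptables.nft checksum_algorithm=sha1'
    ansible compute-0 -b -m ansible.builtin.file \
      -a 'path=/etc/nftables/iptables.nft owner=root group=root mode=0600'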
Dec 03 01:26:43 compute-0 ceph-mon[192821]: pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:43 compute-0 sudo[248976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xubpxqimiwtlowagbsgimtspmxfdbrrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725202.89025-285-81377059184470/AnsiballZ_command.py'
Dec 03 01:26:43 compute-0 sudo[248976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:43 compute-0 python3.9[248978]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:26:43 compute-0 sudo[248976]: pam_unix(sudo:session): session closed for user root
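The `nft -j list ruleset` task dumps the live ruleset as JSON so the role can diff it against the desired state. The same dump is easy to inspect on the node; python3 is already present for the Ansible modules:

    # pretty-print the first screenful of the JSON ruleset
    nft -j list ruleset | python3 -m json.tool | head -n 40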
Dec 03 01:26:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:44 compute-0 sudo[249129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdpmheodldoumvuaeyagasbxwnkbmdtw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764725204.1991098-293-111068046417187/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:26:44 compute-0 sudo[249129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:45 compute-0 python3[249131]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:26:45 compute-0 sudo[249129]: pam_unix(sudo:session): session closed for user root
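edpm_nftables_from_files is an EDPM-specific module: it aggregates the per-service rule snippets dropped under /var/lib/edpm-config/firewall into the edpm-rules.nft ruleset templated a few tasks later. The directory can be inspected directly (its contents are not shown in this log):

    # list the snippet directory the module reads
    ls -l /var/lib/edpm-config/firewall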
Dec 03 01:26:45 compute-0 ceph-mon[192821]: pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:46 compute-0 sudo[249281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyrxgtdvpnpoxfhpitzmslynlmednpop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725205.425336-301-121598179618998/AnsiballZ_stat.py'
Dec 03 01:26:46 compute-0 sudo[249281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:46 compute-0 python3.9[249283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:46 compute-0 sudo[249281]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:46 compute-0 sudo[249359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vexoipbdplqtjmazhvmtpxqjmntdsffv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725205.425336-301-121598179618998/AnsiballZ_file.py'
Dec 03 01:26:46 compute-0 sudo[249359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:46 compute-0 python3.9[249361]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:46 compute-0 sudo[249359]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:47 compute-0 ceph-mon[192821]: pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:47 compute-0 sudo[249511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxgbnsojpzopfmpqgvwmfxoxgrlvelzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725207.1772926-313-170348231870379/AnsiballZ_stat.py'
Dec 03 01:26:47 compute-0 sudo[249511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:48 compute-0 python3.9[249513]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:48 compute-0 sudo[249511]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:48 compute-0 sudo[249589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luflngagoayrxuicplpumhvsimwvsuhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725207.1772926-313-170348231870379/AnsiballZ_file.py'
Dec 03 01:26:48 compute-0 sudo[249589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:48 compute-0 python3.9[249591]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:48 compute-0 sudo[249589]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:49 compute-0 ceph-mon[192821]: pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:49 compute-0 sudo[249757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tydorjgksnfnjnaxpdxyvdlyebozrkro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725209.0893793-325-6043196869580/AnsiballZ_stat.py'
Dec 03 01:26:49 compute-0 sudo[249757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:49 compute-0 podman[249716]: 2025-12-03 01:26:49.711634192 +0000 UTC m=+0.162528595 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, distribution-scope=public)
Dec 03 01:26:49 compute-0 python3.9[249762]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:49 compute-0 sudo[249757]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:50 compute-0 sudo[249838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wydpgtlvbgynbuowcwwfkmcrgimactki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725209.0893793-325-6043196869580/AnsiballZ_file.py'
Dec 03 01:26:50 compute-0 sudo[249838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:50 compute-0 python3.9[249840]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:50 compute-0 sudo[249838]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:51 compute-0 sudo[249990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqfwvwkivqulzyqispxmgehyimkilpsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725210.9125586-337-4853573140295/AnsiballZ_stat.py'
Dec 03 01:26:51 compute-0 sudo[249990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:51 compute-0 ceph-mon[192821]: pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:51 compute-0 python3.9[249992]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:51 compute-0 sudo[249990]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:51 compute-0 sshd-session[248563]: Connection closed by authenticating user root 193.32.162.157 port 35320 [preauth]
Dec 03 01:26:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:52 compute-0 sudo[250069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrokkminqnzckiitgkqtmjbbygzpayay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725210.9125586-337-4853573140295/AnsiballZ_file.py'
Dec 03 01:26:52 compute-0 sudo[250069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:52 compute-0 python3.9[250071]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:52 compute-0 sudo[250069]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:52 compute-0 podman[250104]: 2025-12-03 01:26:52.89206594 +0000 UTC m=+0.142898934 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:26:53 compute-0 sudo[250243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlgmouvxsaenunnprrqbsjmwknvnngnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725212.7731733-349-46112203941106/AnsiballZ_stat.py'
Dec 03 01:26:53 compute-0 sudo[250243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:53 compute-0 ceph-mon[192821]: pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:53 compute-0 python3.9[250245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:26:53 compute-0 sudo[250243]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:54 compute-0 sudo[250321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dblojnlgsqmnlptujndwxtrpqaatasgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725212.7731733-349-46112203941106/AnsiballZ_file.py'
Dec 03 01:26:54 compute-0 sudo[250321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:54 compute-0 python3.9[250323]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:54 compute-0 sudo[250321]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:55 compute-0 sudo[250474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpzykgqoxepmqcwsmhitjiobucbxqczi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725214.7799745-362-278305457592423/AnsiballZ_command.py'
Dec 03 01:26:55 compute-0 sudo[250474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:55 compute-0 python3.9[250476]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:26:55 compute-0 sudo[250474]: pam_unix(sudo:session): session closed for user root
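The check task above is the safety gate for the whole firewall update: chain declarations first, then flushes, rules, and jump updates, all parsed by nft in check-only mode (-c) so nothing is committed if any fragment fails. Verbatim as a shell pipeline:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # parse only, commit nothing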
Dec 03 01:26:55 compute-0 ceph-mon[192821]: pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:56 compute-0 sudo[250631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcehjyxplvpbpfzzxqzjfvewnrxmeqfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725215.969024-370-149842149591474/AnsiballZ_blockinfile.py'
Dec 03 01:26:56 compute-0 sudo[250631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:56 compute-0 python3.9[250633]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:56 compute-0 sudo[250631]: pam_unix(sudo:session): session closed for user root
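With the blockinfile task applied (and validated through `nft -c -f %s` before the write), /etc/sysconfig/nftables.conf carries the managed include block below, so the fragments are replayed on every nftables.service start:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK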
Dec 03 01:26:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:26:57 compute-0 sshd-session[250607]: Invalid user zhangsan from 173.249.50.59 port 41596
Dec 03 01:26:57 compute-0 sshd-session[250607]: Received disconnect from 173.249.50.59 port 41596:11: Bye Bye [preauth]
Dec 03 01:26:57 compute-0 sshd-session[250607]: Disconnected from invalid user zhangsan 173.249.50.59 port 41596 [preauth]
Dec 03 01:26:57 compute-0 ceph-mon[192821]: pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:57 compute-0 sudo[250783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqlnjrmrslvlbudeoubsyxmnjgjaeufw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725217.2949667-379-85488892150326/AnsiballZ_file.py'
Dec 03 01:26:57 compute-0 sudo[250783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:58 compute-0 python3.9[250785]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:58 compute-0 sudo[250783]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:26:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:58 compute-0 sudo[250935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlzaplpcidphopdpvsrutqhlqprrfbcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725218.297448-379-105718153169192/AnsiballZ_file.py'
Dec 03 01:26:58 compute-0 sudo[250935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:26:59 compute-0 python3.9[250937]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:26:59 compute-0 sudo[250935]: pam_unix(sudo:session): session closed for user root
Dec 03 01:26:59 compute-0 ceph-mon[192821]: pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:26:59 compute-0 podman[158098]: time="2025-12-03T01:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6824 "" "Go-http-client/1.1"
Dec 03 01:27:00 compute-0 sudo[251087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtcskagkbycqcsezyuxwlmtcsntfgeei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725219.4211922-394-201782341353737/AnsiballZ_mount.py'
Dec 03 01:27:00 compute-0 sudo[251087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:00 compute-0 python3.9[251089]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 03 01:27:00 compute-0 sudo[251087]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:01 compute-0 sudo[251239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrsessjcyukaxnubnljsouwueuwfmpys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725220.7751489-394-37147248672225/AnsiballZ_mount.py'
Dec 03 01:27:01 compute-0 sudo[251239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:27:01 compute-0 python3.9[251241]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 03 01:27:01 compute-0 sudo[251239]: pam_unix(sudo:session): session closed for user root
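The pair of ansible.posix.mount tasks (pagesize=1G at 01:27:00, pagesize=2M here) is equivalent to mounting hugetlbfs with an explicit page size and, because state=mounted, persisting the entries in fstab. A manual sketch of the same result:

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # fstab lines written by state=mounted:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0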
Dec 03 01:27:01 compute-0 ceph-mon[192821]: pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:02 compute-0 sshd-session[242941]: Connection closed by 192.168.122.30 port 34446
Dec 03 01:27:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:02 compute-0 sshd-session[242938]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:27:02 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 03 01:27:02 compute-0 systemd[1]: session-46.scope: Consumed 52.864s CPU time.
Dec 03 01:27:02 compute-0 systemd-logind[800]: Session 46 logged out. Waiting for processes to exit.
Dec 03 01:27:02 compute-0 systemd-logind[800]: Removed session 46.
Dec 03 01:27:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:03 compute-0 sshd-session[250024]: Connection closed by authenticating user root 193.32.162.157 port 53804 [preauth]
Dec 03 01:27:03 compute-0 ceph-mon[192821]: pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:05 compute-0 ceph-mon[192821]: pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:06 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 03 01:27:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:07 compute-0 sshd-session[251267]: Invalid user guest from 103.146.202.174 port 43444
Dec 03 01:27:07 compute-0 sshd-session[251267]: Received disconnect from 103.146.202.174 port 43444:11: Bye Bye [preauth]
Dec 03 01:27:07 compute-0 sshd-session[251267]: Disconnected from invalid user guest 103.146.202.174 port 43444 [preauth]
Dec 03 01:27:07 compute-0 ceph-mon[192821]: pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:08 compute-0 sshd-session[251272]: Accepted publickey for zuul from 192.168.122.30 port 58130 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:27:08 compute-0 systemd-logind[800]: New session 47 of user zuul.
Dec 03 01:27:08 compute-0 systemd[1]: Started Session 47 of User zuul.
Dec 03 01:27:08 compute-0 sshd-session[251272]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:27:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:09 compute-0 sudo[251425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzmbqxjeywdewksrnofsmdxqqtjdrzed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725228.2863147-16-154759890883347/AnsiballZ_tempfile.py'
Dec 03 01:27:09 compute-0 sudo[251425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:09 compute-0 python3.9[251427]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 03 01:27:09 compute-0 sudo[251425]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:09 compute-0 ceph-mon[192821]: pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:10 compute-0 sudo[251626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxblkcufywawejfyazsqxfpakqtoojfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725229.6878092-28-76118511639114/AnsiballZ_stat.py'
Dec 03 01:27:10 compute-0 sudo[251626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:10 compute-0 podman[251551]: 2025-12-03 01:27:10.450246176 +0000 UTC m=+0.122616903 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:27:10 compute-0 podman[251552]: 2025-12-03 01:27:10.450169713 +0000 UTC m=+0.121676084 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:27:10 compute-0 podman[251553]: 2025-12-03 01:27:10.467217535 +0000 UTC m=+0.136749725 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 03 01:27:10 compute-0 podman[251554]: 2025-12-03 01:27:10.482276656 +0000 UTC m=+0.143400579 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:27:10 compute-0 python3.9[251652]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:27:10 compute-0 sudo[251626]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:11 compute-0 sshd-session[251675]: Received disconnect from 34.66.72.251 port 57916:11: Bye Bye [preauth]
Dec 03 01:27:11 compute-0 sshd-session[251675]: Disconnected from authenticating user root 34.66.72.251 port 57916 [preauth]
Dec 03 01:27:11 compute-0 ceph-mon[192821]: pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:11 compute-0 sudo[251813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojsrfucfrvdqkvlmopnjlyadoekzeqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725231.0465477-36-125718981545544/AnsiballZ_slurp.py'
Dec 03 01:27:11 compute-0 sudo[251813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:12 compute-0 python3.9[251815]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 03 01:27:12 compute-0 sudo[251813]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:12 compute-0 podman[251935]: 2025-12-03 01:27:12.892164165 +0000 UTC m=+0.141698647 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 03 01:27:12 compute-0 sudo[251985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lerphwgsnlhskjzyxbkzjeebaqsoarmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725232.3443868-44-30616758922603/AnsiballZ_stat.py'
Dec 03 01:27:12 compute-0 sudo[251985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:13 compute-0 python3.9[251987]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.2pvr_1rq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:27:13 compute-0 sudo[251985]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:13 compute-0 ceph-mon[192821]: pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:14 compute-0 sudo[252110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkbnzkiptszqfzdvuovnyokpmocnhnku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725232.3443868-44-30616758922603/AnsiballZ_copy.py'
Dec 03 01:27:14 compute-0 sudo[252110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:14 compute-0 python3.9[252112]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.2pvr_1rq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725232.3443868-44-30616758922603/.source.2pvr_1rq _original_basename=._0mku4us follow=False checksum=9a092da6e6f6a5987ec5f2d86818ad0135a14436 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:27:14 compute-0 sudo[252110]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:15 compute-0 sshd-session[251266]: Connection closed by authenticating user root 193.32.162.157 port 60796 [preauth]
Dec 03 01:27:15 compute-0 sudo[252262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijhggtowsqeuhaodunfjypagowdmqqao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725234.603581-59-54575932967692/AnsiballZ_setup.py'
Dec 03 01:27:15 compute-0 sudo[252262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:15 compute-0 ceph-mon[192821]: pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:15 compute-0 python3.9[252264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:27:15 compute-0 sudo[252262]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.111674) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237111711, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1658, "num_deletes": 250, "total_data_size": 2387445, "memory_usage": 2422744, "flush_reason": "Manual Compaction"}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237127995, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1392420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7321, "largest_seqno": 8978, "table_properties": {"data_size": 1387020, "index_size": 2412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15780, "raw_average_key_size": 20, "raw_value_size": 1374191, "raw_average_value_size": 1803, "num_data_blocks": 114, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725080, "oldest_key_time": 1764725080, "file_creation_time": 1764725237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16445 microseconds, and 8896 cpu microseconds.
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.128113) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1392420 bytes OK
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.128143) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.131429) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.131475) EVENT_LOG_v1 {"time_micros": 1764725237131463, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.131504) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2380077, prev total WAL file size 2380077, number of live WAL files 2.
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.133075) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1359KB)], [20(6873KB)]
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237133201, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8431192, "oldest_snapshot_seqno": -1}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3370 keys, 6759710 bytes, temperature: kUnknown
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237213412, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6759710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6734056, "index_size": 16137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80646, "raw_average_key_size": 23, "raw_value_size": 6669968, "raw_average_value_size": 1979, "num_data_blocks": 716, "num_entries": 3370, "num_filter_entries": 3370, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.214149) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6759710 bytes
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.216365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.0 rd, 84.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.7 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(10.9) write-amplify(4.9) OK, records in: 3809, records dropped: 439 output_compression: NoCompression
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.216387) EVENT_LOG_v1 {"time_micros": 1764725237216377, "job": 6, "event": "compaction_finished", "compaction_time_micros": 80280, "compaction_time_cpu_micros": 39290, "output_level": 6, "num_output_files": 1, "total_output_size": 6759710, "num_input_records": 3809, "num_output_records": 3370, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237217333, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237219458, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.132900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
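This rocksdb burst is the monitor compacting its own store: the memtable flush carries flush_reason "Manual Compaction", the resulting L0 table is merged into L6, and the superseded WAL and SST files are deleted. ceph-mon schedules this itself; the same compaction can also be requested by hand:

    ceph tell mon.compute-0 compact
    du -sh /var/lib/ceph/mon/ceph-compute-0/store.db   # store size before/after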
Dec 03 01:27:17 compute-0 sudo[252416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgemmbdqbcnffufnfggxebbirsysiyut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725236.314473-68-24562765724957/AnsiballZ_blockinfile.py'
Dec 03 01:27:17 compute-0 sudo[252416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:17 compute-0 ceph-mon[192821]: pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:18 compute-0 python3.9[252418]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUXzfc0dZJxCJJ4PEHADvL0LyRTIDw765KVVRPjKe66bZHCDrMnH3lZh13FtxojtEeAMtDjWC+H3ZGbvKAjyg6wN6ZmxRsL7o57jFWbBEQCHr3VQojAmFhu1UrX7NiAqOVCHai4lYrpddO28T1lK3oP3KKbw3gMA9o0GCA5TlMf5uAu10Zmp6u/NuST5GBQqc8D2ID2cZ5OL+IJ5OedhsuV0SutU2S7A/ua95d57ddgc8ltJh/JzrnYCjHsD4NNKpp1HDuLXzKlMVFpbxi5ihzlepdP4BMWtBqKzvoCCD+KxwXBNVjKLo57B/h+kfTNX/PI8IkDAGLOxYZyPozHtsLiKtTLao7Q1nU67ZcSZbDPBluTaBcUuiS12fEsU2SjMVNRPDFBKj8pn5cXmIZJaLccIvvWYr4u9xIEA1aX0IjZS9FEHD+eVLVe3HkQ+rFJ2WgMARupAMDmyso43Cje+xIL0vZYayq3PyCWhVln1wW80k/cY/5JCqhzF2lelqLBlU=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICuIgcpw897dA3mGBxBK8DwsvfOOhRnRBasT73h7OlLn
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITA4C6TXl/AXsVGH1teKmoFi3piNxhosC0B5paSBiifwK5pyHq3w8pYOtVe+KhAjGKZJREVbl0k3rnMeNo31ps=
                                              create=True mode=0644 path=/tmp/ansible.2pvr_1rq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:27:18 compute-0 sudo[252416]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:19 compute-0 ceph-mon[192821]: pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:19 compute-0 sudo[252569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubewobalyjjgdbmantzxrtfsahtlxzrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725238.287817-76-70010511538391/AnsiballZ_command.py'
Dec 03 01:27:19 compute-0 sudo[252569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:19 compute-0 podman[252571]: 2025-12-03 01:27:19.981473 +0000 UTC m=+0.088812989 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, release-0.7.12=, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Dec 03 01:27:20 compute-0 python3.9[252572]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2pvr_1rq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:27:20 compute-0 sudo[252569]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:21 compute-0 sudo[252741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcmqocopporikiswyeaamkitdftisgcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725240.4331777-84-48197787710031/AnsiballZ_file.py'
Dec 03 01:27:21 compute-0 sudo[252741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:21 compute-0 python3.9[252743]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2pvr_1rq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:27:21 compute-0 sudo[252741]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:21 compute-0 sshd-session[251275]: Connection closed by 192.168.122.30 port 58130
Dec 03 01:27:21 compute-0 sshd-session[251272]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:27:21 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec 03 01:27:21 compute-0 systemd[1]: session-47.scope: Consumed 9.232s CPU time.
Dec 03 01:27:21 compute-0 systemd-logind[800]: Session 47 logged out. Waiting for processes to exit.
Dec 03 01:27:21 compute-0 systemd-logind[800]: Removed session 47.
Dec 03 01:27:21 compute-0 ceph-mon[192821]: pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:23 compute-0 ceph-mon[192821]: pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:23 compute-0 podman[252768]: 2025-12-03 01:27:23.887154059 +0000 UTC m=+0.137672253 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:27:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:25 compute-0 sudo[252790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:25 compute-0 sudo[252790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:25 compute-0 sudo[252790]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:25 compute-0 sudo[252815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:27:25 compute-0 sudo[252815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:25 compute-0 sudo[252815]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:25 compute-0 sudo[252840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:25 compute-0 sudo[252840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:25 compute-0 sudo[252840]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:25 compute-0 sudo[252865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:27:25 compute-0 sudo[252865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:25 compute-0 ceph-mon[192821]: pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:27 compute-0 sshd-session[252906]: Accepted publickey for zuul from 192.168.122.30 port 38684 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:27:27 compute-0 systemd-logind[800]: New session 48 of user zuul.
Dec 03 01:27:27 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 03 01:27:27 compute-0 sshd-session[252906]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:27:27 compute-0 sudo[252865]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:27:27 compute-0 sshd-session[252265]: Connection closed by authenticating user root 193.32.162.157 port 36102 [preauth]
Dec 03 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:27:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0f6c8f22-165d-4480-884c-5eb5c81c100a does not exist
Dec 03 01:27:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 613899ee-81b0-4d0a-be50-ff31874b8fac does not exist
Dec 03 01:27:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 568ea4b2-e755-43db-b1e6-b664ef9ba457 does not exist
Dec 03 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:27:27 compute-0 sudo[252928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:27 compute-0 sudo[252928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:27 compute-0 sudo[252928]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:27 compute-0 sudo[252992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:27:27 compute-0 sudo[252992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:27 compute-0 sudo[252992]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:27 compute-0 sudo[253027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:27 compute-0 sudo[253027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:27 compute-0 sudo[253027]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:27 compute-0 sudo[253052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:27:27 compute-0 sudo[253052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:27 compute-0 ceph-mon[192821]: pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:27:28
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.492471117 +0000 UTC m=+0.087403406 container create 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.453474744 +0000 UTC m=+0.048407073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:27:28 compute-0 systemd[1]: Started libpod-conmon-3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f.scope.
Dec 03 01:27:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.637866566 +0000 UTC m=+0.232798895 container init 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.655125964 +0000 UTC m=+0.250058243 container start 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.661630393 +0000 UTC m=+0.256562662 container attach 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 01:27:28 compute-0 funny_goldberg[253227]: 167 167
Dec 03 01:27:28 compute-0 systemd[1]: libpod-3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f.scope: Deactivated successfully.
Dec 03 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.67003614 +0000 UTC m=+0.264968469 container died 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:27:28 compute-0 python3.9[253211]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e10405bbab55ad30d09317b6afc7343e2f1996a53361bfa49c2bd088ee5e3e1f-merged.mount: Deactivated successfully.
Dec 03 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.767230414 +0000 UTC m=+0.362162673 container remove 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:27:28 compute-0 systemd[1]: libpod-conmon-3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f.scope: Deactivated successfully.
Dec 03 01:27:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.058959401 +0000 UTC m=+0.097210626 container create 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.022836586 +0000 UTC m=+0.061087861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:27:29 compute-0 systemd[1]: Started libpod-conmon-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope.
Dec 03 01:27:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.233747399 +0000 UTC m=+0.271998644 container init 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.258768665 +0000 UTC m=+0.297019870 container start 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.265929574 +0000 UTC m=+0.304180779 container attach 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:27:29 compute-0 podman[158098]: time="2025-12-03T01:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34530 "" "Go-http-client/1.1"
Dec 03 01:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7252 "" "Go-http-client/1.1"
Dec 03 01:27:29 compute-0 ceph-mon[192821]: pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:30 compute-0 sudo[253435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yshouioitistaybtkypuidrpcyiyfkok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725249.3452504-32-162251393895758/AnsiballZ_systemd.py'
Dec 03 01:27:30 compute-0 sudo[253435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:30 compute-0 python3.9[253439]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 03 01:27:30 compute-0 suspicious_feynman[253295]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:27:30 compute-0 suspicious_feynman[253295]: --> relative data size: 1.0
Dec 03 01:27:30 compute-0 suspicious_feynman[253295]: --> All data devices are unavailable
Dec 03 01:27:30 compute-0 podman[253255]: 2025-12-03 01:27:30.657976209 +0000 UTC m=+1.696227424 container died 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:27:30 compute-0 systemd[1]: libpod-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope: Deactivated successfully.
Dec 03 01:27:30 compute-0 systemd[1]: libpod-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope: Consumed 1.332s CPU time.
Dec 03 01:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c-merged.mount: Deactivated successfully.
Dec 03 01:27:30 compute-0 sudo[253435]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:30 compute-0 podman[253255]: 2025-12-03 01:27:30.738491482 +0000 UTC m=+1.776742667 container remove 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:27:30 compute-0 systemd[1]: libpod-conmon-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope: Deactivated successfully.
Dec 03 01:27:30 compute-0 sudo[253052]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:30 compute-0 sudo[253468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:30 compute-0 sudo[253468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:30 compute-0 sudo[253468]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:30 compute-0 sudo[253516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:27:30 compute-0 sudo[253516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:30 compute-0 sudo[253516]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:31 compute-0 sudo[253560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:31 compute-0 sudo[253560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:31 compute-0 sudo[253560]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:31 compute-0 sudo[253602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:27:31 compute-0 sudo[253602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:27:31 compute-0 sudo[253747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crfmqqwhwugoqnjmwholwgpalruasnda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725250.9915874-40-156332952147842/AnsiballZ_systemd.py'
Dec 03 01:27:31 compute-0 sudo[253747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.672688168 +0000 UTC m=+0.087022544 container create 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.636869972 +0000 UTC m=+0.051204408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:27:31 compute-0 systemd[1]: Started libpod-conmon-51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a.scope.
Dec 03 01:27:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.815484918 +0000 UTC m=+0.229819354 container init 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.827128454 +0000 UTC m=+0.241462830 container start 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:27:31 compute-0 python3.9[253753]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.834170109 +0000 UTC m=+0.248504545 container attach 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:27:31 compute-0 angry_ritchie[253772]: 167 167
Dec 03 01:27:31 compute-0 systemd[1]: libpod-51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a.scope: Deactivated successfully.
Dec 03 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.841261526 +0000 UTC m=+0.255595872 container died 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec 03 01:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c276cbda9a423c440865bd8c892181b5ca6069dbabb660a32c322696fc10b255-merged.mount: Deactivated successfully.
Dec 03 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.895857987 +0000 UTC m=+0.310192333 container remove 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:27:31 compute-0 sudo[253747]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:31 compute-0 systemd[1]: libpod-conmon-51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a.scope: Deactivated successfully.
Dec 03 01:27:31 compute-0 ceph-mon[192821]: pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.169401477 +0000 UTC m=+0.098696641 container create 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.127509475 +0000 UTC m=+0.056804669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:27:32 compute-0 systemd[1]: Started libpod-conmon-86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd.scope.
Dec 03 01:27:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.349365134 +0000 UTC m=+0.278660338 container init 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.37931411 +0000 UTC m=+0.308609254 container start 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.38584276 +0000 UTC m=+0.315137904 container attach 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:27:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:33 compute-0 sudo[253969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqbkrngoenwuotqmtzkgcniokxgpfbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725252.2742617-49-256379199589357/AnsiballZ_command.py'
Dec 03 01:27:33 compute-0 sudo[253969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:33 compute-0 jolly_germain[253859]: {
Dec 03 01:27:33 compute-0 jolly_germain[253859]:     "0": [
Dec 03 01:27:33 compute-0 jolly_germain[253859]:         {
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "devices": [
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "/dev/loop3"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             ],
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_name": "ceph_lv0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_size": "21470642176",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "name": "ceph_lv0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "tags": {
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cluster_name": "ceph",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.crush_device_class": "",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.encrypted": "0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osd_id": "0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.type": "block",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.vdo": "0"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             },
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "type": "block",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "vg_name": "ceph_vg0"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:         }
Dec 03 01:27:33 compute-0 jolly_germain[253859]:     ],
Dec 03 01:27:33 compute-0 jolly_germain[253859]:     "1": [
Dec 03 01:27:33 compute-0 jolly_germain[253859]:         {
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "devices": [
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "/dev/loop4"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             ],
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_name": "ceph_lv1",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_size": "21470642176",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "name": "ceph_lv1",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "tags": {
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cluster_name": "ceph",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.crush_device_class": "",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.encrypted": "0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osd_id": "1",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.type": "block",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.vdo": "0"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             },
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "type": "block",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "vg_name": "ceph_vg1"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:         }
Dec 03 01:27:33 compute-0 jolly_germain[253859]:     ],
Dec 03 01:27:33 compute-0 jolly_germain[253859]:     "2": [
Dec 03 01:27:33 compute-0 jolly_germain[253859]:         {
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "devices": [
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "/dev/loop5"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             ],
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_name": "ceph_lv2",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_size": "21470642176",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "name": "ceph_lv2",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "tags": {
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.cluster_name": "ceph",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.crush_device_class": "",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.encrypted": "0",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osd_id": "2",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.type": "block",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:                 "ceph.vdo": "0"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             },
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "type": "block",
Dec 03 01:27:33 compute-0 jolly_germain[253859]:             "vg_name": "ceph_vg2"
Dec 03 01:27:33 compute-0 jolly_germain[253859]:         }
Dec 03 01:27:33 compute-0 jolly_germain[253859]:     ]
Dec 03 01:27:33 compute-0 jolly_germain[253859]: }
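[editor's note] The JSON above is the output of `ceph-volume lvm list --format json`, run by cephadm inside the short-lived jolly_germain container: a map from OSD id ("0", "1", "2") to the logical volumes backing each OSD. A minimal sketch of reducing that blob to an osd_id-to-device map follows; the function name and the idea that the blob has already been captured into a string are assumptions, the key names are taken verbatim from the log.

```python
import json

def osd_device_map(lvm_list_json: str) -> dict:
    # `lvm_list_json` is assumed to hold the JSON printed above by
    # `ceph-volume lvm list --format json` (keys "0", "1", "2" per OSD id).
    data = json.loads(lvm_list_json)
    mapping = {}
    for osd_id, lvs in data.items():
        for lv in lvs:
            mapping[int(osd_id)] = {
                "devices": lv["devices"],            # e.g. ["/dev/loop4"]
                "lv_path": lv["lv_path"],            # e.g. "/dev/ceph_vg1/ceph_lv1"
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            }
    return mapping
```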
Dec 03 01:27:33 compute-0 systemd[1]: libpod-86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd.scope: Deactivated successfully.
Dec 03 01:27:33 compute-0 podman[253820]: 2025-12-03 01:27:33.165956411 +0000 UTC m=+1.095251515 container died 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094-merged.mount: Deactivated successfully.
Dec 03 01:27:33 compute-0 podman[253820]: 2025-12-03 01:27:33.233385104 +0000 UTC m=+1.162680198 container remove 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:27:33 compute-0 systemd[1]: libpod-conmon-86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd.scope: Deactivated successfully.
Dec 03 01:27:33 compute-0 python3.9[253972]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:27:33 compute-0 sudo[253602]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:33 compute-0 sudo[253969]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:33 compute-0 sudo[253987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:33 compute-0 sudo[253987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:33 compute-0 sudo[253987]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:33 compute-0 sshd-session[253924]: Invalid user myuser from 80.253.31.232 port 36544
Dec 03 01:27:33 compute-0 sudo[254032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:27:33 compute-0 sudo[254032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:33 compute-0 sudo[254032]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:33 compute-0 sudo[254074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:33 compute-0 sshd-session[253924]: Received disconnect from 80.253.31.232 port 36544:11: Bye Bye [preauth]
Dec 03 01:27:33 compute-0 sudo[254074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:33 compute-0 sshd-session[253924]: Disconnected from invalid user myuser 80.253.31.232 port 36544 [preauth]
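[editor's note] Interleaved with the deployment traffic, sshd records a failed probe: an invalid user "myuser" from 80.253.31.232 that disconnects preauth. A hedged sketch of tallying such probes from a journal dump; the file name and the regex tuned to the lines above are assumptions.

```python
import re
from collections import Counter

# Matches sshd lines like "Invalid user myuser from 80.253.31.232 port 36544".
PROBE = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")

def count_ssh_probes(path: str = "journal.txt") -> Counter:
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PROBE.search(line)
            if m:
                user, addr, _port = m.groups()
                hits[(addr, user)] += 1
    return hits
```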
Dec 03 01:27:33 compute-0 sudo[254074]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:33 compute-0 sudo[254109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:27:33 compute-0 sudo[254109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
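[editor's note] The sudo COMMAND line above shows exactly how cephadm shims ceph-volume: a copied cephadm binary under /var/lib/ceph/<fsid>/, a pinned --image digest, a --timeout, and the ceph-volume arguments after the `--` separator, all executed in a one-shot podman container. A sketch reconstructing that invocation; the fsid, digest, and flags are copied from the log, while running it from Python and parsing the wrapper's stdout directly as JSON are assumptions.

```python
import json
import subprocess

FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"           # from the log above
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def raw_list(cephadm_path: str) -> dict:
    # Mirrors the sudo command in the log: everything after `--` is passed
    # through to ceph-volume inside the container.
    cmd = ["sudo", "/bin/python3", cephadm_path,
           "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--",
           "raw", "list", "--format", "json"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return json.loads(out.stdout)   # assumes the wrapper emits clean JSON
```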
Dec 03 01:27:34 compute-0 ceph-mon[192821]: pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.289170209 +0000 UTC m=+0.078891665 container create 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.249427713 +0000 UTC m=+0.039149229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:27:34 compute-0 systemd[1]: Started libpod-conmon-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope.
Dec 03 01:27:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:27:34 compute-0 sudo[254290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emeyfhwjaqfddjfjhaccwhrivfkwvfzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725253.6059287-57-15422407748201/AnsiballZ_stat.py'
Dec 03 01:27:34 compute-0 sudo[254290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.440573942 +0000 UTC m=+0.230295398 container init 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.459271134 +0000 UTC m=+0.248992580 container start 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.465955979 +0000 UTC m=+0.255677505 container attach 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:27:34 compute-0 vibrant_lichterman[254291]: 167 167
Dec 03 01:27:34 compute-0 systemd[1]: libpod-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope: Deactivated successfully.
Dec 03 01:27:34 compute-0 conmon[254291]: conmon 8726b693011e25fe5245 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope/container/memory.events
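[editor's note] The conmon warning above says it could not open memory.events for the container's cgroup; plausibly the scope had already been torn down by the time conmon looked, since this container lived well under a second. For a live container, that file is a flat "key value" listing under cgroup v2 and is the place to watch for OOM kills. A sketch of reading it; the path layout is taken from the warning itself and the helper name is an assumption.

```python
from pathlib import Path

def memory_events(scope: str) -> dict:
    # e.g. scope = "libpod-8726b69...scope"; returns {} if the cgroup is
    # already gone, which is what the conmon warning above ran into.
    path = Path("/sys/fs/cgroup/machine.slice") / scope / "container" / "memory.events"
    if not path.exists():
        return {}
    return {k: int(v) for k, v in
            (line.split() for line in path.read_text().splitlines())}
```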
Dec 03 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.474093088 +0000 UTC m=+0.263814564 container died 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-928d0652422507db5ae2751f53c25f366ebaf3a1a0fdce15a205530f2cb1c568-merged.mount: Deactivated successfully.
Dec 03 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.543919045 +0000 UTC m=+0.333640471 container remove 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:27:34 compute-0 systemd[1]: libpod-conmon-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope: Deactivated successfully.
Dec 03 01:27:34 compute-0 python3.9[254295]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:27:34 compute-0 sudo[254290]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:34 compute-0 podman[254321]: 2025-12-03 01:27:34.812451331 +0000 UTC m=+0.076040667 container create acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:27:34 compute-0 podman[254321]: 2025-12-03 01:27:34.780865175 +0000 UTC m=+0.044454561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:27:34 compute-0 systemd[1]: Started libpod-conmon-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope.
Dec 03 01:27:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
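[editor's note] The kernel notes on each XFS bind-remount that timestamps are supported only until 2038 (0x7fffffff), i.e. the filesystem was created without the bigtime feature. The hex cutoff it quotes is just the signed 32-bit epoch limit, as the stdlib arithmetic below confirms.

```python
from datetime import datetime, timezone

# 0x7fffffff seconds after the epoch is the classic y2038 boundary the
# xfs remount messages above are quoting.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```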
Dec 03 01:27:34 compute-0 podman[254321]: 2025-12-03 01:27:34.998881396 +0000 UTC m=+0.262470772 container init acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:27:35 compute-0 podman[254321]: 2025-12-03 01:27:35.018872878 +0000 UTC m=+0.282462224 container start acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:27:35 compute-0 podman[254321]: 2025-12-03 01:27:35.025832381 +0000 UTC m=+0.289421767 container attach acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:27:35 compute-0 sudo[254491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ospjxtxuajveyldhvltvlxmjxuywmqgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725255.0214875-66-113382150839243/AnsiballZ_file.py'
Dec 03 01:27:35 compute-0 sudo[254491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:36 compute-0 python3.9[254493]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:27:36 compute-0 ceph-mon[192821]: pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:36 compute-0 sudo[254491]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]: {
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "osd_id": 2,
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "type": "bluestore"
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:     },
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "osd_id": 1,
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "type": "bluestore"
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:     },
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "osd_id": 0,
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:         "type": "bluestore"
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]:     }
Dec 03 01:27:36 compute-0 goofy_jepsen[254360]: }
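[editor's note] goofy_jepsen prints the companion view, `ceph-volume raw list --format json`: the same three OSDs, but keyed by osd_fsid and reporting the device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN) rather than the VG/LV path. A sketch cross-checking it against the lvm listing earlier; the key names come from the log, the function name is an assumption.

```python
import json

CLUSTER_FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"  # from the log

def check_raw_list(raw_json: str) -> dict:
    # raw list keys on osd_fsid; every entry should belong to this cluster
    # and map osd_id -> /dev/mapper/<vg>-<lv> as printed above.
    by_id = {}
    for osd_fsid, entry in json.loads(raw_json).items():
        assert entry["ceph_fsid"] == CLUSTER_FSID
        assert entry["osd_uuid"] == osd_fsid
        by_id[entry["osd_id"]] = entry["device"]
    return by_id   # e.g. {0: "/dev/mapper/ceph_vg0-ceph_lv0", ...}
```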
Dec 03 01:27:36 compute-0 systemd[1]: libpod-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope: Deactivated successfully.
Dec 03 01:27:36 compute-0 podman[254321]: 2025-12-03 01:27:36.223267351 +0000 UTC m=+1.486856697 container died acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:27:36 compute-0 systemd[1]: libpod-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope: Consumed 1.204s CPU time.
Dec 03 01:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d-merged.mount: Deactivated successfully.
Dec 03 01:27:36 compute-0 podman[254321]: 2025-12-03 01:27:36.316822044 +0000 UTC m=+1.580411390 container remove acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:27:36 compute-0 systemd[1]: libpod-conmon-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope: Deactivated successfully.
Dec 03 01:27:36 compute-0 sudo[254109]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:27:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:27:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:27:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:27:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 90e13e77-fb15-4bc4-8bd6-20443c6ae463 does not exist
Dec 03 01:27:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c51daf6a-6c43-4906-a6d5-70e927b50160 does not exist
Dec 03 01:27:36 compute-0 sshd-session[252923]: Connection closed by 192.168.122.30 port 38684
Dec 03 01:27:36 compute-0 sshd-session[252906]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:27:36 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 03 01:27:36 compute-0 systemd[1]: session-48.scope: Consumed 6.806s CPU time.
Dec 03 01:27:36 compute-0 systemd-logind[800]: Session 48 logged out. Waiting for processes to exit.
Dec 03 01:27:36 compute-0 systemd-logind[800]: Removed session 48.
Dec 03 01:27:36 compute-0 sudo[254555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:27:36 compute-0 sudo[254555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:36 compute-0 sudo[254555]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:36 compute-0 sudo[254580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:27:36 compute-0 sudo[254580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:27:36 compute-0 sudo[254580]: pam_unix(sudo:session): session closed for user root
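[editor's note] The ceph-admin sudo entries above are the orchestrator validating the host in tiny steps: /bin/true to confirm passwordless escalation, `which python3` to locate an interpreter, `ls /etc/sysctl.d` to inspect kernel tunables. A sketch running the same probes locally; treating them as a single preflight gate, and the function name, are assumptions.

```python
import subprocess

PROBES = [
    ["sudo", "/bin/true"],                 # can we escalate at all?
    ["sudo", "/bin/which", "python3"],     # is an interpreter available?
    ["sudo", "/bin/ls", "/etc/sysctl.d"],  # can we read kernel tunables?
]

def preflight() -> bool:
    return all(subprocess.run(p, capture_output=True).returncode == 0
               for p in PROBES)
```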
Dec 03 01:27:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:27:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:27:37 compute-0 ceph-mon[192821]: pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
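[editor's note] Each pg_autoscaler line computes pool_target = usage_ratio x bias x (target PGs per OSD x OSD count). Assuming mon_target_pg_per_osd is at its default of 100 and the three OSDs created above (ids 0-2) make up the whole root, the logged figures reproduce exactly, as sketched below. The "quantized to" value then rounds toward a power of two and is rate-limited against the current pg_num, which is plausibly why most pools sit at 32 while cephfs.cephfs.meta is stepping down to 16.

```python
# Reproducing the pg_autoscaler arithmetic from the log lines above.
TARGET_PG_PER_OSD = 100   # assumed default of mon_target_pg_per_osd
NUM_OSDS = 3              # OSDs 0-2, listed earlier in this log

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... ('.mgr')
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104... ('cephfs.cephfs.meta')
```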
Dec 03 01:27:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:38 compute-0 sshd-session[252976]: Connection closed by authenticating user root 193.32.162.157 port 56598 [preauth]
Dec 03 01:27:39 compute-0 ceph-mon[192821]: pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.0 total, 600.0 interval
                                            Cumulative writes: 2037 writes, 9033 keys, 2037 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                            Cumulative WAL: 2037 writes, 2037 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2037 writes, 9033 keys, 2037 commit groups, 1.0 writes per commit group, ingest: 10.85 MB, 0.02 MB/s
                                            Interval WAL: 2037 writes, 2037 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.1      0.08              0.04         3    0.027       0      0       0.0       0.0
                                              L6      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     96.0     85.4      0.15              0.08         2    0.077    7146    729       0.0       0.0
                                             Sum      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     62.6     89.8      0.24              0.11         5    0.047    7146    729       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     63.3     90.7      0.23              0.11         4    0.058    7146    729       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     96.0     85.4      0.15              0.08         2    0.077    7146    729       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    100.9      0.08              0.04         2    0.040       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.008, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                            Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 508.19 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(36,421.47 KB,0.133633%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
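[editor's note] The RocksDB "DB Stats" block above is internally consistent: 10.85 MB ingested over the 600-second interval is the 0.02 MB/s it reports, and 2037 WAL writes against 2037 syncs gives the stated 1.00 writes per sync. The quick check below uses only numbers copied from the dump.

```python
# Sanity-checking the RocksDB "DB Stats" interval figures above.
interval_secs = 600.0
ingest_mb = 10.85
writes = syncs = 2037

print(round(ingest_mb / interval_secs, 2))  # 0.02 MB/s, as reported
print(writes / syncs)                       # 1.0 writes per sync
```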
Dec 03 01:27:40 compute-0 podman[254606]: 2025-12-03 01:27:40.883837931 +0000 UTC m=+0.123099598 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:27:40 compute-0 podman[254608]: 2025-12-03 01:27:40.892975131 +0000 UTC m=+0.127796232 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:27:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:40 compute-0 podman[254607]: 2025-12-03 01:27:40.907191436 +0000 UTC m=+0.145543965 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, vcs-type=git)
Dec 03 01:27:40 compute-0 podman[254609]: 2025-12-03 01:27:40.925831706 +0000 UTC m=+0.156215251 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
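[editor's note] Each podman health_status line above embeds the container's full edpm configuration as a config_data field. It is a Python dict literal (single quotes, bare True) rather than JSON, so ast.literal_eval is the natural way to recover it; the sketch assumes the config_data value has already been cut out of the line, and the summary keys it pulls are just examples.

```python
import ast

def parse_config_data(blob: str) -> dict:
    # `blob` is assumed to hold just the config_data value from one
    # health_status line above, e.g. "{'image': 'quay.io/...', ...}".
    cfg = ast.literal_eval(blob)   # Python dict literal, not JSON
    return {
        "image": cfg.get("image"),
        "healthcheck": cfg.get("healthcheck", {}).get("test"),
        "volumes": len(cfg.get("volumes", [])),
    }
```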
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.970 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling cycle to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.970 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
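[Editor's note] The dozen "Registering pollster" entries above show ceilometer's polling manager fanning each stevedore Extension out onto one shared ThreadPoolExecutor, passing along the per-cycle cache, pollster-history, and discovery-cache dicts that the log prints. A minimal stdlib-only sketch of that submit pattern follows; the function and pollster names are illustrative stand-ins, not ceilometer's actual signatures:

```python
# Sketch of the register-then-submit pattern the log lines above describe.
# Assumption: a "pollster" is just a callable that receives the shared
# per-cycle cache/history/discovery dicts mentioned in the messages.
import logging
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("polling.manager")

def register_pollster_execution(executor, pollster, cache, history, discovery_cache):
    LOG.debug("Registering pollster [%s] to be executed via executor [%s]",
              pollster, executor)
    # The real manager queues the pollster for this polling cycle; here we
    # simply submit it onto the shared thread pool.
    return executor.submit(pollster, cache, history, discovery_cache)

def sample_pollster(cache, history, discovery_cache):
    # Mirrors the state growth visible above: each meter leaves an entry
    # in the history dict, and discovery results are cached per method.
    history.setdefault("disk.device.capacity", [])
    discovery_cache.setdefault("local_instances", [])

with ThreadPoolExecutor(max_workers=4) as executor:
    cache, history, discovery_cache = {}, {}, {}
    futures = [register_pollster_execution(executor, sample_pollster,
                                           cache, history, discovery_cache)
               for _ in range(3)]
    for f in futures:
        f.result()
```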
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
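[Editor's note] Every pollster pair above follows the same gate: run the local_instances discovery, and if it returns nothing, emit the "Skip pollster …" line instead of collecting samples. Once each gate has run, the manager logs one "Finished processing pollster [...]" line per meter, as the block below shows. A sketch of that flow, with discover() and poll() as stand-ins for AgentManager.discover and a pollster's sample collection (neither body is shown in this log):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("polling.manager")

def discover(method):
    # Stand-in for AgentManager.discover; on a host with no running VMs
    # the 'local_instances' discovery comes back empty, as it does above.
    return []

def poll(resource):
    # Stand-in for collecting one sample from one discovered resource.
    return {"resource": resource}

def run_pollster(name, discovery_method="local_instances"):
    LOG.debug("Executing discovery for pollster [%s] via method [%s]",
              name, discovery_method)
    resources = discover(discovery_method)
    if not resources:
        LOG.debug("Skip pollster %s, no resources found this cycle", name)
        return []
    return [poll(r) for r in resources]

for meter in ("disk.device.read.bytes", "network.incoming.bytes.rate", "cpu"):
    run_pollster(meter)
```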
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:27:41 compute-0 ceph-mon[192821]: pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
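[Editor's note] From here on, ceph-mon and ceph-mgr repeat the same pgmap summary every couple of seconds (all 321 PGs active+clean throughout), and the mon periodically re-derives its cache allocation targets via _set_new_cache_sizes. When skimming a long capture it can help to reduce the pgmap lines to numbers; a small parser, with the regex written against exactly this line shape rather than any general ceph grammar:

```python
import re

# Matches lines like:
#   pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used,
#   60 GiB / 60 GiB avail
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

line = ("Dec 03 01:27:41 compute-0 ceph-mon[192821]: pgmap v387: 321 pgs: "
        "321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")
m = PGMAP.search(line)
if m:
    print(m.group("ver"), m.group("pgs"), m.group("used"), "/", m.group("total"))
```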
Dec 03 01:27:42 compute-0 sshd-session[254690]: Accepted publickey for zuul from 192.168.122.30 port 55374 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:27:42 compute-0 systemd-logind[800]: New session 49 of user zuul.
Dec 03 01:27:42 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 03 01:27:42 compute-0 sshd-session[254690]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:27:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:43 compute-0 podman[254817]: 2025-12-03 01:27:43.734010117 +0000 UTC m=+0.105137740 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
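[Editor's note] These podman health_status events embed the container's full edpm config_data as a Python-literal dict inside the label list, so a naive split on commas cannot recover it (environment, healthcheck, and volumes nest further braces and brackets). Brace matching plus ast.literal_eval does; a sketch, assuming event lines shaped like the one above:

```python
import ast

def extract_config_data(event_line):
    # Locate the config_data={...} blob, then match braces by depth so
    # nested dicts/lists do not truncate the slice. Raises ValueError if
    # the line carries no config_data at all.
    start = event_line.index("config_data=") + len("config_data=")
    depth, i = 0, start
    while i < len(event_line):
        if event_line[i] == "{":
            depth += 1
        elif event_line[i] == "}":
            depth -= 1
            if depth == 0:
                break
        i += 1
    return ast.literal_eval(event_line[start:i + 1])

# cfg = extract_config_data(journal_line)
# cfg["healthcheck"]["test"]  ->  '/openstack/healthcheck ipmi'
```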
Dec 03 01:27:44 compute-0 ceph-mon[192821]: pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:44 compute-0 python3.9[254860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:27:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:45 compute-0 sudo[255017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoatdgvzpmokrxrcimcuttyrgjqzgbwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725264.7277873-34-280373750787783/AnsiballZ_setup.py'
Dec 03 01:27:45 compute-0 sudo[255017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:45 compute-0 python3.9[255019]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:27:46 compute-0 ceph-mon[192821]: pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:46 compute-0 sudo[255017]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:46 compute-0 sudo[255101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgenyuyagoeylietpdupclkpiikicfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725264.7277873-34-280373750787783/AnsiballZ_dnf.py'
Dec 03 01:27:46 compute-0 sudo[255101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:27:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:47 compute-0 python3.9[255103]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 03 01:27:48 compute-0 ceph-mon[192821]: pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:48 compute-0 sudo[255101]: pam_unix(sudo:session): session closed for user root
Dec 03 01:27:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:49 compute-0 sshd-session[255105]: Received disconnect from 14.103.201.7 port 60338:11: Bye Bye [preauth]
Dec 03 01:27:49 compute-0 sshd-session[255105]: Disconnected from authenticating user root 14.103.201.7 port 60338 [preauth]
Dec 03 01:27:50 compute-0 ceph-mon[192821]: pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:50 compute-0 podman[255231]: 2025-12-03 01:27:50.17738656 +0000 UTC m=+0.135553671 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9)
Dec 03 01:27:50 compute-0 sshd-session[254605]: Connection closed by authenticating user root 193.32.162.157 port 56358 [preauth]
Dec 03 01:27:50 compute-0 python3.9[255274]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
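[Editor's note] The zuul play installs yum-utils (the dnf task above) precisely so it can run `needs-restarting -r` here: with -r the tool reports through its exit status rather than a process listing. A hedged wrapper, using the exit-code convention documented for dnf-utils (0 = no reboot needed, 1 = reboot needed); the follow-up find on /var/lib/openstack/reboot_required/ below is where such a play would look for reboot markers:

```python
import subprocess

def reboot_required():
    # needs-restarting -r exits 0 when no reboot is needed and 1 when core
    # packages (kernel, glibc, systemd, ...) were updated underneath the
    # running system; treat any other status as an error.
    proc = subprocess.run(["needs-restarting", "-r"],
                          capture_output=True, text=True)
    if proc.returncode == 0:
        return False
    if proc.returncode == 1:
        return True
    raise RuntimeError(f"needs-restarting failed: {proc.stderr.strip()}")
```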
Dec 03 01:27:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:52 compute-0 ceph-mon[192821]: pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:52 compute-0 python3.9[255429]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:27:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:53 compute-0 python3.9[255580]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:27:54 compute-0 ceph-mon[192821]: pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:54 compute-0 podman[255704]: 2025-12-03 01:27:54.857166168 +0000 UTC m=+0.109588994 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:27:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:55 compute-0 python3.9[255754]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:27:55 compute-0 sshd-session[254693]: Connection closed by 192.168.122.30 port 55374
Dec 03 01:27:55 compute-0 sshd-session[254690]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:27:55 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 03 01:27:55 compute-0 systemd[1]: session-49.scope: Consumed 9.405s CPU time.
Dec 03 01:27:55 compute-0 systemd-logind[800]: Session 49 logged out. Waiting for processes to exit.
Dec 03 01:27:55 compute-0 systemd-logind[800]: Removed session 49.
Dec 03 01:27:56 compute-0 ceph-mon[192821]: pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:27:58 compute-0 ceph-mon[192821]: pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:27:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:27:59 compute-0 podman[158098]: time="2025-12-03T01:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6827 "" "Go-http-client/1.1"
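[Editor's note] The two GET lines are podman's REST service answering a libpod API client, evidently the prometheus-podman-exporter, whose config above sets CONTAINER_HOST to unix:///run/podman/podman.sock. Querying that endpoint yourself needs an HTTP client that dials AF_UNIX instead of TCP; a stdlib-only sketch (requires access to the socket, typically root):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to a unix-domain socket."""
    def __init__(self, socket_path):
        super().__init__("localhost")  # host is ignored for AF_UNIX
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(json.loads(resp.read())), "containers")
```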
Dec 03 01:28:00 compute-0 ceph-mon[192821]: pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:01 compute-0 ceph-mon[192821]: pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
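[Editor's note] All four exporter errors reduce to one condition: the collector looks for appctl control sockets and finds none, because this compute node runs neither ovn-northd nor a local ovsdb-server, and no userspace (dpif-netdev) datapath exists for the pmd-perf/pmd-rxq calls. Its container config (health event further below) maps /var/run/openvswitch and /var/lib/openvswitch/ovn from the host; a quick host-side check for the sockets it expects, assuming the usual <daemon>.<pid>.ctl naming ovs-appctl uses:

```python
from pathlib import Path

# Host directories the openstack_network_exporter container mounts as
# /run/openvswitch and /run/ovn; daemons advertise control sockets there.
for rundir in (Path("/var/run/openvswitch"), Path("/var/lib/openvswitch/ovn")):
    ctls = sorted(rundir.glob("*.ctl")) if rundir.is_dir() else []
    print(rundir, "->", [c.name for c in ctls] or "no control sockets")
```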
Dec 03 01:28:01 compute-0 sshd-session[255781]: Accepted publickey for zuul from 192.168.122.30 port 55404 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:28:01 compute-0 systemd-logind[800]: New session 50 of user zuul.
Dec 03 01:28:01 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 03 01:28:01 compute-0 sshd-session[255781]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:28:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:02 compute-0 sshd-session[255279]: Connection closed by authenticating user root 193.32.162.157 port 51830 [preauth]
Dec 03 01:28:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:03 compute-0 python3.9[255937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:28:03 compute-0 sshd-session[255932]: Invalid user gns3 from 173.249.50.59 port 39844
Dec 03 01:28:03 compute-0 sshd-session[255932]: Received disconnect from 173.249.50.59 port 39844:11: Bye Bye [preauth]
Dec 03 01:28:03 compute-0 sshd-session[255932]: Disconnected from invalid user gns3 173.249.50.59 port 39844 [preauth]
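[Editor's note] Interleaved with the CI traffic are routine internet brute-force probes against sshd: root logins from 14.103.201.7 and 193.32.162.157, and the invalid user gns3 from 173.249.50.59, all dropped at preauth. A one-pass tally of such lines by source address, with the regex written against the formats above:

```python
import re
from collections import Counter

# Count sshd preauth log lines per source IPv4 address.
PREAUTH = re.compile(
    r"sshd-session\[\d+\]: .* (\d{1,3}(?:\.\d{1,3}){3}) port \d+.*\[preauth\]")

def tally(lines):
    hits = Counter()
    for line in lines:
        m = PREAUTH.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

sample = [
    "Dec 03 01:27:49 compute-0 sshd-session[255105]: Disconnected from "
    "authenticating user root 14.103.201.7 port 60338 [preauth]",
    "Dec 03 01:28:03 compute-0 sshd-session[255932]: Disconnected from "
    "invalid user gns3 173.249.50.59 port 39844 [preauth]",
]
print(tally(sample))  # Counter({'14.103.201.7': 1, '173.249.50.59': 1})
```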
Dec 03 01:28:03 compute-0 ceph-mon[192821]: pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:06 compute-0 ceph-mon[192821]: pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:06 compute-0 sudo[256092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oasrgxiytkyyfjxyifxcmqpynyvreuxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725284.9243777-50-29385098196096/AnsiballZ_file.py'
Dec 03 01:28:06 compute-0 sudo[256092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:07 compute-0 python3.9[256094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:07 compute-0 sudo[256092]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:07 compute-0 sudo[256244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfowknfwpsqfzfbhnelnoyhmljixkjml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725287.3015542-50-161607711462292/AnsiballZ_file.py'
Dec 03 01:28:07 compute-0 sudo[256244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:08 compute-0 ceph-mon[192821]: pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:08 compute-0 python3.9[256246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:08 compute-0 sudo[256244]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:09 compute-0 sudo[256396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meqacivadyichzjbnvqlhnlajkwsnsxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725288.4235284-65-58041398562571/AnsiballZ_stat.py'
Dec 03 01:28:09 compute-0 sudo[256396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:09 compute-0 python3.9[256398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:09 compute-0 sudo[256396]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:09 compute-0 sudo[256474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brnekxfefjkccxvgvqliptfvbhjjlltr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725288.4235284-65-58041398562571/AnsiballZ_file.py'
Dec 03 01:28:09 compute-0 sudo[256474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:10 compute-0 ceph-mon[192821]: pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:10 compute-0 python3.9[256476]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:10 compute-0 sudo[256474]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:10 compute-0 sudo[256626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcizrbxrbttrgpxmnxivxcpizzeugvnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725290.341877-65-201239756292410/AnsiballZ_stat.py'
Dec 03 01:28:10 compute-0 sudo[256626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:11 compute-0 podman[256629]: 2025-12-03 01:28:11.092001291 +0000 UTC m=+0.124537731 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350)
Dec 03 01:28:11 compute-0 podman[256630]: 2025-12-03 01:28:11.099021281 +0000 UTC m=+0.126906802 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:28:11 compute-0 podman[256628]: 2025-12-03 01:28:11.109952759 +0000 UTC m=+0.147143949 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:28:11 compute-0 podman[256631]: 2025-12-03 01:28:11.127123533 +0000 UTC m=+0.144117728 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec 03 01:28:11 compute-0 python3.9[256636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:11 compute-0 sudo[256626]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:11 compute-0 sudo[256788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vskjzfjvpnrhroirvyhnbhnddtvpxxak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725290.341877-65-201239756292410/AnsiballZ_file.py'
Dec 03 01:28:11 compute-0 sudo[256788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:11 compute-0 python3.9[256790]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:11 compute-0 sudo[256788]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:12 compute-0 ceph-mon[192821]: pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:12 compute-0 sudo[256940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmkmbjydxuagdniujctwuptcwpzpmyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725292.1078556-65-195667381201780/AnsiballZ_stat.py'
Dec 03 01:28:12 compute-0 sudo[256940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:12 compute-0 python3.9[256942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:12 compute-0 sudo[256940]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:13 compute-0 sudo[257018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpyhmstgvqfxombizkuvtyhbiilpgvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725292.1078556-65-195667381201780/AnsiballZ_file.py'
Dec 03 01:28:13 compute-0 sudo[257018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:13 compute-0 python3.9[257020]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:13 compute-0 sudo[257018]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:13 compute-0 sshd-session[255884]: Connection closed by authenticating user root 193.32.162.157 port 33830 [preauth]
Dec 03 01:28:14 compute-0 ceph-mon[192821]: pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:14 compute-0 sshd-session[257046]: Invalid user testuser from 34.66.72.251 port 45998
Dec 03 01:28:14 compute-0 sshd-session[257046]: Received disconnect from 34.66.72.251 port 45998:11: Bye Bye [preauth]
Dec 03 01:28:14 compute-0 sshd-session[257046]: Disconnected from invalid user testuser 34.66.72.251 port 45998 [preauth]
Dec 03 01:28:14 compute-0 podman[257092]: 2025-12-03 01:28:14.365172108 +0000 UTC m=+0.134852240 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:28:14 compute-0 sudo[257193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbazzwgahnkkgxhqyessehhjvitnrew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725294.1286635-100-9148230318877/AnsiballZ_file.py'
Dec 03 01:28:14 compute-0 sudo[257193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:14 compute-0 python3.9[257195]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:14 compute-0 sudo[257193]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:15 compute-0 sudo[257345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhdsmubwbeeeibjjchimydqkqazzhknf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725295.0941882-100-249993042824683/AnsiballZ_file.py'
Dec 03 01:28:15 compute-0 sudo[257345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:15 compute-0 python3.9[257347]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:15 compute-0 sudo[257345]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:16 compute-0 ceph-mon[192821]: pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.090964) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297091078, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 707, "num_deletes": 251, "total_data_size": 899073, "memory_usage": 911896, "flush_reason": "Manual Compaction"}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297102795, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 891241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8979, "largest_seqno": 9685, "table_properties": {"data_size": 887556, "index_size": 1529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7950, "raw_average_key_size": 18, "raw_value_size": 880193, "raw_average_value_size": 2051, "num_data_blocks": 71, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725237, "oldest_key_time": 1764725237, "file_creation_time": 1764725297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11922 microseconds, and 6789 cpu microseconds.
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.102875) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 891241 bytes OK
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.102913) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.105786) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.105807) EVENT_LOG_v1 {"time_micros": 1764725297105801, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.105831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 895409, prev total WAL file size 895409, number of live WAL files 2.
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.107065) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(870KB)], [23(6601KB)]
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297107159, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7650951, "oldest_snapshot_seqno": -1}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3285 keys, 6071306 bytes, temperature: kUnknown
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297159531, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6071306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6047504, "index_size": 14477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79671, "raw_average_key_size": 24, "raw_value_size": 5986185, "raw_average_value_size": 1822, "num_data_blocks": 632, "num_entries": 3285, "num_filter_entries": 3285, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.160033) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6071306 bytes
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.163046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.5 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.4 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(15.4) write-amplify(6.8) OK, records in: 3799, records dropped: 514 output_compression: NoCompression
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.163085) EVENT_LOG_v1 {"time_micros": 1764725297163065, "job": 8, "event": "compaction_finished", "compaction_time_micros": 52574, "compaction_time_cpu_micros": 31878, "output_level": 6, "num_output_files": 1, "total_output_size": 6071306, "num_input_records": 3799, "num_output_records": 3285, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297163777, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297166605, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.106801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:28:17 compute-0 sudo[257498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfjtnzakvesujuoerckabekdkxweodsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725297.0974128-115-161831668493285/AnsiballZ_stat.py'
Dec 03 01:28:17 compute-0 sudo[257498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:17 compute-0 python3.9[257500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:17 compute-0 sudo[257498]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:18 compute-0 ceph-mon[192821]: pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:19 compute-0 sudo[257576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmfgqafzbrhmfksqvrsuleqwnrlqqaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725297.0974128-115-161831668493285/AnsiballZ_file.py'
Dec 03 01:28:19 compute-0 sudo[257576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:19 compute-0 python3.9[257578]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:19 compute-0 sudo[257576]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:20 compute-0 ceph-mon[192821]: pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:20 compute-0 sudo[257729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymakvdemaoznspurvrmcavqgsnkiasck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725299.7002268-115-269802243697577/AnsiballZ_stat.py'
Dec 03 01:28:20 compute-0 sudo[257729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:20 compute-0 podman[257731]: 2025-12-03 01:28:20.44128942 +0000 UTC m=+0.136261972 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0)
Dec 03 01:28:20 compute-0 python3.9[257732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:20 compute-0 sudo[257729]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:21 compute-0 sudo[257825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzpvolmkxefajvvlvunwkcpzbrligbag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725299.7002268-115-269802243697577/AnsiballZ_file.py'
Dec 03 01:28:21 compute-0 sudo[257825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:21 compute-0 python3.9[257827]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:21 compute-0 sudo[257825]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:22 compute-0 sudo[257977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzsoybozjtdjmrtuwzhbzsadqqtpisup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725301.6306772-115-194827040810966/AnsiballZ_stat.py'
Dec 03 01:28:22 compute-0 sudo[257977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:22 compute-0 ceph-mon[192821]: pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:22 compute-0 python3.9[257979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:22 compute-0 sudo[257977]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:22 compute-0 sudo[258055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wziejmnrvyhrpafdnthbfnlprvlcswmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725301.6306772-115-194827040810966/AnsiballZ_file.py'
Dec 03 01:28:22 compute-0 sudo[258055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:22 compute-0 python3.9[258057]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:22 compute-0 sudo[258055]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:23 compute-0 sudo[258207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kufikrawbotqhiolezmybrwcsmoszqev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725303.2512639-150-28883243871440/AnsiballZ_file.py'
Dec 03 01:28:23 compute-0 sudo[258207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:23 compute-0 python3.9[258209]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:23 compute-0 sudo[258207]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:24 compute-0 ceph-mon[192821]: pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:24 compute-0 sudo[258359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrjlffrvdpdrkwbeskialigwvxhkley ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725304.2689593-150-266775147512971/AnsiballZ_file.py'
Dec 03 01:28:24 compute-0 sudo[258359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:25 compute-0 python3.9[258361]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:25 compute-0 sudo[258359]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:25 compute-0 ceph-mon[192821]: pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:25 compute-0 sshd-session[257045]: Connection closed by authenticating user root 193.32.162.157 port 55830 [preauth]
Dec 03 01:28:26 compute-0 sudo[258529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnxkuhjqnmbxkhrfaonxtadhimidrfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725305.34203-165-148324838040167/AnsiballZ_stat.py'
Dec 03 01:28:26 compute-0 sudo[258529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:26 compute-0 podman[258486]: 2025-12-03 01:28:26.562852043 +0000 UTC m=+0.123107088 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:28:26 compute-0 python3.9[258540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:26 compute-0 sudo[258529]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:27 compute-0 sshd-session[258463]: Invalid user kyt from 103.146.202.174 port 43112
Dec 03 01:28:27 compute-0 sudo[258661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juitegjydesvfmtdjznnlwsjfmtrcmpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725305.34203-165-148324838040167/AnsiballZ_copy.py'
Dec 03 01:28:27 compute-0 sshd-session[258463]: Received disconnect from 103.146.202.174 port 43112:11: Bye Bye [preauth]
Dec 03 01:28:27 compute-0 sshd-session[258463]: Disconnected from invalid user kyt 103.146.202.174 port 43112 [preauth]
Dec 03 01:28:27 compute-0 sudo[258661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:27 compute-0 python3.9[258663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725305.34203-165-148324838040167/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=218641854d443cf9f2580943ef1d852a26c0c89e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:27 compute-0 sudo[258661]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:28 compute-0 ceph-mon[192821]: pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:28:28
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images']
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:28:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:29 compute-0 sudo[258814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elmnbblprvnjdxamcpiswglbxzngxkvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725308.8860426-165-215100664161251/AnsiballZ_stat.py'
Dec 03 01:28:29 compute-0 sudo[258814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:29 compute-0 python3.9[258816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:29 compute-0 sudo[258814]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:29 compute-0 podman[158098]: time="2025-12-03T01:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6838 "" "Go-http-client/1.1"
Dec 03 01:28:30 compute-0 ceph-mon[192821]: pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:30 compute-0 sudo[258937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilyneabywohiyecnbgaxpntxkiwlljfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725308.8860426-165-215100664161251/AnsiballZ_copy.py'
Dec 03 01:28:30 compute-0 sudo[258937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:30 compute-0 python3.9[258939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725308.8860426-165-215100664161251/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2a64f1b8009feb5d4193c68d35401643b8ae94ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:30 compute-0 sudo[258937]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:28:32 compute-0 sudo[259089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtduauctxtoquxuoiguhtsjiialmpnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725311.51834-165-103598946085168/AnsiballZ_stat.py'
Dec 03 01:28:32 compute-0 sudo[259089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:32 compute-0 ceph-mon[192821]: pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:32 compute-0 python3.9[259091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:32 compute-0 sudo[259089]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:32 compute-0 sudo[259212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxnofblqdkhqmengwzrirxpkzknywgpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725311.51834-165-103598946085168/AnsiballZ_copy.py'
Dec 03 01:28:32 compute-0 sudo[259212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:33 compute-0 python3.9[259214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725311.51834-165-103598946085168/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ffe41f9485555366da5b2c6bd47d14387ba26ee1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:33 compute-0 sudo[259212]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:34 compute-0 ceph-mon[192821]: pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:34 compute-0 sudo[259364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbpkawrncdhzigdjtbiopxzmgxwqrugp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725313.508308-209-40406734886261/AnsiballZ_file.py'
Dec 03 01:28:34 compute-0 sudo[259364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:34 compute-0 python3.9[259366]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:34 compute-0 sudo[259364]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:35 compute-0 sudo[259516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgcsobalwabxsepemfbrsroqpdeysszg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725314.5477319-209-131172579101895/AnsiballZ_file.py'
Dec 03 01:28:35 compute-0 sudo[259516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:35 compute-0 python3.9[259518]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:35 compute-0 sudo[259516]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:36 compute-0 ceph-mon[192821]: pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:36 compute-0 sudo[259668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrjjmegtizsxfcxukfgcjeeusitgnuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725315.6360204-224-252228506926573/AnsiballZ_stat.py'
Dec 03 01:28:36 compute-0 sudo[259668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:36 compute-0 python3.9[259670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:36 compute-0 sudo[259668]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:36 compute-0 sudo[259700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:36 compute-0 sudo[259700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:36 compute-0 sudo[259700]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:36 compute-0 sudo[259788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydrkjfusrydowyvxdalesoxwxnwtyxnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725315.6360204-224-252228506926573/AnsiballZ_file.py'
Dec 03 01:28:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:36 compute-0 sudo[259788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:36 compute-0 sudo[259755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:28:36 compute-0 sudo[259755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:36 compute-0 sudo[259755]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:37 compute-0 sudo[259799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:37 compute-0 sudo[259799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:37 compute-0 sudo[259799]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:37 compute-0 sshd-session[258488]: Connection closed by authenticating user root 193.32.162.157 port 53262 [preauth]
Dec 03 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:37 compute-0 python3.9[259796]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:37 compute-0 sudo[259788]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:37 compute-0 sudo[259824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:28:37 compute-0 sudo[259824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
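The pg_autoscaler targets above are reproducible as usage_ratio * bias * (OSD count * mon_target_pg_per_osd): with this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100 the multiplier is 300, which matches every logged value (e.g. 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337 for '.mgr'). A sketch with the multiplier inferred from the log rather than read from the cluster; the quantization floor varies per pool (cephfs metadata pools default to pg_num_min=16, which is why cephfs.cephfs.meta lands on 16 while '.mgr' can drop to 1):

    import math

    def pg_target(usage_ratio, bias, osds=3, target_per_osd=100):
        # assumed formula, validated against the log lines above
        return usage_ratio * bias * osds * target_per_osd

    def quantize(target, floor):
        # nearest power of two, clamped to the pool's minimum
        if target <= floor:
            return floor
        return 2 ** round(math.log2(target))

    print(quantize(pg_target(7.185749983720779e-06, 1.0), floor=1))   # 1, '.mgr'
    print(quantize(pg_target(5.087256625643029e-07, 4.0), floor=16))  # 16, cephfs.cephfs.meta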
Dec 03 01:28:37 compute-0 sudo[259824]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:28:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:28:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:28:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:28:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ddadf2f6-92e5-4800-991f-a6f22c4a8c8a does not exist
Dec 03 01:28:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7d1d7bb-8346-4277-b80d-4d5a3e5361ad does not exist
Dec 03 01:28:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e8b6bbfd-6506-44be-bd4b-aced31f43a8c does not exist
Dec 03 01:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:28:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:28:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:28:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:28:38 compute-0 ceph-mon[192821]: pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
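The handle_command/audit pairs above are the cephadm mgr module driving the monitor over mon_command. The same calls can be issued from Python with the rados binding; a sketch assuming a reachable cluster and a client.admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same command dispatched repeatedly in the audit log above.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(out.decode() if ret == 0 else errs)
        # And the destroyed-OSD query from 01:28:38.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd tree", "states": ["destroyed"],
                        "format": "json"}), b"")
        print(json.loads(out) if ret == 0 else errs)
    finally:
        cluster.shutdown()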
Dec 03 01:28:38 compute-0 sudo[260050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbxihshdbpkbjtnnxvjuovfvzlmndewu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725317.5093617-224-224181456553664/AnsiballZ_stat.py'
Dec 03 01:28:38 compute-0 sudo[260050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:38 compute-0 sudo[260011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:38 compute-0 sudo[260011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:38 compute-0 sudo[260011]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:38 compute-0 sudo[260057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:28:38 compute-0 sudo[260057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:38 compute-0 sudo[260057]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:38 compute-0 python3.9[260054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:38 compute-0 sudo[260050]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:38 compute-0 sudo[260082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:38 compute-0 sudo[260082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:38 compute-0 sudo[260082]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:38 compute-0 sudo[260121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:28:38 compute-0 sudo[260121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
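The cephadm COMMAND above reads a config/keyring payload on stdin (--config-json -) and has ceph-volume batch-prepare three pre-created LVs, skipping systemd unit creation (--no-systemd) because cephadm deploys its own units afterwards. A subprocess sketch of that invocation; the payload contents are placeholders here because the log never shows them:

    import json, subprocess

    fsid = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    cephadm = ("/var/lib/ceph/%s/cephadm.31206ab20142c8051b6384b731ef7ef7"
               "af2407447fac35b7291e90720452ed8d" % fsid)
    payload = {"config": "...", "keyring": "..."}  # supplied by the mgr; not logged
    subprocess.run(
        ["/bin/python3", cephadm,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f"
                    "1336506074267a4b47c1bd914a00fec0",
         "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"],
        input=json.dumps(payload).encode(), check=True)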
Dec 03 01:28:38 compute-0 sudo[260214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjojafruqwyksytarzbaezcldabiqtbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725317.5093617-224-224181456553664/AnsiballZ_file.py'
Dec 03 01:28:38 compute-0 sudo[260214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:39 compute-0 python3.9[260220]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:39 compute-0 sudo[260214]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.157864956 +0000 UTC m=+0.080091720 container create 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.12462059 +0000 UTC m=+0.046847364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:28:39 compute-0 systemd[1]: Started libpod-conmon-51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644.scope.
Dec 03 01:28:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.330166846 +0000 UTC m=+0.252393620 container init 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 03 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.346942349 +0000 UTC m=+0.269169123 container start 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.353710561 +0000 UTC m=+0.275937325 container attach 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:28:39 compute-0 admiring_elgamal[260287]: 167 167
Dec 03 01:28:39 compute-0 systemd[1]: libpod-51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644.scope: Deactivated successfully.
Dec 03 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.360310879 +0000 UTC m=+0.282537653 container died 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-57899f17c6b7756174482355245870482f8767bfe225151e168db9741eac29a7-merged.mount: Deactivated successfully.
Dec 03 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.44746418 +0000 UTC m=+0.369690964 container remove 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:28:39 compute-0 systemd[1]: libpod-conmon-51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644.scope: Deactivated successfully.
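The create -> init -> start -> attach -> died -> remove sequence above is one short-lived cephadm helper container; its only output, "167 167", matches the uid:gid pair the ceph user and group use in these images. Any throwaway container produces the same event chain; a sketch for observing it (podman events replays the records seen interleaved above):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f"
             "1336506074267a4b47c1bd914a00fec0")
    # A --rm one-shot reproduces the create/init/start/attach/died/remove chain.
    subprocess.run(["podman", "run", "--rm", image, "true"], check=True)
    subprocess.run(["podman", "events", "--since", "1m", "--stream=false"],
                   check=True)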
Dec 03 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.748846547 +0000 UTC m=+0.080790931 container create 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.727382414 +0000 UTC m=+0.059326878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:28:39 compute-0 systemd[1]: Started libpod-conmon-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope.
Dec 03 01:28:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.896348415 +0000 UTC m=+0.228292839 container init 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.932113096 +0000 UTC m=+0.264057490 container start 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.939146837 +0000 UTC m=+0.271091351 container attach 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:28:40 compute-0 sudo[260455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpdefyupaprfxrlvggxpithlnvfdexqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725319.4490457-224-244504971026350/AnsiballZ_stat.py'
Dec 03 01:28:40 compute-0 sudo[260455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:40 compute-0 ceph-mon[192821]: pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:40 compute-0 python3.9[260457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:40 compute-0 sudo[260455]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:40 compute-0 sudo[260541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iapubfwhrhfstpeuljhuhqccpwhhnork ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725319.4490457-224-244504971026350/AnsiballZ_file.py'
Dec 03 01:28:40 compute-0 sudo[260541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:41 compute-0 python3.9[260545]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:41 compute-0 sudo[260541]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:41 compute-0 intelligent_euler[260423]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:28:41 compute-0 intelligent_euler[260423]: --> relative data size: 1.0
Dec 03 01:28:41 compute-0 intelligent_euler[260423]: --> All data devices are unavailable
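"All data devices are unavailable" here is expected rather than an error: the three LVs handed to lvm batch already carry OSDs from the earlier deployment pass, so there is nothing new to prepare, and cephadm immediately follows up with the lvm list call logged at 01:28:41 to read the existing OSDs back. A sketch of that check against the JSON it returns (the structure matches the festive_margulis output further below):

    import json, subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, check=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            # an existing ceph.osd_id tag marks the LV as already consumed
            print(osd_id, lv["lv_path"], lv["tags"].get("ceph.osd_id"))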
Dec 03 01:28:41 compute-0 systemd[1]: libpod-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope: Deactivated successfully.
Dec 03 01:28:41 compute-0 systemd[1]: libpod-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope: Consumed 1.302s CPU time.
Dec 03 01:28:41 compute-0 podman[260374]: 2025-12-03 01:28:41.299067129 +0000 UTC m=+1.631011553 container died 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4-merged.mount: Deactivated successfully.
Dec 03 01:28:41 compute-0 podman[260374]: 2025-12-03 01:28:41.409307941 +0000 UTC m=+1.741252335 container remove 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:28:41 compute-0 systemd[1]: libpod-conmon-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope: Deactivated successfully.
Dec 03 01:28:41 compute-0 sudo[260121]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:41 compute-0 podman[260569]: 2025-12-03 01:28:41.473645848 +0000 UTC m=+0.106952655 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 03 01:28:41 compute-0 podman[260561]: 2025-12-03 01:28:41.492459421 +0000 UTC m=+0.135969483 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:28:41 compute-0 podman[260571]: 2025-12-03 01:28:41.515093708 +0000 UTC m=+0.155344653 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:28:41 compute-0 podman[260573]: 2025-12-03 01:28:41.517346386 +0000 UTC m=+0.148107806 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
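The four health_status=healthy records above come from podman's timer-driven healthchecks, each defined by the 'healthcheck' test in the container's config_data. The same probe can be run on demand; exit status 0 is what podman reports as healthy:

    import subprocess

    for name in ("openstack_network_exporter", "node_exporter",
                 "ceilometer_agent_compute", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else
              "unhealthy (rc=%d)" % r.returncode)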
Dec 03 01:28:41 compute-0 sudo[260645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:41 compute-0 sudo[260645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:41 compute-0 sudo[260645]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:41 compute-0 sudo[260682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:28:41 compute-0 sudo[260682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:41 compute-0 sudo[260682]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:41 compute-0 sudo[260707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:41 compute-0 sudo[260707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:41 compute-0 sudo[260707]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:41 compute-0 sudo[260745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:28:41 compute-0 sudo[260745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:42 compute-0 ceph-mon[192821]: pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.396978933 +0000 UTC m=+0.071800532 container create 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec 03 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.370356045 +0000 UTC m=+0.045177714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:28:42 compute-0 systemd[1]: Started libpod-conmon-985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc.scope.
Dec 03 01:28:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.552240123 +0000 UTC m=+0.227061802 container init 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.568861661 +0000 UTC m=+0.243683290 container start 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.575962474 +0000 UTC m=+0.250784143 container attach 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:28:42 compute-0 fervent_lamarr[260895]: 167 167
Dec 03 01:28:42 compute-0 systemd[1]: libpod-985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc.scope: Deactivated successfully.
Dec 03 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.582454758 +0000 UTC m=+0.257276377 container died 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 01:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8967eb5eb3956829039ea588900493fb07642ec7e2293d192ef9ae74e407f589-merged.mount: Deactivated successfully.
Dec 03 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.665957369 +0000 UTC m=+0.340778998 container remove 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:28:42 compute-0 systemd[1]: libpod-conmon-985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc.scope: Deactivated successfully.
Dec 03 01:28:42 compute-0 sudo[260979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shmwdmlyzfexijgildpjakbloviypctr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725322.2528298-259-48643966596483/AnsiballZ_file.py'
Dec 03 01:28:42 compute-0 sudo[260979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:42 compute-0 podman[260985]: 2025-12-03 01:28:42.947385358 +0000 UTC m=+0.078332067 container create 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:42.917826973 +0000 UTC m=+0.048773692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:28:43 compute-0 systemd[1]: Started libpod-conmon-5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329.scope.
Dec 03 01:28:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:43 compute-0 python3.9[260992]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:43.12237492 +0000 UTC m=+0.253321609 container init 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:28:43 compute-0 sudo[260979]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:43.152868733 +0000 UTC m=+0.283815392 container start 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:43.157162832 +0000 UTC m=+0.288109541 container attach 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:28:43 compute-0 festive_margulis[261002]: {
Dec 03 01:28:43 compute-0 festive_margulis[261002]:     "0": [
Dec 03 01:28:43 compute-0 festive_margulis[261002]:         {
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "devices": [
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "/dev/loop3"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             ],
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_name": "ceph_lv0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_size": "21470642176",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "name": "ceph_lv0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "tags": {
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cluster_name": "ceph",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.crush_device_class": "",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.encrypted": "0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osd_id": "0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.type": "block",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.vdo": "0"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             },
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "type": "block",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "vg_name": "ceph_vg0"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:         }
Dec 03 01:28:43 compute-0 festive_margulis[261002]:     ],
Dec 03 01:28:43 compute-0 festive_margulis[261002]:     "1": [
Dec 03 01:28:43 compute-0 festive_margulis[261002]:         {
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "devices": [
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "/dev/loop4"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             ],
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_name": "ceph_lv1",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_size": "21470642176",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "name": "ceph_lv1",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "tags": {
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cluster_name": "ceph",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.crush_device_class": "",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.encrypted": "0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osd_id": "1",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.type": "block",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.vdo": "0"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             },
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "type": "block",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "vg_name": "ceph_vg1"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:         }
Dec 03 01:28:43 compute-0 festive_margulis[261002]:     ],
Dec 03 01:28:43 compute-0 festive_margulis[261002]:     "2": [
Dec 03 01:28:43 compute-0 festive_margulis[261002]:         {
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "devices": [
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "/dev/loop5"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             ],
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_name": "ceph_lv2",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_size": "21470642176",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "name": "ceph_lv2",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "tags": {
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.cluster_name": "ceph",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.crush_device_class": "",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.encrypted": "0",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osd_id": "2",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.type": "block",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:                 "ceph.vdo": "0"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             },
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "type": "block",
Dec 03 01:28:43 compute-0 festive_margulis[261002]:             "vg_name": "ceph_vg2"
Dec 03 01:28:43 compute-0 festive_margulis[261002]:         }
Dec 03 01:28:43 compute-0 festive_margulis[261002]:     ]
Dec 03 01:28:43 compute-0 festive_margulis[261002]: }
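The festive_margulis JSON above maps each OSD id ("0" through "2") to the logical volume backing it; it has the shape of `ceph-volume lvm list --format json` output (the matching cephadm ceph-volume invocations appear elsewhere in this log, so the exact subcommand is an inference). A minimal sketch for pulling the OSD-to-device mapping out of a captured copy — the file name lvm_list.json is illustrative, not something this job wrote:

    import json

    # Hypothetical capture of the JSON block printed by festive_margulis above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Keys are OSD ids as strings; each value is a list of LV records.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']})")

Run against the listing above, this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5.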
Dec 03 01:28:44 compute-0 systemd[1]: libpod-5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329.scope: Deactivated successfully.
Dec 03 01:28:44 compute-0 podman[260985]: 2025-12-03 01:28:44.023225462 +0000 UTC m=+1.154172141 container died 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:28:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828-merged.mount: Deactivated successfully.
Dec 03 01:28:44 compute-0 podman[260985]: 2025-12-03 01:28:44.114701472 +0000 UTC m=+1.245648131 container remove 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:28:44 compute-0 ceph-mon[192821]: pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:44 compute-0 systemd[1]: libpod-conmon-5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329.scope: Deactivated successfully.
Dec 03 01:28:44 compute-0 sudo[260745]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:44 compute-0 sudo[261074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:44 compute-0 sudo[261074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:44 compute-0 sudo[261074]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:44 compute-0 sudo[261126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:28:44 compute-0 sudo[261126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:44 compute-0 sudo[261126]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:44 compute-0 sudo[261172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:44 compute-0 sudo[261172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:44 compute-0 sudo[261172]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:44 compute-0 podman[261171]: 2025-12-03 01:28:44.516208338 +0000 UTC m=+0.103784310 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
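The config_data label embedded in the health_status record above (and in the kepler and podman_exporter records later in this section) is a Python-literal dict, not JSON — note the single quotes — so json.loads will reject it. ast.literal_eval parses it safely; a sketch, with the value abbreviated from the line above:

    import ast

    # Abbreviated from the config_data label logged above; literal_eval only
    # evaluates literals, so nothing from the log line can execute as code.
    config_data = ("{'image': 'quay.io/podified-antelope-centos9/"
                   "openstack-ceilometer-ipmi:current-podified', "
                   "'healthcheck': {'test': '/openstack/healthcheck ipmi'}}")
    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])  # -> /openstack/healthcheck ipmi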
Dec 03 01:28:44 compute-0 sudo[261240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:28:44 compute-0 sudo[261240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:44 compute-0 sudo[261290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwlhhoemtuavuhsjizvhoglkghgiauuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725324.1327977-259-182357402735832/AnsiballZ_file.py'
Dec 03 01:28:44 compute-0 sudo[261290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:44 compute-0 python3.9[261293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:44 compute-0 sudo[261290]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.209634126 +0000 UTC m=+0.068440740 container create d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:28:45 compute-0 systemd[1]: Started libpod-conmon-d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65.scope.
Dec 03 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.188359769 +0000 UTC m=+0.047166453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:28:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.350111314 +0000 UTC m=+0.208918028 container init d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.370471484 +0000 UTC m=+0.229278128 container start d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.37634351 +0000 UTC m=+0.235150204 container attach d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:28:45 compute-0 pensive_shaw[261418]: 167 167
Dec 03 01:28:45 compute-0 systemd[1]: libpod-d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65.scope: Deactivated successfully.
Dec 03 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.382000909 +0000 UTC m=+0.240807523 container died d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-405b6de4d6e5ae48f1caa49be21b9970b708adf099712e39406dc301f2983022-merged.mount: Deactivated successfully.
Dec 03 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.447433099 +0000 UTC m=+0.306239713 container remove d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 01:28:45 compute-0 systemd[1]: libpod-conmon-d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65.scope: Deactivated successfully.
Dec 03 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.692895961 +0000 UTC m=+0.083764640 container create 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:28:45 compute-0 systemd[1]: Started libpod-conmon-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope.
Dec 03 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.664748908 +0000 UTC m=+0.055617587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:28:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:28:45 compute-0 sudo[261537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltxdsasaotndcrgbrbpfwcqunttekvmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725325.1605215-274-141491256616106/AnsiballZ_stat.py'
Dec 03 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:45 compute-0 sudo[261537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.840289486 +0000 UTC m=+0.231158145 container init 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.85176943 +0000 UTC m=+0.242638069 container start 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.856400128 +0000 UTC m=+0.247268797 container attach 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:28:45 compute-0 python3.9[261541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:46 compute-0 sudo[261537]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:46 compute-0 ceph-mon[192821]: pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:46 compute-0 sudo[261621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzvcuhnwinzlutnjozvtzkaisvbmshva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725325.1605215-274-141491256616106/AnsiballZ_file.py'
Dec 03 01:28:46 compute-0 sudo[261621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:46 compute-0 python3.9[261623]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:46 compute-0 sudo[261621]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]: {
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "osd_id": 2,
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "type": "bluestore"
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:     },
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "osd_id": 1,
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "type": "bluestore"
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:     },
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "osd_id": 0,
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:         "type": "bluestore"
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]:     }
Dec 03 01:28:46 compute-0 thirsty_zhukovsky[261536]: }
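This thirsty_zhukovsky block is the output of the `ceph-volume ... raw list --format json` call that sudo[261240] launched through cephadm at 01:28:44; unlike the lvm listing earlier, it is keyed by osd_uuid. A short sketch (raw_list.json is again an assumed capture) that reorders it by OSD id for comparison with the lvm view:

    import json

    # Assumed capture of the raw-list JSON printed above.
    with open("raw_list.json") as f:
        raw = json.load(f)

    for uuid, osd in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']} ({osd['type']}): {osd['device']} "
              f"cluster={osd['ceph_fsid']}")

All three OSDs report type bluestore on the /dev/mapper/ceph_vg*-ceph_lv* devices, consistent with the lv_tags shown earlier.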
Dec 03 01:28:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:46 compute-0 podman[261492]: 2025-12-03 01:28:46.984234859 +0000 UTC m=+1.375103508 container died 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:28:46 compute-0 systemd[1]: libpod-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope: Deactivated successfully.
Dec 03 01:28:46 compute-0 systemd[1]: libpod-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope: Consumed 1.132s CPU time.
Dec 03 01:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e-merged.mount: Deactivated successfully.
Dec 03 01:28:47 compute-0 podman[261492]: 2025-12-03 01:28:47.115746358 +0000 UTC m=+1.506615037 container remove 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:28:47 compute-0 systemd[1]: libpod-conmon-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope: Deactivated successfully.
Dec 03 01:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:47 compute-0 sudo[261240]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:28:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 05cb2194-d464-4d1a-af5d-d1fad4920941 does not exist
Dec 03 01:28:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8385837d-c00a-488e-b786-32c7d5de71ec does not exist
Dec 03 01:28:47 compute-0 sudo[261757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:28:47 compute-0 sudo[261757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:47 compute-0 sudo[261757]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:47 compute-0 sudo[261799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:28:47 compute-0 sudo[261799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:28:47 compute-0 sudo[261799]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:47 compute-0 sudo[261862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfarqxldwrdpjkixiuhjagoawxokjvdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725326.9917424-274-150535677246038/AnsiballZ_stat.py'
Dec 03 01:28:47 compute-0 sudo[261862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:47 compute-0 python3.9[261864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:47 compute-0 sudo[261862]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:48 compute-0 ceph-mon[192821]: pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:28:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:28:48 compute-0 sudo[261940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksehqdbrqgplcjohhmwspsvqjlsgsdfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725326.9917424-274-150535677246038/AnsiballZ_file.py'
Dec 03 01:28:48 compute-0 sudo[261940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:48 compute-0 python3.9[261942]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:48 compute-0 sudo[261940]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:48 compute-0 sshd-session[259847]: Connection closed by authenticating user root 193.32.162.157 port 50266 [preauth]
Dec 03 01:28:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:49 compute-0 ceph-mon[192821]: pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:49 compute-0 sudo[262093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatduawiynyimtrdqaykjcelkamscvab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725328.7379441-274-145713636621464/AnsiballZ_stat.py'
Dec 03 01:28:49 compute-0 sudo[262093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:49 compute-0 python3.9[262095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:49 compute-0 sudo[262093]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:50 compute-0 sudo[262172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajudektorbwitypplvgogrnlirwpumav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725328.7379441-274-145713636621464/AnsiballZ_file.py'
Dec 03 01:28:50 compute-0 sudo[262172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:50 compute-0 python3.9[262174]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:50 compute-0 sudo[262172]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:50 compute-0 podman[262199]: 2025-12-03 01:28:50.867878119 +0000 UTC m=+0.119688607 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec 03 01:28:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:51 compute-0 sudo[262344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idewsjhyucgqwetsohdnyvnscbtpghmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725331.376544-325-96495942385844/AnsiballZ_file.py'
Dec 03 01:28:51 compute-0 sudo[262344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:52 compute-0 ceph-mon[192821]: pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:52 compute-0 python3.9[262346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:52 compute-0 sudo[262344]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:52 compute-0 sshd-session[262347]: Received disconnect from 80.253.31.232 port 40888:11: Bye Bye [preauth]
Dec 03 01:28:52 compute-0 sshd-session[262347]: Disconnected from authenticating user root 80.253.31.232 port 40888 [preauth]
Dec 03 01:28:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:53 compute-0 sudo[262498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpchbkfnzonmcfbraxdgpwvdfcitbsho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725332.502018-333-203180233713053/AnsiballZ_stat.py'
Dec 03 01:28:53 compute-0 sudo[262498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:53 compute-0 python3.9[262500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:53 compute-0 sudo[262498]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:53 compute-0 sudo[262576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxlhviiadjefbsxtxyoczhfnuvuzqhwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725332.502018-333-203180233713053/AnsiballZ_file.py'
Dec 03 01:28:53 compute-0 sudo[262576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:54 compute-0 ceph-mon[192821]: pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:54 compute-0 python3.9[262578]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:54 compute-0 sudo[262576]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:55 compute-0 sudo[262728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhgzxzizjwzeggxfwdzzxvawydbdrohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725334.4137342-346-4395322647763/AnsiballZ_file.py'
Dec 03 01:28:55 compute-0 sudo[262728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:55 compute-0 python3.9[262730]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:55 compute-0 sudo[262728]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:56 compute-0 ceph-mon[192821]: pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:56 compute-0 sudo[262895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhwjgshuxcffpxqlkoyivlfficjpqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725336.2681499-354-272614734372190/AnsiballZ_stat.py'
Dec 03 01:28:56 compute-0 podman[262854]: 2025-12-03 01:28:56.840607072 +0000 UTC m=+0.113925733 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:28:56 compute-0 sudo[262895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:57 compute-0 python3.9[262904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:28:57 compute-0 sudo[262895]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:28:58 compute-0 ceph-mon[192821]: pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:58 compute-0 sudo[262981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwwkbdzkvhgdzzpfafdgihdmtcxzecrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725336.2681499-354-272614734372190/AnsiballZ_file.py'
Dec 03 01:28:58 compute-0 sudo[262981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:28:58 compute-0 python3.9[262983]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:28:58 compute-0 sudo[262981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:28:59 compute-0 sudo[263133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjptcucqlwspqlwojcsswqinpnycowca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725338.8095393-367-30016703410075/AnsiballZ_file.py'
Dec 03 01:28:59 compute-0 sudo[263133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:28:59 compute-0 python3.9[263135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:28:59 compute-0 sudo[263133]: pam_unix(sudo:session): session closed for user root
Dec 03 01:28:59 compute-0 podman[158098]: time="2025-12-03T01:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
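Those two GETs are podman_exporter polling the libpod REST API over the rootful socket (/run/podman/podman.sock, per its config_data above). The same containers/json query can be issued from the standard library alone — the socket path and API version are taken from the log; everything else here is an illustrative sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTPConnection carried over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")  # host is only used for the Host header
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # needs root, like the exporter
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")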
Dec 03 01:29:00 compute-0 ceph-mon[192821]: pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:00 compute-0 sudo[263285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwljnvogxtaftrtjkgqspizqrwavhzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725339.894152-375-126828885757992/AnsiballZ_stat.py'
Dec 03 01:29:00 compute-0 sudo[263285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:00 compute-0 python3.9[263287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:00 compute-0 sudo[263285]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:00 compute-0 sshd-session[262019]: Connection closed by authenticating user root 193.32.162.157 port 53714 [preauth]
Dec 03 01:29:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:29:01 compute-0 sudo[263409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgmzjafjqcyssrawhtztfhejosvrsapz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725339.894152-375-126828885757992/AnsiballZ_copy.py'
Dec 03 01:29:01 compute-0 sudo[263409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:01 compute-0 python3.9[263411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725339.894152-375-126828885757992/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
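The copy task above logged the SHA-1 it expected for the nova CA bundle (checksum=93ed2f…). A one-off sketch to confirm the file on disk still matches — the path and digest are copied verbatim from the log line; nothing else is assumed:

    import hashlib

    path = "/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem"
    expected = "93ed2f21639fbbc78ab23db012b5cabf31590b1b"
    with open(path, "rb") as f:
        actual = hashlib.sha1(f.read()).hexdigest()
    print("OK" if actual == expected else f"MISMATCH: {actual}")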
Dec 03 01:29:01 compute-0 sudo[263409]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:02 compute-0 ceph-mon[192821]: pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:02 compute-0 sudo[263561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vesqhgsxjbeuwceoxkffskagdaxmbaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725342.1941712-391-191586229204350/AnsiballZ_file.py'
Dec 03 01:29:02 compute-0 sudo[263561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:02 compute-0 python3.9[263563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:29:02 compute-0 sudo[263561]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:03 compute-0 sudo[263714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkvbhlgizkkvmwwxwcuwacgcpymgdefv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725343.3003666-399-213654524588804/AnsiballZ_stat.py'
Dec 03 01:29:03 compute-0 sudo[263714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:04 compute-0 ceph-mon[192821]: pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:04 compute-0 python3.9[263716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:04 compute-0 sudo[263714]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:04 compute-0 sudo[263792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjgrrpkkiqbqifeqnflezkkldgijumjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725343.3003666-399-213654524588804/AnsiballZ_file.py'
Dec 03 01:29:04 compute-0 sudo[263792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:04 compute-0 python3.9[263794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:04 compute-0 sudo[263792]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:05 compute-0 sudo[263944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxvovelwyyhbmoyegjznntuqyggznhxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725345.2642279-412-17077669411542/AnsiballZ_file.py'
Dec 03 01:29:05 compute-0 sudo[263944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:06 compute-0 python3.9[263946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:29:06 compute-0 sudo[263944]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:06 compute-0 ceph-mon[192821]: pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:06 compute-0 sudo[264096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbjhvzfbiqvuqkjjphzkjesiiqktbjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725346.371127-420-16390221109684/AnsiballZ_stat.py'
Dec 03 01:29:06 compute-0 sudo[264096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:07 compute-0 python3.9[264098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:07 compute-0 sudo[264096]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:07 compute-0 sudo[264174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddmnqeinmvrtrfcgaggvvpfzpqiqluv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725346.371127-420-16390221109684/AnsiballZ_file.py'
Dec 03 01:29:07 compute-0 sudo[264174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:07 compute-0 python3.9[264176]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:07 compute-0 sudo[264174]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:08 compute-0 ceph-mon[192821]: pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:09 compute-0 sudo[264326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzsiezjhadfsgfhpjzkqayvzwqvmjuzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725348.2169538-433-23141862550320/AnsiballZ_file.py'
Dec 03 01:29:09 compute-0 sudo[264326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:09 compute-0 python3.9[264328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:29:09 compute-0 sudo[264326]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:10 compute-0 ceph-mon[192821]: pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:10 compute-0 sudo[264478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hthuoutmjynbeknqxghtlejsthvordjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725349.8369641-441-215053344701328/AnsiballZ_stat.py'
Dec 03 01:29:10 compute-0 sudo[264478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:10 compute-0 python3.9[264480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:10 compute-0 sudo[264478]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:11 compute-0 podman[264575]: 2025-12-03 01:29:11.82102513 +0000 UTC m=+0.101890378 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
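The node_exporter healthcheck event above also documents its full command line: the systemd collector is enabled with a unit include pattern, most hardware collectors are disabled, and port 9100 is published on the host. A quick scrape of that endpoint, assuming plain HTTP; the --web.config.file it is started with may in fact enforce TLS or basic auth:

```python
# Scrape the node_exporter published via 'ports': ['9100:9100'] above and
# print the systemd collector's unit-state series (that collector is
# explicitly enabled in the logged command line).
import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)
```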
Dec 03 01:29:11 compute-0 sudo[264651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxndkooyclctjtohojsubianhlhingog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725349.8369641-441-215053344701328/AnsiballZ_copy.py'
Dec 03 01:29:11 compute-0 sudo[264651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:11 compute-0 podman[264577]: 2025-12-03 01:29:11.873710318 +0000 UTC m=+0.141589345 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 03 01:29:11 compute-0 podman[264576]: 2025-12-03 01:29:11.89471846 +0000 UTC m=+0.168478145 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec 03 01:29:11 compute-0 podman[264578]: 2025-12-03 01:29:11.895375629 +0000 UTC m=+0.155033080 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
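These health_status=healthy events are emitted by podman's periodic healthcheck timers, one per container. The same state can be read back on demand with a generic podman inspect; the container names are the ones from the three lines above:

```python
# Read back the health state that the periodic events above report.
import json
import subprocess

for name in ("node_exporter", "ceilometer_agent_compute", "ovn_controller"):
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(name, health.get("Status"), "failing_streak:", health.get("FailingStreak"))
```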
Dec 03 01:29:12 compute-0 python3.9[264675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725349.8369641-441-215053344701328/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:12 compute-0 sudo[264651]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:12 compute-0 ceph-mon[192821]: pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:12 compute-0 sshd-session[263311]: Connection closed by authenticating user root 193.32.162.157 port 37430 [preauth]
Dec 03 01:29:12 compute-0 sshd-session[264711]: Received disconnect from 173.249.50.59 port 38104:11: Bye Bye [preauth]
Dec 03 01:29:12 compute-0 sshd-session[264711]: Disconnected from authenticating user root 173.249.50.59 port 38104 [preauth]
Dec 03 01:29:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:12 compute-0 sudo[264842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noqhkndccykcbkxrqgwxcswxzqjwlksn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725352.384851-457-196367185559436/AnsiballZ_file.py'
Dec 03 01:29:12 compute-0 sudo[264842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:13 compute-0 ceph-mon[192821]: pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:13 compute-0 python3.9[264844]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:29:13 compute-0 sudo[264842]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:14 compute-0 sudo[264994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yydgimdneygqhpffpkipfadvvyraumjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725353.5339668-465-218165496003201/AnsiballZ_stat.py'
Dec 03 01:29:14 compute-0 sudo[264994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:14 compute-0 python3.9[264996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:14 compute-0 sudo[264994]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:14 compute-0 podman[265041]: 2025-12-03 01:29:14.872109064 +0000 UTC m=+0.151687424 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 01:29:14 compute-0 sudo[265092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmydctdffanvaskmuszndefdupwhljos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725353.5339668-465-218165496003201/AnsiballZ_file.py'
Dec 03 01:29:14 compute-0 sudo[265092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:15 compute-0 python3.9[265094]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:15 compute-0 sudo[265092]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:16 compute-0 ceph-mon[192821]: pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:16 compute-0 sudo[265245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzacbrkvxgvqdjajzjajdspiwmqqvwoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725355.5950682-478-26178029432187/AnsiballZ_file.py'
Dec 03 01:29:16 compute-0 sudo[265245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:16 compute-0 python3.9[265247]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:29:16 compute-0 sudo[265245]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:17 compute-0 sudo[265397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnerlctjgltwcicjbuhkwaxqevgzufue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725356.8176374-486-217493365754109/AnsiballZ_stat.py'
Dec 03 01:29:17 compute-0 sudo[265397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:17 compute-0 python3.9[265399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:17 compute-0 sudo[265397]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:18 compute-0 ceph-mon[192821]: pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:18 compute-0 sudo[265477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rulwfnufprlwargpzktyydaedecqstwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725356.8176374-486-217493365754109/AnsiballZ_file.py'
Dec 03 01:29:18 compute-0 sudo[265477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:18 compute-0 sshd-session[265456]: Received disconnect from 34.66.72.251 port 55778:11: Bye Bye [preauth]
Dec 03 01:29:18 compute-0 sshd-session[265456]: Disconnected from authenticating user root 34.66.72.251 port 55778 [preauth]
Dec 03 01:29:18 compute-0 python3.9[265479]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:18 compute-0 sudo[265477]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:18 compute-0 sshd-session[255784]: Connection closed by 192.168.122.30 port 55404
Dec 03 01:29:18 compute-0 sshd-session[255781]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:29:18 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 03 01:29:18 compute-0 systemd[1]: session-50.scope: Consumed 59.257s CPU time.
Dec 03 01:29:18 compute-0 systemd-logind[800]: Session 50 logged out. Waiting for processes to exit.
Dec 03 01:29:18 compute-0 systemd-logind[800]: Removed session 50.
Dec 03 01:29:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:20 compute-0 ceph-mon[192821]: pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:21 compute-0 podman[265505]: 2025-12-03 01:29:21.929414686 +0000 UTC m=+0.176660319 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 01:29:22 compute-0 ceph-mon[192821]: pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:23 compute-0 sshd-session[264768]: Connection closed by authenticating user root 193.32.162.157 port 51594 [preauth]
Dec 03 01:29:24 compute-0 ceph-mon[192821]: pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:26 compute-0 sshd-session[265526]: Accepted publickey for zuul from 192.168.122.30 port 33760 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:29:26 compute-0 ceph-mon[192821]: pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:26 compute-0 systemd-logind[800]: New session 51 of user zuul.
Dec 03 01:29:26 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 03 01:29:26 compute-0 sshd-session[265526]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:29:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:27 compute-0 ceph-mon[192821]: pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:27 compute-0 sudo[265690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxupvobfecviyoiqphfszwpkuyxwwkmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725366.6272714-22-205716164507754/AnsiballZ_file.py'
Dec 03 01:29:27 compute-0 sudo[265690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:27 compute-0 podman[265654]: 2025-12-03 01:29:27.523177298 +0000 UTC m=+0.128006106 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:29:27 compute-0 python3.9[265696]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:27 compute-0 sudo[265690]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:29:28
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.control', '.rgw.root']
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
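The ceph-mgr block above is a routine housekeeping pass: the balancer evaluates the cluster in upmap mode with a 5% misplaced ceiling and prepares 0/10 changes (nothing to move), the volumes module finds no idle connections to clean, and rbd_support reloads empty mirror-snapshot and trash-purge schedules for each RBD pool. The same information is available on demand via the stock ceph CLI, assuming admin credentials on the node:

```python
# On-demand equivalents of the mgr housekeeping logged above.
import subprocess

for cmd in (["ceph", "balancer", "status"],
            ["ceph", "osd", "pool", "autoscale-status"]):
    print("$", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```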
Dec 03 01:29:28 compute-0 sudo[265853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdphcexbncpxraixvrkfdzahfaruhzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725368.054308-34-42021197726591/AnsiballZ_stat.py'
Dec 03 01:29:28 compute-0 sudo[265853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:29 compute-0 python3.9[265855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:29 compute-0 sudo[265853]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:29 compute-0 podman[158098]: time="2025-12-03T01:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6830 "" "Go-http-client/1.1"
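The two GET requests above are the libpod REST API answering over the podman socket; the podman_exporter container mounts /run/podman/podman.sock for exactly this. A self-contained stdlib client for the same endpoint, with the socket path and API version copied from the log:

```python
# Minimal libpod REST client over the unix socket, matching the logged
# GET /v4.9.3/libpod/containers/json?all=true request.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes of container JSON")
```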
Dec 03 01:29:30 compute-0 ceph-mon[192821]: pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:30 compute-0 sudo[265976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtlnqsxzrbhcbebcjjycmhtykoxcfgcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725368.054308-34-42021197726591/AnsiballZ_copy.py'
Dec 03 01:29:30 compute-0 sudo[265976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:30 compute-0 python3.9[265978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725368.054308-34-42021197726591/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=085db63d611f66658452414c8f83e35d20a7cbf6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:30 compute-0 sudo[265976]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:29:31 compute-0 sudo[266128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyjfmdqnaxvspqlggubybenmayrchfdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725370.9035518-34-228682796998500/AnsiballZ_stat.py'
Dec 03 01:29:31 compute-0 sudo[266128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:31 compute-0 python3.9[266130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:31 compute-0 sudo[266128]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:32 compute-0 ceph-mon[192821]: pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:32 compute-0 sudo[266251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meiiqqkqcrnmjssttjwtpkbgwaoqwlgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725370.9035518-34-228682796998500/AnsiballZ_copy.py'
Dec 03 01:29:32 compute-0 sudo[266251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:32 compute-0 python3.9[266253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725370.9035518-34-228682796998500/.source.conf _original_basename=ceph.conf follow=False checksum=187519a7b5e19437fc29d35550effe70e5660ce7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:32 compute-0 sudo[266251]: pam_unix(sudo:session): session closed for user root
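With the keyring and ceph.conf now in place under /var/lib/openstack/config/ceph, the compute services can reach the cluster as client.openstack. The log records only checksums, so the following shape is hypothetical apart from the file names and the pool names seen in the balancer output above:

```ini
# /var/lib/openstack/config/ceph/ceph.conf  (values hypothetical)
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_host = 192.168.122.100

# /var/lib/openstack/config/ceph/ceph.client.openstack.keyring  (key redacted)
[client.openstack]
key = AQ...==
caps mon = "profile rbd"
caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, profile rbd pool=backups"
```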
Dec 03 01:29:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:33 compute-0 sshd-session[265530]: Connection closed by 192.168.122.30 port 33760
Dec 03 01:29:33 compute-0 sshd-session[265526]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:29:33 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 03 01:29:33 compute-0 systemd[1]: session-51.scope: Consumed 4.872s CPU time.
Dec 03 01:29:33 compute-0 systemd-logind[800]: Session 51 logged out. Waiting for processes to exit.
Dec 03 01:29:33 compute-0 systemd-logind[800]: Removed session 51.
Dec 03 01:29:34 compute-0 ceph-mon[192821]: pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:35 compute-0 sshd-session[265525]: Connection closed by authenticating user root 193.32.162.157 port 37946 [preauth]
Dec 03 01:29:36 compute-0 ceph-mon[192821]: pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
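The pg_autoscaler arithmetic above is reconstructible from the lines themselves: each pool's target is its used-space ratio times its bias times a constant factor of 300, consistent with mon_target_pg_per_osd=100 across the 3 OSDs backing this 60 GiB cluster, and the result is then quantized to a power of two subject to per-pool minimums (the cephfs metadata pool's 16 is presumably its pg_num_min) and a change threshold before anything is resized. A simplified sketch that reproduces the two non-zero rows:

```python
# pg_target ~= used_ratio * bias * (mon_target_pg_per_osd * num_osds).
# With 100 * 3 = 300, 7.185749983720779e-06 * 1.0 * 300 reproduces the
# 0.0021557249951162337 logged for '.mgr'. The real module also applies
# pg_num_min and only acts on >3x deviations; this sketch omits both.
import math

def pg_target(used_ratio, bias, target_pg_per_osd=100, num_osds=3):
    raw = used_ratio * bias * target_pg_per_osd * num_osds
    quantized = max(1, 2 ** math.ceil(math.log2(raw)) if raw > 0 else 1)
    return raw, quantized

for pool, ratio, bias in (
    (".mgr", 7.185749983720779e-06, 1.0),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
):
    raw, quant = pg_target(ratio, bias)
    print(f"{pool}: raw target {raw:.6g}, power-of-two {quant}")
```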
Dec 03 01:29:38 compute-0 ceph-mon[192821]: pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:40 compute-0 ceph-mon[192821]: pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:40 compute-0 sshd-session[266280]: Accepted publickey for zuul from 192.168.122.30 port 40212 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:29:40 compute-0 systemd-logind[800]: New session 52 of user zuul.
Dec 03 01:29:40 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 03 01:29:40 compute-0 sshd-session[266280]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.971 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.972 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:29:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:29:41 compute-0 python3.9[266434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:29:42 compute-0 ceph-mon[192821]: pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 5478 writes, 23K keys, 5478 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 5478 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 5478 writes, 23K keys, 5478 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s
                                            Interval WAL: 5478 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 03 01:29:42 compute-0 podman[266515]: 2025-12-03 01:29:42.872305623 +0000 UTC m=+0.119349318 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:29:42 compute-0 podman[266516]: 2025-12-03 01:29:42.885221153 +0000 UTC m=+0.126326998 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container)
Dec 03 01:29:42 compute-0 podman[266517]: 2025-12-03 01:29:42.908758077 +0000 UTC m=+0.144967812 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:29:42 compute-0 podman[266518]: 2025-12-03 01:29:42.929368757 +0000 UTC m=+0.162622367 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 01:29:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:43 compute-0 sudo[266669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svnzsqvptwftkbmdnevfnwroiysmdwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725382.5020146-34-226986841848874/AnsiballZ_file.py'
Dec 03 01:29:43 compute-0 sudo[266669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:43 compute-0 python3.9[266671]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:29:43 compute-0 sudo[266669]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:44 compute-0 ceph-mon[192821]: pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:44 compute-0 sudo[266821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jngegoxvsnsycecplbuiafvktmplpoco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725383.7316525-34-34639748444294/AnsiballZ_file.py'
Dec 03 01:29:44 compute-0 sudo[266821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:44 compute-0 python3.9[266823]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:29:44 compute-0 sudo[266821]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:45 compute-0 podman[266947]: 2025-12-03 01:29:45.51447691 +0000 UTC m=+0.167104395 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 03 01:29:45 compute-0 python3.9[266989]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:29:46 compute-0 ceph-mon[192821]: pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:46 compute-0 sudo[267144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbjtbtoywaxqupbabneomytobopadhmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725386.1022942-57-188187270956501/AnsiballZ_seboolean.py'
Dec 03 01:29:46 compute-0 sudo[267144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:47 compute-0 python3.9[267146]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 03 01:29:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:47 compute-0 sshd-session[266278]: Connection closed by authenticating user root 193.32.162.157 port 43648 [preauth]
Dec 03 01:29:47 compute-0 ceph-mon[192821]: pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:47 compute-0 sudo[267148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:47 compute-0 sudo[267148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:47 compute-0 sudo[267148]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:47 compute-0 sudo[267173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:29:47 compute-0 sudo[267173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:47 compute-0 sudo[267173]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:47 compute-0 sudo[267144]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:47 compute-0 sudo[267198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:47 compute-0 sudo[267198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:47 compute-0 sudo[267198]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:48 compute-0 sudo[267247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 01:29:48 compute-0 sudo[267247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:48 compute-0 sshd-session[267069]: Received disconnect from 103.146.202.174 port 42780:11: Bye Bye [preauth]
Dec 03 01:29:48 compute-0 sshd-session[267069]: Disconnected from authenticating user root 103.146.202.174 port 42780 [preauth]
Dec 03 01:29:48 compute-0 sudo[267247]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:29:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:29:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:48 compute-0 sudo[267291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:48 compute-0 sudo[267291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:48 compute-0 sudo[267291]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 6740 writes, 28K keys, 6740 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 6740 writes, 1152 syncs, 5.85 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 6740 writes, 28K keys, 6740 commit groups, 1.0 writes per commit group, ingest: 19.56 MB, 0.03 MB/s
                                            Interval WAL: 6740 writes, 1152 syncs, 5.85 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 03 01:29:48 compute-0 sudo[267316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:29:48 compute-0 sudo[267316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:48 compute-0 sudo[267316]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:48 compute-0 sudo[267343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:48 compute-0 sudo[267343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:48 compute-0 sudo[267343]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:48 compute-0 sudo[267401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:29:48 compute-0 sudo[267401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:49 compute-0 ceph-mon[192821]: pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:49 compute-0 sudo[267535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdhuquihwnfgyoysudlxeemdbldcordx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725388.7880921-67-273663597388856/AnsiballZ_setup.py'
Dec 03 01:29:49 compute-0 sudo[267535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:49 compute-0 sudo[267401]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 77188929-5e1a-43fa-b4e5-c090c027db3b does not exist
Dec 03 01:29:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1ff8441e-4827-460f-ad89-cf9ee895dd67 does not exist
Dec 03 01:29:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a624c58-f189-4463-979a-56a7e035f58d does not exist
Dec 03 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:29:49 compute-0 sudo[267551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:49 compute-0 sudo[267551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:49 compute-0 sudo[267551]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:49 compute-0 python3.9[267546]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:29:49 compute-0 sudo[267576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:29:49 compute-0 sudo[267576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:49 compute-0 sudo[267576]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:49 compute-0 sudo[267608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:49 compute-0 sudo[267608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:49 compute-0 sudo[267608]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:50 compute-0 sudo[267633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:29:50 compute-0 sudo[267633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:50 compute-0 sudo[267535]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.602897204 +0000 UTC m=+0.087465766 container create 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.572058171 +0000 UTC m=+0.056626773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:29:50 compute-0 systemd[1]: Started libpod-conmon-94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95.scope.
Dec 03 01:29:50 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.768624589 +0000 UTC m=+0.253193201 container init 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.786896672 +0000 UTC m=+0.271465234 container start 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.79417761 +0000 UTC m=+0.278746212 container attach 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:29:50 compute-0 condescending_tu[267733]: 167 167
Dec 03 01:29:50 compute-0 systemd[1]: libpod-94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95.scope: Deactivated successfully.
Dec 03 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.803695093 +0000 UTC m=+0.288263655 container died 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 01:29:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d409ac54cd9eb25e9484786f32ca02089076a3be4dd9b0aa3dc531419e485d-merged.mount: Deactivated successfully.
Dec 03 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.882318974 +0000 UTC m=+0.366887546 container remove 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:29:50 compute-0 systemd[1]: libpod-conmon-94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95.scope: Deactivated successfully.
Dec 03 01:29:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:51 compute-0 sudo[267804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zobzthrdpujdnkyqnowtpififikanooq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725388.7880921-67-273663597388856/AnsiballZ_dnf.py'
Dec 03 01:29:51 compute-0 sudo[267804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.21574915 +0000 UTC m=+0.099720316 container create b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.176191958 +0000 UTC m=+0.060163174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:29:51 compute-0 systemd[1]: Started libpod-conmon-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope.
Dec 03 01:29:51 compute-0 python3.9[267806]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:29:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:51 compute-0 ceph-mon[192821]: pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.493353888 +0000 UTC m=+0.377325104 container init b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.514746261 +0000 UTC m=+0.398717437 container start b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.521207566 +0000 UTC m=+0.405178792 container attach b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:29:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:52 compute-0 sudo[267804]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:52 compute-0 vibrant_buck[267830]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:29:52 compute-0 vibrant_buck[267830]: --> relative data size: 1.0
Dec 03 01:29:52 compute-0 vibrant_buck[267830]: --> All data devices are unavailable
Dec 03 01:29:52 compute-0 systemd[1]: libpod-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope: Deactivated successfully.
Dec 03 01:29:52 compute-0 systemd[1]: libpod-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope: Consumed 1.255s CPU time.
Dec 03 01:29:52 compute-0 podman[267812]: 2025-12-03 01:29:52.844971901 +0000 UTC m=+1.728943077 container died b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:29:52 compute-0 podman[267878]: 2025-12-03 01:29:52.868340342 +0000 UTC m=+0.116070637 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543)
Dec 03 01:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9-merged.mount: Deactivated successfully.
Dec 03 01:29:52 compute-0 podman[267812]: 2025-12-03 01:29:52.947616417 +0000 UTC m=+1.831587593 container remove b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 01:29:52 compute-0 systemd[1]: libpod-conmon-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope: Deactivated successfully.
Dec 03 01:29:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:52 compute-0 sudo[267633]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:53 compute-0 sudo[267940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:53 compute-0 sudo[267940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:53 compute-0 sudo[267940]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:53 compute-0 sudo[267993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:29:53 compute-0 sudo[267993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:53 compute-0 sudo[267993]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:53 compute-0 sudo[268018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:53 compute-0 sudo[268018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:53 compute-0 sudo[268018]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:53 compute-0 sudo[268043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:29:53 compute-0 sudo[268043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:53 compute-0 sudo[268171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldufopzokkszebtfhwykbqauhughihox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725392.984864-79-77460086322483/AnsiballZ_systemd.py'
Dec 03 01:29:53 compute-0 sudo[268171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.039997826 +0000 UTC m=+0.086012156 container create beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:29:54 compute-0 ceph-mon[192821]: pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.002118474 +0000 UTC m=+0.048132854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:29:54 compute-0 systemd[1]: Started libpod-conmon-beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a.scope.
Dec 03 01:29:54 compute-0 python3.9[268177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:29:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.222058621 +0000 UTC m=+0.268072991 container init beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.239810024 +0000 UTC m=+0.285824344 container start beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.246475443 +0000 UTC m=+0.292489813 container attach beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:29:54 compute-0 optimistic_jepsen[268196]: 167 167
Dec 03 01:29:54 compute-0 systemd[1]: libpod-beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a.scope: Deactivated successfully.
Dec 03 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.250721023 +0000 UTC m=+0.296735383 container died beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcc482fe3e923d8924d99d47db184da12a06189a235f15a8baeb8bc0faa1ab9a-merged.mount: Deactivated successfully.
Dec 03 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.331984214 +0000 UTC m=+0.377998514 container remove beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:29:54 compute-0 sudo[268171]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:54 compute-0 systemd[1]: libpod-conmon-beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a.scope: Deactivated successfully.
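[editor note] The create -> init -> start -> attach -> died -> remove sequence above is the footprint of a short-lived cephadm helper container: the ceph image is launched to run one command and is then discarded. The single line of output, "167 167", is consistent with a uid/gid probe for the ceph user (167 is the conventional ceph uid/gid on RHEL-family systems), though the exact command is not recorded here. A minimal sketch of the one-shot pattern, under that assumption:

import subprocess

# Hypothetical one-shot helper run; cephadm's actual podman flags differ in detail
# (e.g. the bind mounts a real ceph-volume invocation would need).
image = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
result = subprocess.run(
    ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)  # expected to print something like "167 167"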
Dec 03 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.606866767 +0000 UTC m=+0.085362408 container create 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.573958885 +0000 UTC m=+0.052454616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:29:54 compute-0 systemd[1]: Started libpod-conmon-6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e.scope.
Dec 03 01:29:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
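[editor note] The kernel notes that these XFS-backed overlay mounts only support timestamps up to 0x7fffffff seconds, i.e. the signed 32-bit time_t limit. Decoding that constant confirms the 2038 date:

from datetime import datetime, timezone

limit = 0x7fffffff  # 2147483647, the largest signed 32-bit epoch second
print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00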
Dec 03 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.746789429 +0000 UTC m=+0.225285110 container init 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.772664242 +0000 UTC m=+0.251159873 container start 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.780617147 +0000 UTC m=+0.259112828 container attach 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:29:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:55 compute-0 sudo[268392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpigxghbcxmzctjlhloylrhnwlqjkqmt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764725394.6479614-87-120486384790422/AnsiballZ_edpm_nftables_snippet.py'
Dec 03 01:29:55 compute-0 sudo[268392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]: {
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:     "0": [
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:         {
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "devices": [
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "/dev/loop3"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             ],
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_name": "ceph_lv0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_size": "21470642176",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "name": "ceph_lv0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "tags": {
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cluster_name": "ceph",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.crush_device_class": "",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.encrypted": "0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osd_id": "0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.type": "block",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.vdo": "0"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             },
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "type": "block",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "vg_name": "ceph_vg0"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:         }
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:     ],
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:     "1": [
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:         {
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "devices": [
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "/dev/loop4"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             ],
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_name": "ceph_lv1",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_size": "21470642176",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "name": "ceph_lv1",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "tags": {
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cluster_name": "ceph",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.crush_device_class": "",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.encrypted": "0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osd_id": "1",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.type": "block",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.vdo": "0"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             },
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "type": "block",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "vg_name": "ceph_vg1"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:         }
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:     ],
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:     "2": [
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:         {
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "devices": [
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "/dev/loop5"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             ],
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_name": "ceph_lv2",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_size": "21470642176",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "name": "ceph_lv2",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "tags": {
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.cluster_name": "ceph",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.crush_device_class": "",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.encrypted": "0",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osd_id": "2",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.type": "block",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:                 "ceph.vdo": "0"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             },
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "type": "block",
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:             "vg_name": "ceph_vg2"
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:         }
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]:     ]
Dec 03 01:29:55 compute-0 vibrant_hamilton[268289]: }
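[editor note] The JSON printed by the vibrant_hamilton helper above maps each OSD id to its backing logical volume. A sketch of how a caller might reduce that payload to an osd-id / device table, with field names taken from the output itself and the payload abridged to one OSD for brevity:

import json

# `payload` stands in for the full JSON block printed by the helper container above.
payload = """{"0": [{"devices": ["/dev/loop3"],
                     "lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "tags": {"ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}}]}"""

for osd_id, lvs in json.loads(payload).items():
    for lv in lvs:
        print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])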
Dec 03 01:29:55 compute-0 python3[268394]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
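[editor note] The edpm_nftables_snippet invocation above appends four rule descriptions to /var/lib/edpm-config/firewall/ovn.yaml: an accept for VXLAN (udp/4789), an accept for Geneve (udp/6081) restricted to UNTRACKED state, and two raw-table NOTRACK rules (chains OUTPUT and PREROUTING) that exempt Geneve traffic from connection tracking. As a hedged illustration of the intent only (the module's actual template is not shown in this log), the NOTRACK entries correspond to nft rules along these lines:

# Illustrative only: renders the two raw-table rule dicts from the snippet
# above into plain `nft` commands; edpm's real output may differ.
notrack_rules = [
    {"table": "raw", "chain": "OUTPUT", "proto": "udp", "dport": 6081},
    {"table": "raw", "chain": "PREROUTING", "proto": "udp", "dport": 6081},
]
for r in notrack_rules:
    print(f"nft add rule ip {r['table']} {r['chain']} "
          f"{r['proto']} dport {r['dport']} notrack")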
Dec 03 01:29:55 compute-0 sudo[268392]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:55 compute-0 systemd[1]: libpod-6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e.scope: Deactivated successfully.
Dec 03 01:29:55 compute-0 podman[268399]: 2025-12-03 01:29:55.737641824 +0000 UTC m=+0.084134843 container died 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a-merged.mount: Deactivated successfully.
Dec 03 01:29:55 compute-0 podman[268399]: 2025-12-03 01:29:55.838951263 +0000 UTC m=+0.185444282 container remove 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:29:55 compute-0 systemd[1]: libpod-conmon-6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e.scope: Deactivated successfully.
Dec 03 01:29:55 compute-0 sudo[268043]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 5531 writes, 24K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 5531 writes, 820 syncs, 6.75 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 5531 writes, 24K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s
                                            Interval WAL: 5531 writes, 820 syncs, 6.75 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 03 01:29:56 compute-0 sudo[268459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:56 compute-0 sudo[268459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:56 compute-0 sudo[268459]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:56 compute-0 ceph-mon[192821]: pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:56 compute-0 sudo[268513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:29:56 compute-0 sudo[268513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:56 compute-0 sudo[268513]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:56 compute-0 sudo[268561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:56 compute-0 sudo[268561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:56 compute-0 sudo[268561]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:56 compute-0 sudo[268609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:29:56 compute-0 sudo[268609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:56 compute-0 sudo[268661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whqokcboqpibpjwoxzzdzmnukifvoofh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725395.9545321-96-29858208562493/AnsiballZ_file.py'
Dec 03 01:29:56 compute-0 sudo[268661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:56 compute-0 python3.9[268663]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:56 compute-0 sudo[268661]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.055972572 +0000 UTC m=+0.085889113 container create 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:29:57 compute-0 systemd[1]: Started libpod-conmon-593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a.scope.
Dec 03 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.023218684 +0000 UTC m=+0.053135315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:29:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:29:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.183549414 +0000 UTC m=+0.213465965 container init 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 03 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.196800019 +0000 UTC m=+0.226716610 container start 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.202603214 +0000 UTC m=+0.232519805 container attach 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:29:57 compute-0 gallant_mclaren[268756]: 167 167
Dec 03 01:29:57 compute-0 systemd[1]: libpod-593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a.scope: Deactivated successfully.
Dec 03 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.204984781 +0000 UTC m=+0.234901352 container died 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f03423c4dba93152571e4add6636579ce1afe19dc074f12b43ae158226579795-merged.mount: Deactivated successfully.
Dec 03 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.255687027 +0000 UTC m=+0.285603588 container remove 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:29:57 compute-0 systemd[1]: libpod-conmon-593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a.scope: Deactivated successfully.
Dec 03 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.487623714 +0000 UTC m=+0.091300186 container create 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.450232245 +0000 UTC m=+0.053908747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:29:57 compute-0 systemd[1]: Started libpod-conmon-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope.
Dec 03 01:29:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.658333357 +0000 UTC m=+0.262009849 container init 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.690331894 +0000 UTC m=+0.294008336 container start 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.694827841 +0000 UTC m=+0.298504283 container attach 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:29:57 compute-0 podman[268857]: 2025-12-03 01:29:57.722162935 +0000 UTC m=+0.117721244 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:29:57 compute-0 sudo[268933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exshmhcqbuysegpdeldmcmfmkmxdhwds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725397.1615899-104-243092235736433/AnsiballZ_stat.py'
Dec 03 01:29:57 compute-0 sudo[268933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:58 compute-0 python3.9[268935]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:29:58 compute-0 sudo[268933]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:58 compute-0 ceph-mon[192821]: pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:29:58 compute-0 sudo[269022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omvqostmzflenjjzporensylukzvcvtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725397.1615899-104-243092235736433/AnsiballZ_file.py'
Dec 03 01:29:58 compute-0 sudo[269022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:29:58 compute-0 sshd-session[267147]: Connection closed by authenticating user root 193.32.162.157 port 56576 [preauth]
Dec 03 01:29:58 compute-0 focused_blackburn[268858]: {
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "osd_id": 2,
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "type": "bluestore"
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:     },
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "osd_id": 1,
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "type": "bluestore"
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:     },
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "osd_id": 0,
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:         "type": "bluestore"
Dec 03 01:29:58 compute-0 focused_blackburn[268858]:     }
Dec 03 01:29:58 compute-0 focused_blackburn[268858]: }
Dec 03 01:29:58 compute-0 python3.9[269027]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:29:58 compute-0 systemd[1]: libpod-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope: Deactivated successfully.
Dec 03 01:29:58 compute-0 systemd[1]: libpod-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope: Consumed 1.159s CPU time.
Dec 03 01:29:58 compute-0 sudo[269022]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:58 compute-0 podman[269042]: 2025-12-03 01:29:58.949347432 +0000 UTC m=+0.069642713 container died 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 01:29:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b-merged.mount: Deactivated successfully.
Dec 03 01:29:59 compute-0 podman[269042]: 2025-12-03 01:29:59.038004893 +0000 UTC m=+0.158300084 container remove 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:29:59 compute-0 systemd[1]: libpod-conmon-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope: Deactivated successfully.
Dec 03 01:29:59 compute-0 sudo[268609]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:29:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:29:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:29:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f64b5e18-8bed-4c71-97c8-1e4312658062 does not exist
Dec 03 01:29:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 714842c2-68c1-4062-869d-f261ed6a0f2f does not exist
Dec 03 01:29:59 compute-0 sudo[269111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:29:59 compute-0 sudo[269111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:59 compute-0 sudo[269111]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:59 compute-0 sudo[269160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:29:59 compute-0 sudo[269160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:29:59 compute-0 sudo[269160]: pam_unix(sudo:session): session closed for user root
Dec 03 01:29:59 compute-0 podman[158098]: time="2025-12-03T01:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6832 "" "Go-http-client/1.1"
Dec 03 01:30:00 compute-0 ceph-mon[192821]: pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:30:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:30:00 compute-0 sudo[269257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lopcllutbzylrjuoofmmhoslrqotcmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725399.0859966-116-183282279437456/AnsiballZ_stat.py'
Dec 03 01:30:00 compute-0 sudo[269257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:00 compute-0 python3.9[269259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:00 compute-0 sudo[269257]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:01 compute-0 sudo[269336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nceaenoyutphxvvfddzcxqdpvatazcll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725399.0859966-116-183282279437456/AnsiballZ_file.py'
Dec 03 01:30:01 compute-0 sudo[269336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:01 compute-0 python3.9[269338]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.69n1sqmr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:01 compute-0 sudo[269336]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:30:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:30:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:30:02 compute-0 ceph-mon[192821]: pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:03 compute-0 sudo[269488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmqdivafmskggdxpkiyglhbwxppeiwkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725401.748933-128-134097793612686/AnsiballZ_stat.py'
Dec 03 01:30:03 compute-0 sudo[269488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:03 compute-0 python3.9[269490]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:03 compute-0 sudo[269488]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:03 compute-0 sudo[269566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsnkzutzdzmcwmqfuurwlakcxvpfnxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725401.748933-128-134097793612686/AnsiballZ_file.py'
Dec 03 01:30:03 compute-0 sudo[269566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:04 compute-0 python3.9[269568]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:04 compute-0 sudo[269566]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:04 compute-0 ceph-mon[192821]: pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:05 compute-0 sudo[269718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivxxcqhajmipoymyzqcdmmtrcxriqcps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725404.4417017-141-170781243175335/AnsiballZ_command.py'
Dec 03 01:30:05 compute-0 sudo[269718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:05 compute-0 python3.9[269720]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:30:05 compute-0 sudo[269718]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:06 compute-0 ceph-mon[192821]: pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:06 compute-0 sudo[269871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyesfqueuzaylrtazlvwwqlscdvayxsw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764725405.7890165-149-116780734299170/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:30:06 compute-0 sudo[269871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:06 compute-0 python3[269873]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:30:06 compute-0 sudo[269871]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:07 compute-0 sudo[270023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttwvrsqmcqanblupnsutrexzmtinjmsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725407.146207-157-22840052641416/AnsiballZ_stat.py'
Dec 03 01:30:07 compute-0 sudo[270023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:07 compute-0 python3.9[270025]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:07 compute-0 sudo[270023]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:08 compute-0 ceph-mon[192821]: pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:08 compute-0 sudo[270101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrsdyslbibudbcfjhmioyvtldtynmjvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725407.146207-157-22840052641416/AnsiballZ_file.py'
Dec 03 01:30:08 compute-0 sudo[270101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:08 compute-0 python3.9[270103]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:08 compute-0 sudo[270101]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:09 compute-0 sudo[270253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncciwjzopdvgtcqywwpouqmshhztaqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725408.9727821-169-1383414324223/AnsiballZ_stat.py'
Dec 03 01:30:09 compute-0 sudo[270253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:09 compute-0 python3.9[270255]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:09 compute-0 sudo[270253]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:10 compute-0 ceph-mon[192821]: pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:10 compute-0 sudo[270331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kszspblvooahrsmfthmkmioalqbrkvpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725408.9727821-169-1383414324223/AnsiballZ_file.py'
Dec 03 01:30:10 compute-0 sudo[270331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:10 compute-0 sshd-session[269078]: Connection closed by authenticating user root 193.32.162.157 port 52684 [preauth]
Dec 03 01:30:10 compute-0 python3.9[270333]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:10 compute-0 sudo[270331]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:11 compute-0 ceph-mon[192821]: pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:11 compute-0 sudo[270484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtbddrdaoxnkgwuatlsoexkmksuwzwif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725410.7553923-181-188534386507895/AnsiballZ_stat.py'
Dec 03 01:30:11 compute-0 sudo[270484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:11 compute-0 python3.9[270486]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:11 compute-0 sudo[270484]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:12 compute-0 sudo[270562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slffolvvujerhydmgmzfwegbojazqdwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725410.7553923-181-188534386507895/AnsiballZ_file.py'
Dec 03 01:30:12 compute-0 sudo[270562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:12 compute-0 python3.9[270564]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:12 compute-0 sudo[270562]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:12 compute-0 sshd-session[270565]: Invalid user openbravo from 80.253.31.232 port 52966
Dec 03 01:30:12 compute-0 sshd-session[270565]: Received disconnect from 80.253.31.232 port 52966:11: Bye Bye [preauth]
Dec 03 01:30:12 compute-0 sshd-session[270565]: Disconnected from invalid user openbravo 80.253.31.232 port 52966 [preauth]
Dec 03 01:30:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:13 compute-0 sudo[270764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfbnjhsrtseidzijbwyjixxubaroooet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725413.1995282-193-76169445554155/AnsiballZ_stat.py'
Dec 03 01:30:13 compute-0 sudo[270764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:13 compute-0 podman[270698]: 2025-12-03 01:30:13.820252924 +0000 UTC m=+0.103522922 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:30:13 compute-0 podman[270692]: 2025-12-03 01:30:13.823856666 +0000 UTC m=+0.108179284 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:30:13 compute-0 podman[270696]: 2025-12-03 01:30:13.845147059 +0000 UTC m=+0.133533972 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 01:30:13 compute-0 podman[270701]: 2025-12-03 01:30:13.882372993 +0000 UTC m=+0.153741624 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
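The podman health_status lines above are periodic healthcheck runs: podman executes the command configured under each container's healthcheck key (for example /openstack/healthcheck compute) and journals the result, with health_failing_streak counting consecutive failures. The same check can be triggered by hand; a small sketch (container name from the log; the inspect field name varies slightly across podman releases):

    podman healthcheck run ceilometer_agent_compute && echo healthy
    podman inspect --format '{{.State.Health.Status}}' ceilometer_agent_compute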
Dec 03 01:30:13 compute-0 python3.9[270792]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:14 compute-0 sudo[270764]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:14 compute-0 ceph-mon[192821]: pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:14 compute-0 sudo[270882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thinrttogimlotimfgedfciffflednyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725413.1995282-193-76169445554155/AnsiballZ_file.py'
Dec 03 01:30:14 compute-0 sudo[270882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:14 compute-0 python3.9[270884]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:14 compute-0 sudo[270882]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:15 compute-0 podman[270936]: 2025-12-03 01:30:15.829100854 +0000 UTC m=+0.085748589 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:30:16 compute-0 ceph-mon[192821]: pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:16 compute-0 sudo[271052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwbrzemdqvzcllqptzxzrxlfvylsyzyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725415.6782455-205-43749648242229/AnsiballZ_stat.py'
Dec 03 01:30:16 compute-0 sudo[271052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:16 compute-0 python3.9[271054]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:16 compute-0 sudo[271052]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:17 compute-0 sudo[271130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbdyyjsfnuyauqjgtdvdtufrcxudnbnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725415.6782455-205-43749648242229/AnsiballZ_file.py'
Dec 03 01:30:17 compute-0 sudo[271130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:17 compute-0 python3.9[271132]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:17 compute-0 sudo[271130]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:18 compute-0 ceph-mon[192821]: pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:18 compute-0 sudo[271282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwgqnswkhxvakktqxyeozoxuqiupakzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725417.805729-218-128671041977909/AnsiballZ_command.py'
Dec 03 01:30:18 compute-0 sudo[271282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:18 compute-0 python3.9[271284]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
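The command task above is the validation step for the generated firewall fragments: the five EDPM nftables files are concatenated in load order and fed to nft in check mode, so syntax or reference errors surface before anything touches the live ruleset. Restated as a standalone sketch (file names from the log; -c = check/dry-run):

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -

Only after this check passes is a fragment loaded for real (the nft -f call on the chains file at 01:30:21 below).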
Dec 03 01:30:18 compute-0 sudo[271282]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:19 compute-0 sudo[271438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adlnbvaodgmgochobszryzzjbiolftgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725418.9662967-226-56832770814333/AnsiballZ_blockinfile.py'
Dec 03 01:30:19 compute-0 sudo[271438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:19 compute-0 python3.9[271440]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
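The blockinfile task above splices the include lines into /etc/sysconfig/nftables.conf between ANSIBLE MANAGED BLOCK markers and runs the validate command (nft -c -f %s) against the candidate file before moving it into place. Given the marker settings shown, the managed section of the file would look roughly like this (illustrative; the rest of the file is left untouched):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK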
Dec 03 01:30:19 compute-0 sudo[271438]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:20 compute-0 ceph-mon[192821]: pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:21 compute-0 sudo[271592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aykkkpggriwrzqtzkdwcmnxmqfdjtntm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725420.4194121-235-38399771952265/AnsiballZ_command.py'
Dec 03 01:30:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:21 compute-0 sudo[271592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:21 compute-0 sshd-session[271441]: Invalid user guest from 173.249.50.59 port 36360
Dec 03 01:30:21 compute-0 python3.9[271594]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:30:21 compute-0 sshd-session[271441]: Received disconnect from 173.249.50.59 port 36360:11: Bye Bye [preauth]
Dec 03 01:30:21 compute-0 sshd-session[271441]: Disconnected from invalid user guest 173.249.50.59 port 36360 [preauth]
Dec 03 01:30:21 compute-0 sudo[271592]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:21 compute-0 sshd-session[271596]: Received disconnect from 34.66.72.251 port 41826:11: Bye Bye [preauth]
Dec 03 01:30:21 compute-0 sshd-session[271596]: Disconnected from authenticating user root 34.66.72.251 port 41826 [preauth]
Dec 03 01:30:22 compute-0 sshd-session[270334]: Connection closed by authenticating user root 193.32.162.157 port 51756 [preauth]
Dec 03 01:30:22 compute-0 sudo[271747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kohrjzbsvgjlxqzsvmhtvhducjqsuvhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725421.543197-243-64629346155030/AnsiballZ_stat.py'
Dec 03 01:30:22 compute-0 sudo[271747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:22 compute-0 ceph-mon[192821]: pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:22 compute-0 python3.9[271749]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:30:22 compute-0 sudo[271747]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:23 compute-0 sudo[271917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itmbxgzdqhaedlwzfqqabvzhufbgjniv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725422.5884166-252-256386975372187/AnsiballZ_file.py'
Dec 03 01:30:23 compute-0 sudo[271917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:23 compute-0 podman[271874]: 2025-12-03 01:30:23.170203353 +0000 UTC m=+0.133588494 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, name=ubi9, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 01:30:23 compute-0 python3.9[271922]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
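The stat task at 01:30:22 and the file task above form a change-marker handshake: a marker file (edpm-rules.nft.changed) is expected to be created when the rules file is regenerated, and it is removed here (state=absent) once the new ruleset has been acted on. The idempotent shape of the pattern, sketched in shell (marker path from the log; the reload step is an assumption about what the marker gates):

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        nft -f /etc/nftables/edpm-rules.nft          # act on the regenerated rules (assumed)
        rm -f /etc/nftables/edpm-rules.nft.changed   # consume the marker
    fi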
Dec 03 01:30:23 compute-0 sudo[271917]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:24 compute-0 ceph-mon[192821]: pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:25 compute-0 python3.9[272073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:30:25 compute-0 ceph-mon[192821]: pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:27 compute-0 sudo[272224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oviiigxfhjcmykajbwjclxeflspudymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725426.6930923-292-140321407484783/AnsiballZ_command.py'
Dec 03 01:30:27 compute-0 sudo[272224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:27 compute-0 python3.9[272226]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:30:27 compute-0 ovs-vsctl[272227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
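The ovs-vsctl call above (echoed by ovs-vsctl's own INFO line) writes the OVN chassis configuration into the external_ids column of the local Open_vSwitch row: encap IP and type, the southbound DB endpoint, bridge and chassis-MAC mappings, and probe intervals. Individual keys can be read back the same way; a short sketch (key names and values from the log):

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote     # ssl:ovsdbserver-sb.openstack.svc:6642
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip   # 172.19.0.100
    ovs-vsctl list Open_vSwitch .                            # full row, all external_ids at once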
Dec 03 01:30:27 compute-0 sudo[272224]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:28 compute-0 ceph-mon[192821]: pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:30:28
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:30:28 compute-0 podman[272327]: 2025-12-03 01:30:28.856118847 +0000 UTC m=+0.109300016 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:30:28 compute-0 sudo[272400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skrlnkvvpjugkbipophghuigonvdraoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725427.8787029-301-164625237057641/AnsiballZ_command.py'
Dec 03 01:30:28 compute-0 sudo[272400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:29 compute-0 sshd-session[271750]: Invalid user steam from 193.32.162.157 port 50274
Dec 03 01:30:29 compute-0 python3.9[272402]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
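The pipeline above only inspects grep's exit status: it answers "is any Manager record configured on the local OVSDB?" without consuming the output. A more direct form of the same probe, assuming a stock Open vSwitch install:

    ovs-vsctl get-manager                                # prints configured manager targets, empty if none
    ovs-vsctl show | grep -q "Manager" && echo manager-configured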
Dec 03 01:30:29 compute-0 sudo[272400]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:29 compute-0 podman[158098]: time="2025-12-03T01:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6838 "" "Go-http-client/1.1"
Dec 03 01:30:30 compute-0 ceph-mon[192821]: pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:30 compute-0 python3.9[272556]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:30:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:31 compute-0 sudo[272708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxuelvndworjsrpaeeexgkrwmkwlcwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725430.7475305-319-174007301626667/AnsiballZ_file.py'
Dec 03 01:30:31 compute-0 sudo[272708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
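All four exporter errors above share one root cause: the control sockets the collector dials are absent. On a compute node ovn-northd does not run at all, and the ovsdb-server/datapath queries fail unless the corresponding *.ctl files exist under the runtime directories mounted into the container. A quick host-side check (directories taken from the container's volume list; socket presence is environment-dependent):

    ls /var/run/openvswitch/*.ctl 2>/dev/null      # ovsdb-server / ovs-vswitchd control sockets
    ls /var/lib/openvswitch/ovn/*.ctl 2>/dev/null  # OVN sockets, mounted in the container as /run/ovn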
Dec 03 01:30:31 compute-0 python3.9[272710]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:30:31 compute-0 sudo[272708]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:31 compute-0 sshd-session[271750]: Connection closed by invalid user steam 193.32.162.157 port 50274 [preauth]
Dec 03 01:30:32 compute-0 ceph-mon[192821]: pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:32 compute-0 sudo[272861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqakomwenvsgzxwpfsgnhebdtyjsmzhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725431.8295531-327-144703968024873/AnsiballZ_stat.py'
Dec 03 01:30:32 compute-0 sudo[272861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:32 compute-0 python3.9[272863]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:32 compute-0 sudo[272861]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:33 compute-0 sudo[272939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkfypdzkanldkfstxlcnjjrguqydfhyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725431.8295531-327-144703968024873/AnsiballZ_file.py'
Dec 03 01:30:33 compute-0 sudo[272939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:33 compute-0 python3.9[272941]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:30:33 compute-0 sudo[272939]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:34 compute-0 sudo[273092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhhmhyklqllkitnrgfbxyhvgysiawaun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725433.5713956-327-51426526791430/AnsiballZ_stat.py'
Dec 03 01:30:34 compute-0 sudo[273092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:34 compute-0 ceph-mon[192821]: pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:34 compute-0 python3.9[273094]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:34 compute-0 sudo[273092]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:34 compute-0 sudo[273170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iezaajdawwkyxaesefwxheehugzqpdaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725433.5713956-327-51426526791430/AnsiballZ_file.py'
Dec 03 01:30:34 compute-0 sudo[273170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:35 compute-0 python3.9[273172]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:30:35 compute-0 sudo[273170]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:35 compute-0 sudo[273322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvxscajelpcdhnjwnhaulorbmrjhmuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725435.3613963-350-164941019708540/AnsiballZ_file.py'
Dec 03 01:30:35 compute-0 sudo[273322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:36 compute-0 python3.9[273324]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:36 compute-0 sudo[273322]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:36 compute-0 ceph-mon[192821]: pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:37 compute-0 sudo[273474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjjkxwrwmlkyaycilebzeozlwezwvlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725436.495928-358-31786628759311/AnsiballZ_stat.py'
Dec 03 01:30:37 compute-0 sudo[273474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:37 compute-0 python3.9[273476]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:37 compute-0 sudo[273474]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:37 compute-0 sudo[273552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wypkvokljgwvfesjjnciwyqypjhxkdod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725436.495928-358-31786628759311/AnsiballZ_file.py'
Dec 03 01:30:37 compute-0 sudo[273552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:30:38 compute-0 python3.9[273554]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:38 compute-0 sudo[273552]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:38 compute-0 ceph-mon[192821]: pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:38 compute-0 sudo[273704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puzardndaybcvxmerwhvgekfmkryhdsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725438.3424206-370-221535054845283/AnsiballZ_stat.py'
Dec 03 01:30:38 compute-0 sudo[273704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:39 compute-0 python3.9[273706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:39 compute-0 sudo[273704]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:40 compute-0 ceph-mon[192821]: pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:40 compute-0 sudo[273782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfnzfbsvbqcxmygurogvjzshhohjjdes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725438.3424206-370-221535054845283/AnsiballZ_file.py'
Dec 03 01:30:40 compute-0 sudo[273782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:40 compute-0 python3.9[273784]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:40 compute-0 sudo[273782]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:41 compute-0 sudo[273934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voymzyzihfnyrnzlizbjbzxxbqodxinf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725440.7831452-382-204545442767855/AnsiballZ_systemd.py'
Dec 03 01:30:41 compute-0 sudo[273934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:41 compute-0 python3.9[273936]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
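The systemd task above maps to a daemon-reload (the "Reloading." line that follows) plus enable-and-start of the unit installed in the preceding steps; the 91-edpm-container-shutdown.preset file placed in /etc/systemd/system-preset lets preset-based enablement reach the same state. Manually, the sequence would be roughly:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
    # or, applying the shipped preset policy instead of enabling explicitly:
    systemctl preset edpm-container-shutdown.service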
Dec 03 01:30:41 compute-0 systemd[1]: Reloading.
Dec 03 01:30:41 compute-0 systemd-rc-local-generator[273961]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:30:41 compute-0 systemd-sysv-generator[273965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:30:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:42 compute-0 ceph-mon[192821]: pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:42 compute-0 sudo[273934]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:43 compute-0 sshd-session[272735]: Connection closed by authenticating user root 193.32.162.157 port 35264 [preauth]
Dec 03 01:30:43 compute-0 sudo[274123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiaqhayqpqbpmihiwidiktdgzosghgfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725443.3492963-390-97086920766074/AnsiballZ_stat.py'
Dec 03 01:30:43 compute-0 sudo[274123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:44 compute-0 podman[274125]: 2025-12-03 01:30:44.060702154 +0000 UTC m=+0.106336122 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:30:44 compute-0 podman[274126]: 2025-12-03 01:30:44.072453327 +0000 UTC m=+0.117299992 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 03 01:30:44 compute-0 podman[274127]: 2025-12-03 01:30:44.104401152 +0000 UTC m=+0.141771045 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:30:44 compute-0 podman[274128]: 2025-12-03 01:30:44.13543718 +0000 UTC m=+0.167207155 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 01:30:44 compute-0 python3.9[274139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:44 compute-0 sudo[274123]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:44 compute-0 ceph-mon[192821]: pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:44 compute-0 sudo[274286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkyhoeeusxttmswkvrupiofbxksutad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725443.3492963-390-97086920766074/AnsiballZ_file.py'
Dec 03 01:30:44 compute-0 sudo[274286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:44 compute-0 python3.9[274288]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:44 compute-0 sudo[274286]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:45 compute-0 ceph-mon[192821]: pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:45 compute-0 sudo[274439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrnnpffgveklmffaeyxupnkhlljhtibp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725445.236406-402-164240442313360/AnsiballZ_stat.py'
Dec 03 01:30:45 compute-0 sudo[274439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:45 compute-0 python3.9[274441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:46 compute-0 sudo[274439]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:46 compute-0 sudo[274535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkcftiobvyrwkfsxmkhgzckwygwlhskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725445.236406-402-164240442313360/AnsiballZ_file.py'
Dec 03 01:30:46 compute-0 sudo[274535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:46 compute-0 podman[274491]: 2025-12-03 01:30:46.531864195 +0000 UTC m=+0.136238029 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:30:46 compute-0 python3.9[274540]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:46 compute-0 sudo[274535]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:47 compute-0 sudo[274690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpyigldegyfxogyawqxccjtwfimcyvvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725447.0719337-414-215742964790045/AnsiballZ_systemd.py'
Dec 03 01:30:47 compute-0 sudo[274690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:47 compute-0 python3.9[274692]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:30:48 compute-0 systemd[1]: Reloading.
Dec 03 01:30:48 compute-0 ceph-mon[192821]: pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:48 compute-0 systemd-sysv-generator[274723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:30:48 compute-0 systemd-rc-local-generator[274720]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:30:48 compute-0 systemd[1]: Starting Create netns directory...
Dec 03 01:30:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 03 01:30:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 03 01:30:48 compute-0 systemd[1]: Finished Create netns directory.
Dec 03 01:30:48 compute-0 sudo[274690]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:49 compute-0 sudo[274886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igrmhrtdsfydvlnfcjcpaqlcuthbvjzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725449.0751448-424-53162790407760/AnsiballZ_file.py'
Dec 03 01:30:49 compute-0 sudo[274886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:49 compute-0 python3.9[274888]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:30:49 compute-0 sudo[274886]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:50 compute-0 ceph-mon[192821]: pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:50 compute-0 sudo[275038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eduxdpbgjkevazmkfkbzfolhtjsudcoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725450.1930964-432-164929957793323/AnsiballZ_stat.py'
Dec 03 01:30:50 compute-0 sudo[275038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:50 compute-0 python3.9[275040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:51 compute-0 sudo[275038]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:51 compute-0 sudo[275116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtrnvfajaoiikmonczscfppnhgbinihq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725450.1930964-432-164929957793323/AnsiballZ_file.py'
Dec 03 01:30:51 compute-0 sudo[275116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:51 compute-0 python3.9[275118]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:30:51 compute-0 sudo[275116]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:52 compute-0 ceph-mon[192821]: pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:52 compute-0 sudo[275268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfnunqntwufunkxdjakwigpsvblucrau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725452.334135-446-238672839687099/AnsiballZ_file.py'
Dec 03 01:30:52 compute-0 sudo[275268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:53 compute-0 python3.9[275270]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:30:53 compute-0 sudo[275268]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:53 compute-0 podman[275353]: 2025-12-03 01:30:53.885971002 +0000 UTC m=+0.136203087 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:30:54 compute-0 sudo[275439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlszpiuszhokpautalsmxxnwkwfrdbvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725453.5067284-454-176141875981316/AnsiballZ_stat.py'
Dec 03 01:30:54 compute-0 sudo[275439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:54 compute-0 ceph-mon[192821]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:54 compute-0 python3.9[275441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:30:54 compute-0 sudo[275439]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:54 compute-0 sudo[275517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crfeszkqbzkphvigzhqnibbvaiaxzeoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725453.5067284-454-176141875981316/AnsiballZ_file.py'
Dec 03 01:30:54 compute-0 sudo[275517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:54 compute-0 sshd-session[273997]: Connection closed by authenticating user root 193.32.162.157 port 47504 [preauth]
Dec 03 01:30:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:55 compute-0 python3.9[275519]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.pqao_6ru recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:55 compute-0 sudo[275517]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:56 compute-0 ceph-mon[192821]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:56 compute-0 sudo[275670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oppaujqgutooceaydevqjlgkzqxordkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725455.3428276-466-8575793733735/AnsiballZ_file.py'
Dec 03 01:30:56 compute-0 sudo[275670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:56 compute-0 python3.9[275672]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:30:56 compute-0 sudo[275670]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:30:57 compute-0 sudo[275823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aanyccdlwgqftdxmqrkpjadugntrremm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725457.2120287-474-139863106700916/AnsiballZ_stat.py'
Dec 03 01:30:57 compute-0 sudo[275823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:58 compute-0 sudo[275823]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:58 compute-0 ceph-mon[192821]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:30:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:30:59 compute-0 sudo[275916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xifzuqjiaevulqbvtrncrnatqsjuivuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725457.2120287-474-139863106700916/AnsiballZ_file.py'
Dec 03 01:30:59 compute-0 sudo[275916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:30:59 compute-0 podman[275875]: 2025-12-03 01:30:59.368373453 +0000 UTC m=+0.127273102 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:30:59 compute-0 sudo[275928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:30:59 compute-0 sudo[275928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:30:59 compute-0 sudo[275928]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:59 compute-0 sudo[275916]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:59 compute-0 sudo[275953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:30:59 compute-0 sudo[275953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:30:59 compute-0 sudo[275953]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:59 compute-0 podman[158098]: time="2025-12-03T01:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:30:59 compute-0 sudo[275981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:30:59 compute-0 sudo[275981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:30:59 compute-0 sudo[275981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6841 "" "Go-http-client/1.1"
Dec 03 01:30:59 compute-0 sudo[276027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:30:59 compute-0 sudo[276027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:00 compute-0 ceph-mon[192821]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:00 compute-0 sudo[276027]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:00 compute-0 sudo[276208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uipqiykgqsjaalhwzvvhmzxhddwhqvtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725460.0149424-488-189200428106711/AnsiballZ_container_config_data.py'
Dec 03 01:31:00 compute-0 sudo[276208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:31:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 69677d6e-f16b-4c07-8ca0-9ecabf412c00 does not exist
Dec 03 01:31:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 42b6a4a7-9570-476e-91d3-143310c2e712 does not exist
Dec 03 01:31:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a30475a-429a-445c-bd7e-d65a6f7af71a does not exist
Dec 03 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:31:00 compute-0 sudo[276211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:31:00 compute-0 sudo[276211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:00 compute-0 sudo[276211]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:00 compute-0 python3.9[276210]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 03 01:31:00 compute-0 sudo[276236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:31:00 compute-0 sudo[276208]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:00 compute-0 sudo[276236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:00 compute-0 sudo[276236]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:01 compute-0 sudo[276261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:31:01 compute-0 sudo[276261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:01 compute-0 sudo[276261]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:01 compute-0 sudo[276310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:31:01 compute-0 sudo[276310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.662856017 +0000 UTC m=+0.090648867 container create 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.625682146 +0000 UTC m=+0.053475056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:31:01 compute-0 systemd[1]: Started libpod-conmon-79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8.scope.
Dec 03 01:31:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.805483227 +0000 UTC m=+0.233276127 container init 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.822591743 +0000 UTC m=+0.250384603 container start 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.829162418 +0000 UTC m=+0.256955308 container attach 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:31:01 compute-0 reverent_yalow[276465]: 167 167
Dec 03 01:31:01 compute-0 systemd[1]: libpod-79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8.scope: Deactivated successfully.
Dec 03 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.834799058 +0000 UTC m=+0.262591908 container died 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9c83708454e117e8bedad08339b6c808fff6a87a89ea0765d347c7733e69cb4-merged.mount: Deactivated successfully.
Dec 03 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.918319433 +0000 UTC m=+0.346112253 container remove 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:31:01 compute-0 systemd[1]: libpod-conmon-79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8.scope: Deactivated successfully.
Dec 03 01:31:01 compute-0 sudo[276531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmomqemzvqcslyckyhgukciydkcxumsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725461.2359824-497-135457146079949/AnsiballZ_container_config_hash.py'
Dec 03 01:31:01 compute-0 sudo[276531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.161843772 +0000 UTC m=+0.093149013 container create ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:31:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:02 compute-0 python3.9[276533]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:31:02 compute-0 sudo[276531]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:02 compute-0 ceph-mon[192821]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.123811358 +0000 UTC m=+0.055116689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:31:02 compute-0 systemd[1]: Started libpod-conmon-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope.
Dec 03 01:31:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.333613538 +0000 UTC m=+0.264918889 container init ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.361684766 +0000 UTC m=+0.292990047 container start ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.36858735 +0000 UTC m=+0.299892631 container attach ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:31:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:03 compute-0 sudo[276720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acfwilkcnyirxttuwxufzzquqgicvnyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725462.5856657-506-31824130120969/AnsiballZ_podman_container_info.py'
Dec 03 01:31:03 compute-0 sudo[276720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:03 compute-0 python3.9[276724]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 03 01:31:03 compute-0 kind_turing[276568]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:31:03 compute-0 kind_turing[276568]: --> relative data size: 1.0
Dec 03 01:31:03 compute-0 kind_turing[276568]: --> All data devices are unavailable
Dec 03 01:31:03 compute-0 systemd[1]: libpod-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope: Deactivated successfully.
Dec 03 01:31:03 compute-0 podman[276539]: 2025-12-03 01:31:03.655823767 +0000 UTC m=+1.587129038 container died ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:31:03 compute-0 systemd[1]: libpod-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope: Consumed 1.225s CPU time.
Dec 03 01:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37-merged.mount: Deactivated successfully.
Dec 03 01:31:03 compute-0 podman[276539]: 2025-12-03 01:31:03.758835602 +0000 UTC m=+1.690140843 container remove ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:31:03 compute-0 systemd[1]: libpod-conmon-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope: Deactivated successfully.
Dec 03 01:31:03 compute-0 sudo[276310]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:03 compute-0 sudo[276782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:31:03 compute-0 sudo[276782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:03 compute-0 sudo[276782]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:04 compute-0 sudo[276825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:31:04 compute-0 sudo[276825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:04 compute-0 sudo[276825]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:04 compute-0 sudo[276874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:31:04 compute-0 sudo[276874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:04 compute-0 sudo[276874]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:04 compute-0 ceph-mon[192821]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:04 compute-0 sudo[276913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:31:04 compute-0 sudo[276913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.777581485 +0000 UTC m=+0.068169528 container create 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:31:04 compute-0 systemd[1]: Started libpod-conmon-5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677.scope.
Dec 03 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.749479316 +0000 UTC m=+0.040067359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:31:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.911593575 +0000 UTC m=+0.202181668 container init 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.928574338 +0000 UTC m=+0.219162371 container start 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.935092651 +0000 UTC m=+0.225680724 container attach 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:31:04 compute-0 musing_feistel[277081]: 167 167
Dec 03 01:31:04 compute-0 systemd[1]: libpod-5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677.scope: Deactivated successfully.
Dec 03 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.940036053 +0000 UTC m=+0.230624096 container died 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b4978e871c91f75b6f76ae56950e000897812dbeb2fe900d04d5f0a1df1d59-merged.mount: Deactivated successfully.
Dec 03 01:31:05 compute-0 podman[277046]: 2025-12-03 01:31:05.020789135 +0000 UTC m=+0.311377178 container remove 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:31:05 compute-0 systemd[1]: libpod-conmon-5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677.scope: Deactivated successfully.
Dec 03 01:31:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:05 compute-0 ceph-mon[192821]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.278717607 +0000 UTC m=+0.074663000 container create e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:31:05 compute-0 sudo[276720]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.2472938 +0000 UTC m=+0.043239193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:31:05 compute-0 systemd[1]: Started libpod-conmon-e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b.scope.
Dec 03 01:31:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.462094153 +0000 UTC m=+0.258039526 container init e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.488681501 +0000 UTC m=+0.284626864 container start e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.493354096 +0000 UTC m=+0.289299459 container attach e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:31:06 compute-0 elastic_einstein[277164]: {
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:     "0": [
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:         {
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "devices": [
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "/dev/loop3"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             ],
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_name": "ceph_lv0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_size": "21470642176",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "name": "ceph_lv0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "tags": {
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cluster_name": "ceph",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.crush_device_class": "",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.encrypted": "0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osd_id": "0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.type": "block",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.vdo": "0"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             },
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "type": "block",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "vg_name": "ceph_vg0"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:         }
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:     ],
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:     "1": [
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:         {
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "devices": [
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "/dev/loop4"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             ],
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_name": "ceph_lv1",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_size": "21470642176",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "name": "ceph_lv1",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "tags": {
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cluster_name": "ceph",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.crush_device_class": "",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.encrypted": "0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osd_id": "1",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.type": "block",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.vdo": "0"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             },
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "type": "block",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "vg_name": "ceph_vg1"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:         }
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:     ],
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:     "2": [
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:         {
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "devices": [
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "/dev/loop5"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             ],
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_name": "ceph_lv2",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_size": "21470642176",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "name": "ceph_lv2",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "tags": {
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.cluster_name": "ceph",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.crush_device_class": "",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.encrypted": "0",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osd_id": "2",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.type": "block",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:                 "ceph.vdo": "0"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             },
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "type": "block",
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:             "vg_name": "ceph_vg2"
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:         }
Dec 03 01:31:06 compute-0 elastic_einstein[277164]:     ]
Dec 03 01:31:06 compute-0 elastic_einstein[277164]: }
Dec 03 01:31:06 compute-0 systemd[1]: libpod-e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b.scope: Deactivated successfully.
Dec 03 01:31:06 compute-0 podman[277141]: 2025-12-03 01:31:06.344663688 +0000 UTC m=+1.140609071 container died e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 01:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f-merged.mount: Deactivated successfully.
Dec 03 01:31:06 compute-0 podman[277141]: 2025-12-03 01:31:06.467042229 +0000 UTC m=+1.262987602 container remove e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:31:06 compute-0 systemd[1]: libpod-conmon-e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b.scope: Deactivated successfully.
Dec 03 01:31:06 compute-0 sudo[276913]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:06 compute-0 sudo[277258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:31:06 compute-0 sudo[277258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:06 compute-0 sudo[277258]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:06 compute-0 sshd-session[275520]: Connection closed by authenticating user root 193.32.162.157 port 47350 [preauth]
Dec 03 01:31:06 compute-0 sudo[277283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:31:06 compute-0 sudo[277283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:06 compute-0 sudo[277283]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:06 compute-0 sudo[277331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:31:06 compute-0 sudo[277331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:06 compute-0 sudo[277331]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:07 compute-0 sudo[277381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:31:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:07 compute-0 sudo[277381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:07 compute-0 sudo[277431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfayfgslbdwvnievmntbbtezclejqpvr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764725466.3021157-519-266181197688642/AnsiballZ_edpm_container_manage.py'
Dec 03 01:31:07 compute-0 sudo[277431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:07 compute-0 python3[277434]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.616374382 +0000 UTC m=+0.068084705 container create c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.590827681 +0000 UTC m=+0.042538004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:31:07 compute-0 systemd[1]: Started libpod-conmon-c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae.scope.
Dec 03 01:31:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:31:07 compute-0 python3[277434]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",
                                                     "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",
                                                     "RepoTags": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-12-01T06:38:47.246477714Z",
                                                     "Config": {
                                                          "User": "root",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.3",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251125",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 345722821,
                                                     "VirtualSize": 345722821,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                               "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                               "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",
                                                               "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.3",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251125",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "root",
                                                     "History": [
                                                          {
                                                               "created": "2025-11-25T04:02:36.223494528Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:36.223562059Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:39.054452717Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025707917Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream9",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025744608Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025767729Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025791379Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.02581523Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025867611Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.469442331Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:02.029095017Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:05.672474685Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.113425253Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.532320725Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.370061347Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.805172373Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.259306372Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.625948784Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.028304824Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.423316076Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.801219631Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.239187116Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.70996597Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.147342611Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.5739488Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.006975065Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.421255505Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.066694755Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.475695836Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.8971372Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:18.542651107Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622503041Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622561802Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622578342Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622594423Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:22.080892529Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:22.759131427Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:13:25.258260855Z",
                                                               "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:13:28.025145079Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:38:13.535675197Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:38:47.244104142Z",
                                                               "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:38:48.759416475Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 03 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.746732445 +0000 UTC m=+0.198442768 container init c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.764724914 +0000 UTC m=+0.216435227 container start c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.772025229 +0000 UTC m=+0.223735562 container attach c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 01:31:07 compute-0 magical_sammet[277526]: 167 167
Dec 03 01:31:07 compute-0 systemd[1]: libpod-c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae.scope: Deactivated successfully.
Dec 03 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.777084784 +0000 UTC m=+0.228795077 container died c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-adc70281d91a89ddfd042c66ca4fa4783881778f0e5d97ee4913ff7ca6043d7e-merged.mount: Deactivated successfully.
Dec 03 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.843267817 +0000 UTC m=+0.294978140 container remove c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:31:07 compute-0 systemd[1]: libpod-conmon-c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae.scope: Deactivated successfully.
Dec 03 01:31:07 compute-0 sudo[277431]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:08 compute-0 sshd-session[277215]: Received disconnect from 103.146.202.174 port 42456:11: Bye Bye [preauth]
Dec 03 01:31:08 compute-0 sshd-session[277215]: Disconnected from authenticating user root 103.146.202.174 port 42456 [preauth]
Dec 03 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.086980711 +0000 UTC m=+0.072736089 container create ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:31:08 compute-0 ceph-mon[192821]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.052246875 +0000 UTC m=+0.038002293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:31:08 compute-0 systemd[1]: Started libpod-conmon-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope.
Dec 03 01:31:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.239216397 +0000 UTC m=+0.224971815 container init ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.259756083 +0000 UTC m=+0.245511451 container start ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.268203888 +0000 UTC m=+0.253959306 container attach ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:31:08 compute-0 sudo[277736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfuhkkpowgdjxaxpitrefrpvzrfbqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725468.1927664-527-15266685076825/AnsiballZ_stat.py'
Dec 03 01:31:08 compute-0 sudo[277736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:09 compute-0 python3.9[277738]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:31:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:09 compute-0 sudo[277736]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]: {
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "osd_id": 2,
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "type": "bluestore"
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:     },
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "osd_id": 1,
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "type": "bluestore"
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:     },
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "osd_id": 0,
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:         "type": "bluestore"
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]:     }
Dec 03 01:31:09 compute-0 suspicious_swartz[277624]: }
Dec 03 01:31:09 compute-0 systemd[1]: libpod-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope: Deactivated successfully.
Dec 03 01:31:09 compute-0 systemd[1]: libpod-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope: Consumed 1.140s CPU time.
Dec 03 01:31:09 compute-0 podman[277588]: 2025-12-03 01:31:09.412514977 +0000 UTC m=+1.398270345 container died ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7-merged.mount: Deactivated successfully.
Dec 03 01:31:09 compute-0 podman[277588]: 2025-12-03 01:31:09.525615301 +0000 UTC m=+1.511370639 container remove ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:31:09 compute-0 systemd[1]: libpod-conmon-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope: Deactivated successfully.
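The JSON that suspicious_swartz printed before exiting is the per-OSD inventory this node reports back to the cephadm orchestrator: each top-level key is an osd_uuid whose value ties an osd_id to its backing logical volume and bluestore type. Below is a minimal sketch of reducing such a blob to an osd_id-to-device table; reading it from a file is an illustrative assumption, since in the log it arrives on the container's stdout.

import json

# Hypothetical capture of the container stdout logged above.
with open("ceph_volume_list.json") as fh:
    osds = json.load(fh)

# Each top-level key is an osd_uuid; the value carries the OSD's
# identity and backing device, as in the logged output.
for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{info['osd_id']} type={info['type']} "
          f"device={info['device']} fsid={info['ceph_fsid']}")

Against the logged blob this prints osd.0 through osd.2 with their ceph_vg*/ceph_lv* mapper devices and the shared cluster fsid.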
Dec 03 01:31:09 compute-0 sudo[277381]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:31:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:31:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:31:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:31:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f3aeb102-e175-4a75-9408-22b1a0a3b892 does not exist
Dec 03 01:31:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 580e1067-00a7-4518-909d-3b3508a6e780 does not exist
Dec 03 01:31:09 compute-0 sudo[277859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:31:09 compute-0 sudo[277859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:09 compute-0 sudo[277859]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:09 compute-0 sudo[277912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:31:09 compute-0 sudo[277912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:31:09 compute-0 sudo[277912]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:09 compute-0 sudo[277981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxbluvtwgsowrhdgbhkqauwakakrwlng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725469.4355545-536-20400091180018/AnsiballZ_file.py'
Dec 03 01:31:09 compute-0 sudo[277981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:10 compute-0 ceph-mon[192821]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:31:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:31:10 compute-0 python3.9[277983]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:31:10 compute-0 sudo[277981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:11 compute-0 sudo[278057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpcpncssfsswgqamueonvpexxacnrrgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725469.4355545-536-20400091180018/AnsiballZ_stat.py'
Dec 03 01:31:11 compute-0 sudo[278057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:11 compute-0 python3.9[278059]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:31:11 compute-0 sudo[278057]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:12 compute-0 ceph-mon[192821]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:12 compute-0 sudo[278208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvrtsiukxvjaakvjyrjxacjeaahgtiuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725472.053399-536-212012165253571/AnsiballZ_copy.py'
Dec 03 01:31:12 compute-0 sudo[278208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:13 compute-0 python3.9[278210]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764725472.053399-536-212012165253571/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:31:13 compute-0 sudo[278208]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:14 compute-0 sudo[278284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shooxpykctovqkjvnddytovbhncqdcti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725472.053399-536-212012165253571/AnsiballZ_systemd.py'
Dec 03 01:31:14 compute-0 sudo[278284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:14 compute-0 ceph-mon[192821]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:14 compute-0 python3.9[278286]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:31:14 compute-0 sudo[278284]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:14 compute-0 podman[278288]: 2025-12-03 01:31:14.574818482 +0000 UTC m=+0.114439291 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:31:14 compute-0 podman[278290]: 2025-12-03 01:31:14.575163851 +0000 UTC m=+0.109209381 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 03 01:31:14 compute-0 podman[278289]: 2025-12-03 01:31:14.575992053 +0000 UTC m=+0.114679127 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:31:14 compute-0 podman[278291]: 2025-12-03 01:31:14.6344366 +0000 UTC m=+0.174174542 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
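The four health_status events at 01:31:14 are podman's per-container healthcheck timers firing: each runs the 'test' command recorded in the container's config_data, and a zero exit keeps health_status=healthy with health_failing_streak=0. A minimal way to attach and read such a check is sketched below; the flags are standard podman options, while the demo image and the stand-in 'true' test command are illustrative, not the /openstack/healthcheck scripts the containers above mount.

import subprocess

# Start a demo container with a trivial healthcheck attached.
subprocess.run(
    ["podman", "run", "-d", "--name", "hc_demo",
     "--health-cmd", "true", "--health-interval", "30s",
     "quay.io/centos/centos:stream9", "sleep", "300"],
    check=True,
)
# The health state lives under .State.Health in podman's
# Docker-compatible inspect output.
status = subprocess.run(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}", "hc_demo"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(status)  # "starting" until the first check runs, then "healthy"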
Dec 03 01:31:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:15 compute-0 sudo[278519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgiukbnbksrbmngznwpqehplkxsxrorc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725474.8172903-560-185548391184829/AnsiballZ_command.py'
Dec 03 01:31:15 compute-0 sudo[278519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:15 compute-0 python3.9[278521]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:31:15 compute-0 ovs-vsctl[278522]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 03 01:31:15 compute-0 sudo[278519]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:15 compute-0 sshd-session[277337]: Invalid user admin from 193.32.162.157 port 45848
Dec 03 01:31:16 compute-0 ceph-mon[192821]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:16 compute-0 sudo[278672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrsbgwqbrneewvddcbwkysyrhhbphyxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725475.9552646-568-125964175829898/AnsiballZ_command.py'
Dec 03 01:31:16 compute-0 sudo[278672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:16 compute-0 podman[278674]: 2025-12-03 01:31:16.70922439 +0000 UTC m=+0.113456043 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 01:31:16 compute-0 python3.9[278675]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:31:16 compute-0 ovs-vsctl[278694]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 03 01:31:16 compute-0 sudo[278672]: pam_unix(sudo:session): session closed for user root
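The db_ctl_base error above is expected: ovs-vsctl get fails hard when the requested key is missing from the external_ids map, which is why the playbook wraps it in a shell pipeline and tolerates the failure before the follow-up remove. ovs-vsctl's --if-exists flag makes the read idempotent instead; a minimal sketch, assuming ovs-vsctl is on PATH:

import subprocess

def get_external_id(key):
    """Return an Open_vSwitch external_ids value, or None if unset.

    --if-exists suppresses the 'no key ... in Open_vSwitch record'
    error seen in the log when the key is absent.
    """
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "external_ids:" + key],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.strip('"') or None

print(get_external_id("ovn-cms-options"))  # None on this node, per the log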
Dec 03 01:31:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:17 compute-0 sudo[278845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ammgyobwvtqonplbkdglaxvjchircool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725477.4158897-582-29643273754906/AnsiballZ_command.py'
Dec 03 01:31:17 compute-0 sudo[278845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:18 compute-0 python3.9[278847]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:31:18 compute-0 ceph-mon[192821]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:18 compute-0 ovs-vsctl[278848]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 03 01:31:18 compute-0 sudo[278845]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:18 compute-0 sshd-session[277337]: Connection closed by invalid user admin 193.32.162.157 port 45848 [preauth]
Dec 03 01:31:18 compute-0 sshd-session[266283]: Connection closed by 192.168.122.30 port 40212
Dec 03 01:31:18 compute-0 sshd-session[266280]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:31:18 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 03 01:31:18 compute-0 systemd[1]: session-52.scope: Consumed 1min 14.859s CPU time.
Dec 03 01:31:18 compute-0 systemd-logind[800]: Session 52 logged out. Waiting for processes to exit.
Dec 03 01:31:18 compute-0 systemd-logind[800]: Removed session 52.
Dec 03 01:31:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 03 01:31:20 compute-0 ceph-mon[192821]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 03 01:31:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec 03 01:31:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:22 compute-0 ceph-mon[192821]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec 03 01:31:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec 03 01:31:23 compute-0 ceph-mon[192821]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec 03 01:31:23 compute-0 sshd-session[278876]: Received disconnect from 173.249.50.59 port 34616:11: Bye Bye [preauth]
Dec 03 01:31:23 compute-0 sshd-session[278876]: Disconnected from authenticating user root 173.249.50.59 port 34616 [preauth]
Dec 03 01:31:24 compute-0 sshd-session[278878]: Accepted publickey for zuul from 192.168.122.30 port 41004 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:31:24 compute-0 systemd-logind[800]: New session 53 of user zuul.
Dec 03 01:31:24 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 03 01:31:24 compute-0 sshd-session[278878]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:31:24 compute-0 podman[278880]: 2025-12-03 01:31:24.707297318 +0000 UTC m=+0.134670369 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9)
Dec 03 01:31:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec 03 01:31:25 compute-0 sshd-session[278955]: Invalid user foundry from 34.66.72.251 port 38910
Dec 03 01:31:25 compute-0 sshd-session[278955]: Received disconnect from 34.66.72.251 port 38910:11: Bye Bye [preauth]
Dec 03 01:31:25 compute-0 sshd-session[278955]: Disconnected from invalid user foundry 34.66.72.251 port 38910 [preauth]
Dec 03 01:31:26 compute-0 ceph-mon[192821]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec 03 01:31:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:31:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:27 compute-0 python3.9[279054]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:31:27 compute-0 sshd-session[278873]: Invalid user git from 193.32.162.157 port 37800
Dec 03 01:31:28 compute-0 sshd-session[279055]: Received disconnect from 80.253.31.232 port 47990:11: Bye Bye [preauth]
Dec 03 01:31:28 compute-0 sshd-session[279055]: Disconnected from authenticating user root 80.253.31.232 port 47990 [preauth]
Dec 03 01:31:28 compute-0 ceph-mon[192821]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:31:28
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.meta']
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:31:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:31:29 compute-0 sudo[279210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgzptfnwfqvvjyujpqtcccgadhvxzzmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725487.9395974-34-59411368989327/AnsiballZ_file.py'
Dec 03 01:31:29 compute-0 sudo[279210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:29 compute-0 python3.9[279212]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:29 compute-0 sudo[279210]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:29 compute-0 podman[158098]: time="2025-12-03T01:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6845 "" "Go-http-client/1.1"
Dec 03 01:31:29 compute-0 podman[279260]: 2025-12-03 01:31:29.880201173 +0000 UTC m=+0.123144342 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:31:30 compute-0 sshd-session[278873]: Connection closed by invalid user git 193.32.162.157 port 37800 [preauth]
Dec 03 01:31:30 compute-0 ceph-mon[192821]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:31:30 compute-0 sudo[279386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqqfoipicaeoertkwmklzqfyervftjxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725489.7245126-34-203441622777962/AnsiballZ_file.py'
Dec 03 01:31:30 compute-0 sudo[279386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:30 compute-0 python3.9[279388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:30 compute-0 sudo[279386]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:31:31 compute-0 sudo[279539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urenokgpzxkxnccdhvzxkdzylcyswcju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725490.7280607-34-34368946871016/AnsiballZ_file.py'
Dec 03 01:31:31 compute-0 sudo[279539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:31:31 compute-0 python3.9[279541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:31 compute-0 sudo[279539]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:32 compute-0 ceph-mon[192821]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:31:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:32 compute-0 sudo[279691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvygmlavtxsijchasinggubwfcedcgqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725491.9542816-34-104008574285425/AnsiballZ_file.py'
Dec 03 01:31:32 compute-0 sudo[279691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:32 compute-0 python3.9[279694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:32 compute-0 sudo[279691]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec 03 01:31:33 compute-0 sudo[279844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvhasznnxrwxyjymfpbojqatdansusuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725493.0789614-34-103276556046644/AnsiballZ_file.py'
Dec 03 01:31:33 compute-0 sudo[279844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:33 compute-0 python3.9[279846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:33 compute-0 sudo[279844]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:34 compute-0 ceph-mon[192821]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec 03 01:31:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec 03 01:31:35 compute-0 python3.9[279996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:31:36 compute-0 sudo[280146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmmctxufprcwmvveejxmnyguqxjzwskk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725495.5236857-78-257002189515133/AnsiballZ_seboolean.py'
Dec 03 01:31:36 compute-0 sudo[280146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:36 compute-0 ceph-mon[192821]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec 03 01:31:36 compute-0 python3.9[280148]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 03 01:31:37 compute-0 sudo[280146]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
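Each pg_autoscaler line divides the pool's stored bytes by the cluster's raw capacity (the 64411926528 repeated above), multiplies by the pool's bias, and quantizes the resulting PG target to a power of two; the mgr also honors per-pool minimums (pg_num_min), which is why cephfs.cephfs.meta quantizes to 16 rather than 1, and it leaves pg_num alone unless current and target diverge past a threshold. A rough sketch of that rounding; the threshold value and the exact floor handling are assumptions, not the verbatim mgr logic:

import math

def quantize_pg_target(raw_target, current, pg_min=1, threshold=3.0):
    # Round the biased target to the nearest power of two, floored at
    # pg_min, mirroring the "pg target X quantized to Y" lines above.
    target = max(raw_target, pg_min, 1.0)
    quantized = max(pg_min, 2 ** round(math.log2(target)))
    # Leave pg_num alone unless the quantized target is far from current.
    if max(current, quantized) / max(1, min(current, quantized)) <= threshold:
        return current
    return quantized

print(quantize_pg_target(0.0021557, current=1))            # 1, like '.mgr'
print(quantize_pg_target(0.00061, current=32, pg_min=16))  # stays 32; quantized value is 16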
Dec 03 01:31:38 compute-0 ceph-mon[192821]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 03 01:31:38 compute-0 python3.9[280298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:31:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:39 compute-0 ceph-mon[192821]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:40 compute-0 python3.9[280419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725497.3927703-86-267942322831429/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.971 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.972 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
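Taken together, the DEBUG lines above trace one complete polling cycle: every pollster is registered against a shared ThreadPoolExecutor, discovery runs once per method and is memoized in a per-cycle cache ('local_instances' here), and a pollster is skipped when its discovery returns no resources, which is why each one finishes with an empty history on this idle hypervisor. A condensed sketch of that dispatch pattern; the class and function names are illustrative, not ceilometer's internal API:

    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Pollster:
        name: str          # e.g. "disk.device.capacity"
        discovery: str     # e.g. "local_instances"
        poll: Callable     # resources -> samples

    def run_cycle(pollsters, discover, workers=1):
        discovery_cache = {}   # memoized per cycle, e.g. {'local_instances': []}
        history = {}           # e.g. {'disk.device.capacity': []}

        def run_one(p):
            if p.discovery not in discovery_cache:       # discovery runs once
                discovery_cache[p.discovery] = discover(p.discovery)
            resources = discovery_cache[p.discovery]
            history[p.name] = []
            if not resources:
                print(f"Skip pollster {p.name}, no resources found this cycle")
                return
            history[p.name] = list(p.poll(resources))

        # The log shows [1] worker thread, so pollsters run sequentially.
        with ThreadPoolExecutor(max_workers=workers) as ex:
            for f in [ex.submit(run_one, p) for p in pollsters]:
                f.result()   # "Finished processing pollster [...]"
        return history

    # With no local instances every pollster is skipped, as in the log:
    run_cycle([Pollster("cpu", "local_instances", lambda r: [])],
              discover=lambda method: [], workers=1)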
Dec 03 01:31:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:41 compute-0 sshd-session[279389]: Connection closed by authenticating user daemon 193.32.162.157 port 59066 [preauth]
Dec 03 01:31:42 compute-0 ceph-mon[192821]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:42 compute-0 python3.9[280570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:31:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:43 compute-0 python3.9[280692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725500.620203-101-179203207798632/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:44 compute-0 sudo[280842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smvrqwikqozopkghhwmkxtmczfbocvwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725503.5406861-118-196067171062644/AnsiballZ_setup.py'
Dec 03 01:31:44 compute-0 sudo[280842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:44 compute-0 ceph-mon[192821]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:44 compute-0 python3.9[280844]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:31:44 compute-0 sudo[280842]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:44 compute-0 podman[280854]: 2025-12-03 01:31:44.836605008 +0000 UTC m=+0.110482854 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Dec 03 01:31:44 compute-0 podman[280855]: 2025-12-03 01:31:44.848210268 +0000 UTC m=+0.109297903 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 01:31:44 compute-0 podman[280853]: 2025-12-03 01:31:44.861568514 +0000 UTC m=+0.137161316 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:31:44 compute-0 podman[280856]: 2025-12-03 01:31:44.876231714 +0000 UTC m=+0.130877788 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
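The health_status events above are podman's timer-driven healthchecks: for each container, the 'test' command from its config_data runs inside the container, and exit code 0 is recorded as healthy. The same probe can be fired by hand with the podman CLI; a small wrapper, with the container names taken from the log:

    import subprocess

    def probe(container):
        # 'podman healthcheck run' exits 0 when the configured test passes.
        rc = subprocess.run(["podman", "healthcheck", "run", container]).returncode
        return "healthy" if rc == 0 else "unhealthy"

    for name in ("openstack_network_exporter", "ceilometer_agent_compute",
                 "node_exporter", "ovn_controller"):
        print(name, probe(name))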
Dec 03 01:31:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:45 compute-0 sshd-session[280886]: Invalid user super from 146.190.144.138 port 43920
Dec 03 01:31:45 compute-0 sshd-session[280886]: Received disconnect from 146.190.144.138 port 43920:11: Bye Bye [preauth]
Dec 03 01:31:45 compute-0 sshd-session[280886]: Disconnected from invalid user super 146.190.144.138 port 43920 [preauth]
Dec 03 01:31:45 compute-0 sudo[281016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdtotvngsojshsggnbydntnjmmvvfluq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725503.5406861-118-196067171062644/AnsiballZ_dnf.py'
Dec 03 01:31:45 compute-0 sudo[281016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:45 compute-0 python3.9[281018]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:31:46 compute-0 ceph-mon[192821]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:47 compute-0 sudo[281016]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:47 compute-0 podman[281096]: 2025-12-03 01:31:47.874837249 +0000 UTC m=+0.130735085 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 01:31:48 compute-0 ceph-mon[192821]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:48 compute-0 sudo[281188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unjpemiarnwrtkrzsmfqkicdbacwpycp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725507.3923213-130-155847892795248/AnsiballZ_systemd.py'
Dec 03 01:31:48 compute-0 sudo[281188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:31:48 compute-0 python3.9[281190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:31:48 compute-0 sudo[281188]: pam_unix(sudo:session): session closed for user root
Dec 03 01:31:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:49 compute-0 python3.9[281344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:31:50 compute-0 ceph-mon[192821]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:50 compute-0 python3.9[281465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725509.0543349-138-2978533307805/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:52 compute-0 ceph-mon[192821]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:52 compute-0 python3.9[281615]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:31:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:53 compute-0 sshd-session[280571]: Connection closed by authenticating user root 193.32.162.157 port 42106 [preauth]
Dec 03 01:31:54 compute-0 python3.9[281736]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725511.8342652-138-149461265406231/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:54 compute-0 ceph-mon[192821]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:55 compute-0 ceph-mon[192821]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:55 compute-0 podman[281861]: 2025-12-03 01:31:55.716925919 +0000 UTC m=+0.123244275 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release=1214.1726694543, io.openshift.expose-services=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64)
Dec 03 01:31:55 compute-0 python3.9[281905]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:31:56 compute-0 python3.9[282029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725515.104071-182-164662718790397/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:31:57 compute-0 python3.9[282179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:31:58 compute-0 ceph-mon[192821]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:31:58 compute-0 python3.9[282300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725517.1467767-182-267736154891827/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:31:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:31:59 compute-0 python3.9[282450]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:31:59 compute-0 podman[158098]: time="2025-12-03T01:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6842 "" "Go-http-client/1.1"
Dec 03 01:32:00 compute-0 ceph-mon[192821]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:00 compute-0 sudo[282619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmpajuebnvooqcxfbkothtzvbtwhozrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725520.0657-220-9610836495337/AnsiballZ_file.py'
Dec 03 01:32:00 compute-0 podman[282576]: 2025-12-03 01:32:00.676354057 +0000 UTC m=+0.125289849 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:32:00 compute-0 sudo[282619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:00 compute-0 python3.9[282628]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:32:00 compute-0 sudo[282619]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:32:01 compute-0 sudo[282778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbobgwoznkckekehjabkajuxousiftmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725521.1890318-228-35735254675642/AnsiballZ_stat.py'
Dec 03 01:32:01 compute-0 sudo[282778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:01 compute-0 python3.9[282780]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:02 compute-0 sudo[282778]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:02 compute-0 ceph-mon[192821]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:02 compute-0 sudo[282856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lghezbyxfcuajtyikzhezrrocodkeeio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725521.1890318-228-35735254675642/AnsiballZ_file.py'
Dec 03 01:32:02 compute-0 sudo[282856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:02 compute-0 python3.9[282858]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:32:02 compute-0 sudo[282856]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:02 compute-0 sshd-session[281737]: Invalid user informix from 193.32.162.157 port 56896
Dec 03 01:32:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:04 compute-0 ceph-mon[192821]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:04 compute-0 sudo[283008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sleuxpbiprovoasejlpuelfqyfryjssy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725522.9684076-228-176128456992407/AnsiballZ_stat.py'
Dec 03 01:32:04 compute-0 sudo[283008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:04 compute-0 python3.9[283010]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:04 compute-0 sudo[283008]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:04 compute-0 sudo[283086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owyqtvibiyzerafgdhwhkwbepxkfbjda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725522.9684076-228-176128456992407/AnsiballZ_file.py'
Dec 03 01:32:04 compute-0 sudo[283086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:05 compute-0 python3.9[283088]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:32:05 compute-0 sudo[283086]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:05 compute-0 sshd-session[281737]: Connection closed by invalid user informix 193.32.162.157 port 56896 [preauth]
Dec 03 01:32:06 compute-0 ceph-mon[192821]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:06 compute-0 sudo[283239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbxvtyylvgjyoqtsypydacrhdvuqhvvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725526.1150372-251-7001270105169/AnsiballZ_file.py'
Dec 03 01:32:06 compute-0 sudo[283239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:06 compute-0 python3.9[283241]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:06 compute-0 sudo[283239]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:07 compute-0 sudo[283391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzqqbuytguosxrnhthvahmoanbjdecw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725527.2513216-259-138472447682633/AnsiballZ_stat.py'
Dec 03 01:32:07 compute-0 sudo[283391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:07 compute-0 python3.9[283393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:07 compute-0 sudo[283391]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:08 compute-0 ceph-mon[192821]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:08 compute-0 sudo[283470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnjfemzpechqsdjufabgmuevawzebnom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725527.2513216-259-138472447682633/AnsiballZ_file.py'
Dec 03 01:32:08 compute-0 sudo[283470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:08 compute-0 python3.9[283472]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:08 compute-0 sudo[283470]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:09 compute-0 sudo[283622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfhllskhrllweliumehblamatindiilp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725528.949183-271-119806120182121/AnsiballZ_stat.py'
Dec 03 01:32:09 compute-0 sudo[283622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:09 compute-0 python3.9[283624]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:09 compute-0 sudo[283622]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:09 compute-0 sudo[283631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:09 compute-0 sudo[283631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:09 compute-0 sudo[283631]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:10 compute-0 sudo[283680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:32:10 compute-0 sudo[283680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:10 compute-0 sudo[283680]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:10 compute-0 ceph-mon[192821]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:10 compute-0 sudo[283767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhgfljpimfgpqnsgocuninxonmexalu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725528.949183-271-119806120182121/AnsiballZ_file.py'
Dec 03 01:32:10 compute-0 sudo[283767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:10 compute-0 sudo[283734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:10 compute-0 sudo[283734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:10 compute-0 sudo[283734]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:10 compute-0 sudo[283778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:32:10 compute-0 sudo[283778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:10 compute-0 python3.9[283775]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:10 compute-0 sudo[283767]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:11 compute-0 sudo[283778]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:32:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8756678d-37a0-41be-a19a-05a9b31dd1c4 does not exist
Dec 03 01:32:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ecf32252-2cf3-4dc1-893a-0fe560a4b890 does not exist
Dec 03 01:32:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a9764b25-acf4-460d-a091-e3a99cb72c41 does not exist
Dec 03 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:32:11 compute-0 sudo[284000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkimspfspcicsfueufkrazpwpdxrkcwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725530.7556798-283-52218853427539/AnsiballZ_systemd.py'
Dec 03 01:32:11 compute-0 sudo[284000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:11 compute-0 sudo[283965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:11 compute-0 sudo[283965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:11 compute-0 sudo[283965]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:11 compute-0 sudo[284010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:32:11 compute-0 sudo[284010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:11 compute-0 sudo[284010]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:11 compute-0 sudo[284035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:11 compute-0 sudo[284035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:11 compute-0 sudo[284035]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:11 compute-0 python3.9[284007]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:32:11 compute-0 systemd[1]: Reloading.
Dec 03 01:32:11 compute-0 systemd-sysv-generator[284116]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:32:11 compute-0 systemd-rc-local-generator[284113]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:32:12 compute-0 sudo[284060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:32:12 compute-0 sudo[284060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:12 compute-0 sudo[284000]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:12 compute-0 ceph-mon[192821]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.253472) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532253522, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3473905, "memory_usage": 3536360, "flush_reason": "Manual Compaction"}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532279449, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3409130, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9686, "largest_seqno": 11725, "table_properties": {"data_size": 3399866, "index_size": 5886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17858, "raw_average_key_size": 19, "raw_value_size": 3381497, "raw_average_value_size": 3687, "num_data_blocks": 267, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725298, "oldest_key_time": 1764725298, "file_creation_time": 1764725532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 26011 microseconds, and 14058 cpu microseconds.
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.279490) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3409130 bytes OK
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.279507) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282017) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282033) EVENT_LOG_v1 {"time_micros": 1764725532282028, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282050) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3465398, prev total WAL file size 3465398, number of live WAL files 2.
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.283115) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3329KB)], [26(5929KB)]
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532283192, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9480436, "oldest_snapshot_seqno": -1}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3688 keys, 7829534 bytes, temperature: kUnknown
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532331457, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7829534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7801305, "index_size": 17879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88604, "raw_average_key_size": 24, "raw_value_size": 7731152, "raw_average_value_size": 2096, "num_data_blocks": 775, "num_entries": 3688, "num_filter_entries": 3688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.331749) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7829534 bytes
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.334052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 161.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4202, records dropped: 514 output_compression: NoCompression
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.334075) EVENT_LOG_v1 {"time_micros": 1764725532334065, "job": 10, "event": "compaction_finished", "compaction_time_micros": 48391, "compaction_time_cpu_micros": 24454, "output_level": 6, "num_output_files": 1, "total_output_size": 7829534, "num_input_records": 4202, "num_output_records": 3688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532334945, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532336265, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.566216702 +0000 UTC m=+0.069116687 container create bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:32:12 compute-0 systemd[1]: Started libpod-conmon-bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a.scope.
Dec 03 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.542592428 +0000 UTC m=+0.045492453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:32:12 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.677924381 +0000 UTC m=+0.180824386 container init bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.68889388 +0000 UTC m=+0.191793875 container start bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.69293512 +0000 UTC m=+0.195835105 container attach bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:32:12 compute-0 awesome_almeida[284275]: 167 167
Dec 03 01:32:12 compute-0 systemd[1]: libpod-bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a.scope: Deactivated successfully.
Dec 03 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.710028036 +0000 UTC m=+0.212928031 container died bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7525c70556dfba5e5a7515b14eb1058dabc07ad9612023d50358e06ef62a25d8-merged.mount: Deactivated successfully.
Dec 03 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.781031474 +0000 UTC m=+0.283931459 container remove bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:32:12 compute-0 systemd[1]: libpod-conmon-bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a.scope: Deactivated successfully.
Dec 03 01:32:12 compute-0 sudo[284340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsefflyqskzrgtjxbttwityzpvlcsnoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725532.3542159-291-240526236946615/AnsiballZ_stat.py'
Dec 03 01:32:12 compute-0 sudo[284340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:13 compute-0 python3.9[284344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.026764081 +0000 UTC m=+0.088314211 container create 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:32:13 compute-0 sudo[284340]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:12.99409668 +0000 UTC m=+0.055646860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:32:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:13 compute-0 systemd[1]: Started libpod-conmon-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope.
Dec 03 01:32:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.213255531 +0000 UTC m=+0.274805701 container init 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.242243643 +0000 UTC m=+0.303793813 container start 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.24946985 +0000 UTC m=+0.311020020 container attach 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 01:32:13 compute-0 ceph-mon[192821]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:13 compute-0 sudo[284445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aauaxpfapasgotuklsciqeurxxmsxbex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725532.3542159-291-240526236946615/AnsiballZ_file.py'
Dec 03 01:32:13 compute-0 sudo[284445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:13 compute-0 python3.9[284447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:13 compute-0 sudo[284445]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:14 compute-0 goofy_kilby[284373]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:32:14 compute-0 goofy_kilby[284373]: --> relative data size: 1.0
Dec 03 01:32:14 compute-0 goofy_kilby[284373]: --> All data devices are unavailable
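[Annotation] The three goofy_kilby lines above are ceph-volume's batch report: three LVM devices were passed as data devices and all were rejected as unavailable, which is the expected outcome when the LVs already carry prepared OSDs (the lvm list output later in this log confirms they do). A hypothetical way to see why ceph-volume rejects a device is its JSON inventory; the field names below ("available", "rejected_reasons") are assumed from that inventory format, so treat this as a sketch rather than a verified probe of this host:

    # Hypothetical check: list devices ceph-volume considers unavailable and why.
    # Assumes ceph-volume's JSON inventory shape ("available", "rejected_reasons").
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev.get("path"), "rejected:", ", ".join(dev.get("rejected_reasons", [])))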
Dec 03 01:32:14 compute-0 sudo[284620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qttnzpwpogprjpljjynsdcaaksofyybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725533.9519193-303-133227023755765/AnsiballZ_stat.py'
Dec 03 01:32:14 compute-0 sudo[284620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:14 compute-0 systemd[1]: libpod-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope: Deactivated successfully.
Dec 03 01:32:14 compute-0 podman[284350]: 2025-12-03 01:32:14.566948208 +0000 UTC m=+1.628498398 container died 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:32:14 compute-0 systemd[1]: libpod-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope: Consumed 1.267s CPU time.
Dec 03 01:32:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f-merged.mount: Deactivated successfully.
Dec 03 01:32:14 compute-0 podman[284350]: 2025-12-03 01:32:14.676008945 +0000 UTC m=+1.737559085 container remove 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:32:14 compute-0 systemd[1]: libpod-conmon-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope: Deactivated successfully.
Dec 03 01:32:14 compute-0 sudo[284060]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:14 compute-0 python3.9[284623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:14 compute-0 sudo[284620]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:14 compute-0 sudo[284637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:14 compute-0 sudo[284637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:14 compute-0 sudo[284637]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:15 compute-0 sudo[284664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:32:15 compute-0 sudo[284664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:15 compute-0 sudo[284664]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:15 compute-0 sudo[284713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:15 compute-0 podman[284688]: 2025-12-03 01:32:15.135870436 +0000 UTC m=+0.112867011 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:32:15 compute-0 sudo[284713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:15 compute-0 sudo[284713]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:15 compute-0 podman[284689]: 2025-12-03 01:32:15.146561378 +0000 UTC m=+0.118511056 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64)
Dec 03 01:32:15 compute-0 podman[284690]: 2025-12-03 01:32:15.168685832 +0000 UTC m=+0.138641485 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 03 01:32:15 compute-0 podman[284691]: 2025-12-03 01:32:15.192032099 +0000 UTC m=+0.144108814 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller)
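[Annotation] The four health_status events above (node_exporter, openstack_network_exporter, ceilometer_agent_compute, ovn_controller) are podman's periodic health checks firing; each container's config_data shows the healthcheck test command and the directory mounted at /openstack for it. A minimal sketch of polling one container's health the same way, assuming only the podman CLI and using a container name taken from the log:

    # Minimal sketch: run a container's configured health check on demand.
    # Exit code 0 means healthy; "node_exporter" is the container name logged above.
    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "node_exporter"])
    print("healthy" if result.returncode == 0 else "unhealthy")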
Dec 03 01:32:15 compute-0 sudo[284790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:32:15 compute-0 sudo[284790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.773094258 +0000 UTC m=+0.098577441 container create 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.745084374 +0000 UTC m=+0.070567547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:32:15 compute-0 systemd[1]: Started libpod-conmon-68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427.scope.
Dec 03 01:32:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.928065988 +0000 UTC m=+0.253549211 container init 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.943681314 +0000 UTC m=+0.269164487 container start 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.949796141 +0000 UTC m=+0.275279304 container attach 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:32:15 compute-0 affectionate_meninsky[284899]: 167 167
Dec 03 01:32:15 compute-0 systemd[1]: libpod-68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427.scope: Deactivated successfully.
Dec 03 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.956320589 +0000 UTC m=+0.281803762 container died 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-08c87877285bc1092a340dacfb54354b5446b76748e4df13d0fc70cf407f6fd1-merged.mount: Deactivated successfully.
Dec 03 01:32:16 compute-0 podman[284859]: 2025-12-03 01:32:16.034250666 +0000 UTC m=+0.359733839 container remove 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:32:16 compute-0 systemd[1]: libpod-conmon-68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427.scope: Deactivated successfully.
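[Annotation] The lifecycle above (container create, start, attach, died, remove within about a second) is the shape of every cephadm ceph-volume call in this log: a throwaway container from the pinned ceph image, attached only long enough to capture stdout. The "167 167" printed by affectionate_meninsky is consistent with cephadm probing the image for the ceph user's uid/gid (167:167 in ceph images) before launching the real command. A rough sketch of that run-and-capture pattern, assuming only the podman CLI — the flags and mounts here are illustrative, not cephadm's exact invocation:

    # Rough sketch of the create/start/attach/remove pattern visible above.
    # The image digest is the one pinned throughout this log; mounts are illustrative.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "--net=host",
         "-v", "/dev:/dev", "-v", "/var/lib/ceph:/var/lib/ceph",
         IMAGE, "ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)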
Dec 03 01:32:16 compute-0 sudo[284967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npykazclcpfutasuitpzhojqrfrnatat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725533.9519193-303-133227023755765/AnsiballZ_file.py'
Dec 03 01:32:16 compute-0 sudo[284967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:16 compute-0 ceph-mon[192821]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.298135768 +0000 UTC m=+0.106508487 container create 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.236344312 +0000 UTC m=+0.044717101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:32:16 compute-0 python3.9[284969]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:16 compute-0 sudo[284967]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:16 compute-0 systemd[1]: Started libpod-conmon-15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680.scope.
Dec 03 01:32:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.50194944 +0000 UTC m=+0.310322209 container init 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.519228112 +0000 UTC m=+0.327600831 container start 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.525991317 +0000 UTC m=+0.334364086 container attach 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:32:17 compute-0 sshd-session[283113]: Connection closed by authenticating user root 193.32.162.157 port 39846 [preauth]
Dec 03 01:32:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:17 compute-0 sudo[285150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdlfsiqhekjmmopboyqnkqdzvmpnnvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725536.6884186-315-279796073253362/AnsiballZ_systemd.py'
Dec 03 01:32:17 compute-0 sudo[285150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]: {
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:     "0": [
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:         {
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "devices": [
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "/dev/loop3"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             ],
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_name": "ceph_lv0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_size": "21470642176",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "name": "ceph_lv0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "tags": {
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cluster_name": "ceph",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.crush_device_class": "",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.encrypted": "0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osd_id": "0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.type": "block",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.vdo": "0"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             },
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "type": "block",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "vg_name": "ceph_vg0"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:         }
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:     ],
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:     "1": [
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:         {
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "devices": [
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "/dev/loop4"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             ],
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_name": "ceph_lv1",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_size": "21470642176",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "name": "ceph_lv1",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "tags": {
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cluster_name": "ceph",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.crush_device_class": "",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.encrypted": "0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osd_id": "1",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.type": "block",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.vdo": "0"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             },
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "type": "block",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "vg_name": "ceph_vg1"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:         }
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:     ],
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:     "2": [
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:         {
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "devices": [
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "/dev/loop5"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             ],
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_name": "ceph_lv2",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_size": "21470642176",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "name": "ceph_lv2",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "tags": {
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.cluster_name": "ceph",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.crush_device_class": "",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.encrypted": "0",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osd_id": "2",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.type": "block",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:                 "ceph.vdo": "0"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             },
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "type": "block",
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:             "vg_name": "ceph_vg2"
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:         }
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]:     ]
Dec 03 01:32:17 compute-0 relaxed_kalam[284991]: }
Dec 03 01:32:17 compute-0 systemd[1]: libpod-15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680.scope: Deactivated successfully.
Dec 03 01:32:17 compute-0 podman[284975]: 2025-12-03 01:32:17.401576404 +0000 UTC m=+1.209949093 container died 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0-merged.mount: Deactivated successfully.
Dec 03 01:32:17 compute-0 podman[284975]: 2025-12-03 01:32:17.486644236 +0000 UTC m=+1.295016925 container remove 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:32:17 compute-0 systemd[1]: libpod-conmon-15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680.scope: Deactivated successfully.
Dec 03 01:32:17 compute-0 sudo[284790]: pam_unix(sudo:session): session closed for user root
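[Annotation] In the JSON that relaxed_kalam printed above, each logical volume carries its metadata twice: once as the flat lv_tags string LVM stores on the LV, and once pre-split into the "tags" map. The two are mechanically related — comma-separated "key=value" pairs, and in this log no value contains a comma — as this small sketch shows, using an abbreviated copy of the osd.0 tag string from the output:

    # The flat LVM tag string and the "tags" map above carry the same data.
    # Abbreviated from the osd.0 entry in the log; no value here contains a comma.
    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
               "ceph.osd_id=0,"
               "ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,"
               "ceph.type=block")

    tags = dict(pair.split("=", 1) for pair in lv_tags.split(","))
    assert tags["ceph.osd_id"] == "0"
    print(tags)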
Dec 03 01:32:17 compute-0 sudo[285164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:17 compute-0 sudo[285164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:17 compute-0 sudo[285164]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:17 compute-0 python3.9[285152]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:32:17 compute-0 systemd[1]: Reloading.
Dec 03 01:32:17 compute-0 systemd-rc-local-generator[285235]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:32:17 compute-0 systemd-sysv-generator[285243]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:32:18 compute-0 ceph-mon[192821]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:18 compute-0 sudo[285189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:32:18 compute-0 sudo[285189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:18 compute-0 sudo[285189]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:18 compute-0 systemd[1]: Starting Create netns directory...
Dec 03 01:32:18 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 03 01:32:18 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 03 01:32:18 compute-0 systemd[1]: Finished Create netns directory.
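[Annotation] The sequence just above (Reloading, the SysV generator warnings, then the oneshot netns-placeholder service starting, deactivating, and finishing) is driven by the ansible systemd call at 01:32:17, which asked for daemon_reload=True, enabled=True, state=started. Roughly the same effect as a direct shell-out, as a sketch:

    # Roughly what the ansible systemd module was asked to do above:
    # reload unit files, enable the unit, and start it.
    import subprocess

    unit = "netns-placeholder.service"
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", unit],
                ["systemctl", "start", unit]):
        subprocess.run(cmd, check=True)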
Dec 03 01:32:18 compute-0 sudo[285258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:18 compute-0 sudo[285258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:18 compute-0 sudo[285150]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:18 compute-0 sudo[285258]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:18 compute-0 podman[285250]: 2025-12-03 01:32:18.440892301 +0000 UTC m=+0.149694987 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:32:18 compute-0 sudo[285299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:32:18 compute-0 sudo[285299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.104435621 +0000 UTC m=+0.075538672 container create 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.076847298 +0000 UTC m=+0.047950379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:32:19 compute-0 systemd[1]: Started libpod-conmon-06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce.scope.
Dec 03 01:32:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.251143606 +0000 UTC m=+0.222246707 container init 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.268784467 +0000 UTC m=+0.239887528 container start 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.276390505 +0000 UTC m=+0.247493606 container attach 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:32:19 compute-0 stoic_buck[285475]: 167 167
Dec 03 01:32:19 compute-0 systemd[1]: libpod-06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce.scope: Deactivated successfully.
Dec 03 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.281008861 +0000 UTC m=+0.252111912 container died 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 03 01:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-22612e3c41728e8a109c6f94ae26994c823015f3825580597de5f19be988745f-merged.mount: Deactivated successfully.
Dec 03 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.369112775 +0000 UTC m=+0.340215806 container remove 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:32:19 compute-0 systemd[1]: libpod-conmon-06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce.scope: Deactivated successfully.
Dec 03 01:32:19 compute-0 sudo[285549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duovbtwkpuelgitmksdqclckeoybexsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725538.843982-325-253690808638911/AnsiballZ_file.py'
Dec 03 01:32:19 compute-0 sudo[285549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.626710846 +0000 UTC m=+0.064352207 container create 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:32:19 compute-0 systemd[1]: Started libpod-conmon-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope.
Dec 03 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.604624773 +0000 UTC m=+0.042266184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:32:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:19 compute-0 python3.9[285555]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.794159286 +0000 UTC m=+0.231800667 container init 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:32:19 compute-0 sudo[285549]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.814773849 +0000 UTC m=+0.252415240 container start 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.821347238 +0000 UTC m=+0.258988649 container attach 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:32:20 compute-0 ceph-mon[192821]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:20 compute-0 sudo[285735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isbckbuapezkocpcoxxdchznupwxqjmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725540.0959888-333-178416649302138/AnsiballZ_stat.py'
Dec 03 01:32:20 compute-0 sudo[285735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:20 compute-0 python3.9[285738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:20 compute-0 sudo[285735]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:20 compute-0 determined_satoshi[285571]: {
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "osd_id": 2,
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "type": "bluestore"
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:     },
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "osd_id": 1,
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "type": "bluestore"
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:     },
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "osd_id": 0,
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:         "type": "bluestore"
Dec 03 01:32:20 compute-0 determined_satoshi[285571]:     }
Dec 03 01:32:20 compute-0 determined_satoshi[285571]: }
Dec 03 01:32:20 compute-0 systemd[1]: libpod-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope: Deactivated successfully.
Dec 03 01:32:20 compute-0 podman[285553]: 2025-12-03 01:32:20.987865696 +0000 UTC m=+1.425507067 container died 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:32:20 compute-0 systemd[1]: libpod-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope: Consumed 1.167s CPU time.
Dec 03 01:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f-merged.mount: Deactivated successfully.
Dec 03 01:32:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:21 compute-0 podman[285553]: 2025-12-03 01:32:21.09828816 +0000 UTC m=+1.535929531 container remove 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:32:21 compute-0 systemd[1]: libpod-conmon-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope: Deactivated successfully.
Dec 03 01:32:21 compute-0 sudo[285299]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:32:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:32:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:32:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:32:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3a699973-21f0-4b11-9c99-2851c61eec87 does not exist
Dec 03 01:32:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cbb94639-9b8d-46ef-bc73-468b37ee079f does not exist
Dec 03 01:32:21 compute-0 sudo[285817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:32:21 compute-0 sudo[285817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:21 compute-0 sudo[285817]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:21 compute-0 sudo[285863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:32:21 compute-0 sudo[285863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:32:21 compute-0 sudo[285863]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:21 compute-0 sudo[285938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voguqgggilpiztgdgsxwarjhmrynyhdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725540.0959888-333-178416649302138/AnsiballZ_copy.py'
Dec 03 01:32:21 compute-0 sudo[285938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:21 compute-0 python3.9[285940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725540.0959888-333-178416649302138/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:32:21 compute-0 sudo[285938]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:22 compute-0 ceph-mon[192821]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:32:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:32:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:22 compute-0 sudo[286090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddonkmijjcvqwfyuddniviaejkdacemw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725542.360523-350-259588253950593/AnsiballZ_file.py'
Dec 03 01:32:22 compute-0 sudo[286090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:23 compute-0 python3.9[286092]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:32:23 compute-0 sudo[286090]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:23 compute-0 sudo[286242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfbnqcvbwgiorqymougnqjeidxmgxiqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725543.3989627-358-228609382032875/AnsiballZ_stat.py'
Dec 03 01:32:23 compute-0 sudo[286242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:24 compute-0 ceph-mon[192821]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:24 compute-0 python3.9[286244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:32:24 compute-0 sudo[286242]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:24 compute-0 sudo[286365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llfbehubyborabvecggrmnjwgqrprtnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725543.3989627-358-228609382032875/AnsiballZ_copy.py'
Dec 03 01:32:24 compute-0 sudo[286365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:25 compute-0 python3.9[286367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725543.3989627-358-228609382032875/.source.json _original_basename=.fziwbwmr follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:25 compute-0 sudo[286365]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:26 compute-0 sudo[286533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsfavqciqaryrsqauvhnudesghigslco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725545.4497795-373-211275462883770/AnsiballZ_file.py'
Dec 03 01:32:26 compute-0 sudo[286533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:26 compute-0 ceph-mon[192821]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:26 compute-0 podman[286493]: 2025-12-03 01:32:26.260826033 +0000 UTC m=+0.187322784 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9)
Dec 03 01:32:26 compute-0 python3.9[286539]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:26 compute-0 sudo[286533]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:26 compute-0 sshd-session[286368]: Invalid user cc from 103.146.202.174 port 42124
Dec 03 01:32:26 compute-0 sshd-session[286368]: Received disconnect from 103.146.202.174 port 42124:11: Bye Bye [preauth]
Dec 03 01:32:26 compute-0 sshd-session[286368]: Disconnected from invalid user cc 103.146.202.174 port 42124 [preauth]
Dec 03 01:32:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:27 compute-0 sudo[286692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmbcgfjedngctrlftssfswtujtigwube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725546.7778127-381-171462859462122/AnsiballZ_stat.py'
Dec 03 01:32:27 compute-0 sudo[286692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:27 compute-0 sshd-session[286640]: Received disconnect from 34.66.72.251 port 44716:11: Bye Bye [preauth]
Dec 03 01:32:27 compute-0 sshd-session[286640]: Disconnected from authenticating user root 34.66.72.251 port 44716 [preauth]
Dec 03 01:32:27 compute-0 sudo[286692]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:28 compute-0 ceph-mon[192821]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:32:28
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data']
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:32:28 compute-0 sudo[286815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipfhbqmoivsyiseaywdwnkxxpqfuziap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725546.7778127-381-171462859462122/AnsiballZ_copy.py'
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:32:28 compute-0 sudo[286815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:32:28 compute-0 sudo[286815]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:28 compute-0 sshd-session[285141]: Connection closed by authenticating user root 193.32.162.157 port 41332 [preauth]
Dec 03 01:32:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:29 compute-0 podman[158098]: time="2025-12-03T01:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 03 01:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6843 "" "Go-http-client/1.1"
Dec 03 01:32:30 compute-0 sshd-session[286841]: Invalid user localhost from 173.249.50.59 port 32876
Dec 03 01:32:30 compute-0 sshd-session[286841]: Received disconnect from 173.249.50.59 port 32876:11: Bye Bye [preauth]
Dec 03 01:32:30 compute-0 sshd-session[286841]: Disconnected from invalid user localhost 173.249.50.59 port 32876 [preauth]
Dec 03 01:32:30 compute-0 ceph-mon[192821]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:30 compute-0 sudo[286969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtamvfhttslyriwbigywatnwuukxkbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725549.7141519-398-122022506083379/AnsiballZ_container_config_data.py'
Dec 03 01:32:30 compute-0 sudo[286969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:30 compute-0 python3.9[286971]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 03 01:32:30 compute-0 sudo[286969]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:30 compute-0 podman[286972]: 2025-12-03 01:32:30.835411677 +0000 UTC m=+0.102035976 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:32:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:31 compute-0 ceph-mon[192821]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:32:32 compute-0 sudo[287144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvhjawgjzcaiuxjwqdjwxlxtqdhdldom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725551.0232446-407-110212418271920/AnsiballZ_container_config_hash.py'
Dec 03 01:32:32 compute-0 sudo[287144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:32 compute-0 python3.9[287146]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:32:32 compute-0 sudo[287144]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:33 compute-0 sudo[287296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqellysjxjlpwrihyukgzlctqsnuwlrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725552.8189063-416-4751456841574/AnsiballZ_podman_container_info.py'
Dec 03 01:32:33 compute-0 sudo[287296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:33 compute-0 python3.9[287298]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 03 01:32:34 compute-0 ceph-mon[192821]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:34 compute-0 sudo[287296]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:36 compute-0 sudo[287474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olxtuhpsvadbcjfcbkmvjskffumbqutn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764725555.2058134-429-85833703797286/AnsiballZ_edpm_container_manage.py'
Dec 03 01:32:36 compute-0 sudo[287474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:36 compute-0 ceph-mon[192821]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:36 compute-0 python3[287476]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:32:38 compute-0 ceph-mon[192821]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:40 compute-0 ceph-mon[192821]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:42 compute-0 ceph-mon[192821]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:43 compute-0 ceph-mon[192821]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:46 compute-0 sshd-session[287544]: Received disconnect from 80.253.31.232 port 43022:11: Bye Bye [preauth]
Dec 03 01:32:46 compute-0 sshd-session[287544]: Disconnected from authenticating user root 80.253.31.232 port 43022 [preauth]
Dec 03 01:32:46 compute-0 ceph-mon[192821]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:46 compute-0 podman[287550]: 2025-12-03 01:32:46.962066406 +0000 UTC m=+1.567010751 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:32:46 compute-0 podman[287551]: 2025-12-03 01:32:46.978248967 +0000 UTC m=+1.581382722 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7)
Dec 03 01:32:46 compute-0 podman[287552]: 2025-12-03 01:32:46.980442817 +0000 UTC m=+1.583064658 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:32:47 compute-0 podman[287553]: 2025-12-03 01:32:47.003917008 +0000 UTC m=+1.598253833 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:32:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:47 compute-0 podman[287490]: 2025-12-03 01:32:47.658688239 +0000 UTC m=+11.033525872 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 01:32:47 compute-0 podman[287669]: 2025-12-03 01:32:47.9500146 +0000 UTC m=+0.094617963 container create 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 03 01:32:47 compute-0 podman[287669]: 2025-12-03 01:32:47.902922685 +0000 UTC m=+0.047526068 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 01:32:47 compute-0 python3[287476]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 01:32:48 compute-0 ceph-mon[192821]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:48 compute-0 sudo[287474]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:48 compute-0 podman[287801]: 2025-12-03 01:32:48.863356497 +0000 UTC m=+0.115366709 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 03 01:32:49 compute-0 sudo[287875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjpstkyccuzpovciuxyeboxzjkmuekbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725568.5043888-437-99472190182512/AnsiballZ_stat.py'
Dec 03 01:32:49 compute-0 sudo[287875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:49 compute-0 python3.9[287877]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:32:49 compute-0 sudo[287875]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:50 compute-0 ceph-mon[192821]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:50 compute-0 sudo[288030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-equwsciquepxtnwdbxthowepfcjeyyfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725569.7868657-446-8819397634791/AnsiballZ_file.py'
Dec 03 01:32:50 compute-0 sudo[288030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:50 compute-0 python3.9[288032]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:50 compute-0 sudo[288030]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:51 compute-0 sudo[288106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzmkcivubqaljkybahkwedelafrzlrcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725569.7868657-446-8819397634791/AnsiballZ_stat.py'
Dec 03 01:32:51 compute-0 sudo[288106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:51 compute-0 python3.9[288108]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:32:51 compute-0 sudo[288106]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:52 compute-0 sudo[288257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiohrdjfkydgdgbgfzelxrkvinahzrgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725571.4598265-446-134686505984931/AnsiballZ_copy.py'
Dec 03 01:32:52 compute-0 sudo[288257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:52 compute-0 ceph-mon[192821]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:52 compute-0 python3.9[288259]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764725571.4598265-446-134686505984931/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:32:52 compute-0 sudo[288257]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:52 compute-0 sudo[288333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihckkgwyzgitqzjgotmvyhephbczsaki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725571.4598265-446-134686505984931/AnsiballZ_systemd.py'
Dec 03 01:32:52 compute-0 sudo[288333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:54 compute-0 ceph-mon[192821]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:54 compute-0 python3.9[288335]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:32:54 compute-0 systemd[1]: Reloading.
Dec 03 01:32:54 compute-0 systemd-rc-local-generator[288363]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:32:54 compute-0 systemd-sysv-generator[288367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:32:55 compute-0 sudo[288333]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:55 compute-0 ceph-mon[192821]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:56 compute-0 sudo[288445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bijqhmfbpxhgidsvfojhwsoowokxzokq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725571.4598265-446-134686505984931/AnsiballZ_systemd.py'
Dec 03 01:32:56 compute-0 sudo[288445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:32:56 compute-0 python3.9[288447]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:32:56 compute-0 systemd[1]: Reloading.
Dec 03 01:32:56 compute-0 podman[288449]: 2025-12-03 01:32:56.766661945 +0000 UTC m=+0.142915342 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container)
Dec 03 01:32:56 compute-0 systemd-rc-local-generator[288497]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:32:56 compute-0 systemd-sysv-generator[288500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:32:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:57 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 03 01:32:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02c71fafd679e995a36529ccc3f301be28fb64ad6b23ce21a437cb97af0b4eb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02c71fafd679e995a36529ccc3f301be28fb64ad6b23ce21a437cb97af0b4eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 01:32:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.
Dec 03 01:32:57 compute-0 podman[288508]: 2025-12-03 01:32:57.44724336 +0000 UTC m=+0.260781538 container init 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + sudo -E kolla_set_configs
Dec 03 01:32:57 compute-0 podman[288508]: 2025-12-03 01:32:57.495825976 +0000 UTC m=+0.309364154 container start 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:32:57 compute-0 edpm-start-podman-container[288508]: ovn_metadata_agent
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Validating config file
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Copying service configuration files
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Writing out command to execute
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: ++ cat /run_command
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + CMD=neutron-ovn-metadata-agent
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + ARGS=
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + sudo kolla_copy_cacerts
Dec 03 01:32:57 compute-0 edpm-start-podman-container[288507]: Creating additional drop-in dependency for "ovn_metadata_agent" (5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6)
Dec 03 01:32:57 compute-0 podman[288530]: 2025-12-03 01:32:57.6373803 +0000 UTC m=+0.124628883 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + [[ ! -n '' ]]
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + . kolla_extend_start
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: Running command: 'neutron-ovn-metadata-agent'
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + umask 0022
Dec 03 01:32:57 compute-0 ovn_metadata_agent[288523]: + exec neutron-ovn-metadata-agent
Dec 03 01:32:57 compute-0 systemd[1]: Reloading.
Dec 03 01:32:57 compute-0 systemd-sysv-generator[288599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:32:57 compute-0 systemd-rc-local-generator[288593]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:32:58 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 03 01:32:58 compute-0 sudo[288445]: pam_unix(sudo:session): session closed for user root
Dec 03 01:32:58 compute-0 ceph-mon[192821]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:32:58 compute-0 sshd-session[278892]: Connection closed by 192.168.122.30 port 41004
Dec 03 01:32:58 compute-0 sshd-session[278878]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:32:58 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 03 01:32:58 compute-0 systemd[1]: session-53.scope: Consumed 1min 34.303s CPU time.
Dec 03 01:32:58 compute-0 systemd-logind[800]: Session 53 logged out. Waiting for processes to exit.
Dec 03 01:32:58 compute-0 systemd-logind[800]: Removed session 53.
Dec 03 01:32:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.537 288528 INFO neutron.common.config [-] Logging enabled!
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.537 288528 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.537 288528 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.586 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.586 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.586 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.587 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.587 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.601 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name eda9fd7d-f2b1-4121-b9ac-fc31f8426272 (UUID: eda9fd7d-f2b1-4121-b9ac-fc31f8426272) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.642 288528 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.643 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.643 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.643 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.646 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.652 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.657 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'eda9fd7d-f2b1-4121-b9ac-fc31f8426272'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], external_ids={}, name=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, nb_cfg_timestamp=1764723909412, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.658 288528 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f652f23de80>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.660 288528 INFO oslo_service.service [-] Starting 1 workers
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.664 288528 DEBUG oslo_service.service [-] Started child 288634 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.667 288528 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpox5gvgqk/privsep.sock']
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.669 288634 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-429760'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.704 288634 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.705 288634 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.705 288634 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.713 288634 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.722 288634 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 03 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.731 288634 INFO eventlet.wsgi.server [-] (288634) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 03 01:32:59 compute-0 podman[158098]: time="2025-12-03T01:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7279 "" "Go-http-client/1.1"
Dec 03 01:33:00 compute-0 ceph-mon[192821]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.400 288528 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.401 288528 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpox5gvgqk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.270 288639 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.277 288639 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.281 288639 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.281 288639 INFO oslo.privsep.daemon [-] privsep daemon running as pid 288639
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.406 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[12b09bcb-2264-4ae1-938f-1a29616dbed8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.926 288639 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.926 288639 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.926 288639 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:33:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.489 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[e48bf436-638a-4017-9606-d9ca126af20a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.491 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, column=external_ids, values=({'neutron:ovn-metadata-id': 'dfc124bb-8fd2-5454-9234-248aae16aad5'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.509 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.545 288528 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.545 288528 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.545 288528 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.547 288528 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.547 288528 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.547 288528 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.548 288528 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.548 288528 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.568 288528 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.568 288528 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.568 288528 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.569 288528 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.569 288528 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.569 288528 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.570 288528 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.570 288528 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.570 288528 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.571 288528 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.571 288528 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.572 288528 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.572 288528 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.572 288528 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.583 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.583 288528 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
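
The block above, ending in the row of asterisks, is oslo.config's standard startup dump: at DEBUG level the service logs every registered option as "group.option = value" via ConfigOpts.log_opt_values() (the cfg.py:2609/2613 references in each line). A minimal sketch of that mechanism, assuming only two illustrative options rather than the full oslo.messaging option set the real agent registers:

    import logging

    from oslo_config import cfg

    # Register two representative [oslo_messaging_rabbit] options; the real
    # agent registers the whole RabbitMQ driver option set.
    opts = [
        cfg.IntOpt("heartbeat_rate", default=2),
        cfg.BoolOpt("ssl", default=False),
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts, group="oslo_messaging_rabbit")
    conf([], project="demo")  # parse (empty) CLI args and config files

    # This is the call that produces the per-option DEBUG lines above,
    # terminated by the line of asterisks.
    logging.basicConfig(level=logging.DEBUG)
    conf.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
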
Dec 03 01:33:01 compute-0 podman[288644]: 2025-12-03 01:33:01.90837816 +0000 UTC m=+0.155768473 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:33:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:02 compute-0 ceph-mon[192821]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:03 compute-0 ceph-mon[192821]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:04 compute-0 sshd-session[288668]: Accepted publickey for zuul from 192.168.122.30 port 60532 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:33:04 compute-0 systemd-logind[800]: New session 54 of user zuul.
Dec 03 01:33:04 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 03 01:33:04 compute-0 sshd-session[288668]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:33:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:06 compute-0 python3.9[288821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:33:06 compute-0 ceph-mon[192821]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:07 compute-0 sudo[288975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsymhurpxskhuibyzubxvuvwozmdhfar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725586.9097881-34-277637958093039/AnsiballZ_command.py'
Dec 03 01:33:07 compute-0 sudo[288975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:07 compute-0 python3.9[288977]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:07 compute-0 sudo[288975]: pam_unix(sudo:session): session closed for user root
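
The ansible-ansible.legacy.command task above probes whether a legacy nova_virtlogd container still exists before the adoption play disables the tripleo_nova_* units further down. A hedged standalone equivalent of that probe (assumes podman is installed locally; prints the matching container name, if any):

    import subprocess

    # Mirror the logged task: list all containers whose name matches
    # ^nova_virtlogd$ and emit only their names.
    result = subprocess.run(
        ["podman", "ps", "-a",
         "--filter", "name=^nova_virtlogd$",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip() or "no nova_virtlogd container found")
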
Dec 03 01:33:08 compute-0 ceph-mon[192821]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:09 compute-0 sudo[289140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dndcwaydprukmjkqntkwtgwaiptixdkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725588.6291945-45-132415377538963/AnsiballZ_systemd_service.py'
Dec 03 01:33:09 compute-0 sudo[289140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:09 compute-0 python3.9[289142]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:33:09 compute-0 systemd[1]: Reloading.
Dec 03 01:33:10 compute-0 systemd-rc-local-generator[289169]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:33:10 compute-0 systemd-sysv-generator[289172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:33:10 compute-0 ceph-mon[192821]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:10 compute-0 sudo[289140]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:11 compute-0 python3.9[289327]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:33:12 compute-0 network[289344]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:33:12 compute-0 network[289345]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:33:12 compute-0 network[289346]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:33:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:12 compute-0 ceph-mon[192821]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:14 compute-0 ceph-mon[192821]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:16 compute-0 ceph-mon[192821]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:17 compute-0 ceph-mon[192821]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:17 compute-0 podman[289522]: 2025-12-03 01:33:17.858257678 +0000 UTC m=+0.097344888 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec 03 01:33:17 compute-0 podman[289520]: 2025-12-03 01:33:17.865612539 +0000 UTC m=+0.102519110 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:33:17 compute-0 podman[289514]: 2025-12-03 01:33:17.88727142 +0000 UTC m=+0.130525394 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:33:17 compute-0 podman[289525]: 2025-12-03 01:33:17.934660284 +0000 UTC m=+0.161789128 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 03 01:33:18 compute-0 sudo[289702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcqkuhuqhajucvuyeeofebrbangcbllw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725597.6905448-64-23970617304721/AnsiballZ_systemd_service.py'
Dec 03 01:33:18 compute-0 sudo[289702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:18 compute-0 python3.9[289704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:33:18 compute-0 sudo[289702]: pam_unix(sudo:session): session closed for user root
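
This systemd_service call (enabled=False, state=stopped) is the first of a series: the lines that follow repeat the same pattern for tripleo_nova_virtlogd_wrapper, virtnodedevd, virtproxyd, virtqemud, virtsecretd, and virtstoraged. A rough systemctl equivalent of the whole series, with the unit names taken directly from the log (assumes the units exist; the ansible module likewise fails on a missing unit):

    import subprocess

    units = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in units:
        # state=stopped, enabled=False from the logged module invocations
        subprocess.run(["systemctl", "stop", unit], check=True)
        subprocess.run(["systemctl", "disable", unit], check=True)
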
Dec 03 01:33:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:19 compute-0 sshd-session[289603]: Invalid user cc from 14.103.201.7 port 57606
Dec 03 01:33:19 compute-0 podman[289830]: 2025-12-03 01:33:19.831227495 +0000 UTC m=+0.141707009 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:33:20 compute-0 ceph-mon[192821]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:20 compute-0 sshd-session[289603]: Received disconnect from 14.103.201.7 port 57606:11: Bye Bye [preauth]
Dec 03 01:33:20 compute-0 sshd-session[289603]: Disconnected from invalid user cc 14.103.201.7 port 57606 [preauth]
Dec 03 01:33:20 compute-0 sudo[289875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobzofpwsbfijgzvhmlsltbayuqfhchh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725599.0020964-64-76597406032462/AnsiballZ_systemd_service.py'
Dec 03 01:33:20 compute-0 sudo[289875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:21 compute-0 python3.9[289877]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:33:21 compute-0 sudo[289875]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:21 compute-0 sudo[289952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:21 compute-0 sudo[289952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:21 compute-0 sudo[289952]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:21 compute-0 sudo[290003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:33:21 compute-0 sudo[290003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:21 compute-0 sudo[290003]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:21 compute-0 sudo[290052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:21 compute-0 sudo[290052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:21 compute-0 sudo[290052]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:21 compute-0 sudo[290102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwlssjgyvlhyvbcdgsumytbumtuppebv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725601.3360455-64-222015174695996/AnsiballZ_systemd_service.py'
Dec 03 01:33:21 compute-0 sudo[290102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:21 compute-0 sudo[290104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:33:21 compute-0 sudo[290104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:22 compute-0 python3.9[290109]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:33:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:22 compute-0 ceph-mon[192821]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:22 compute-0 sudo[290102]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:22 compute-0 podman[290203]: 2025-12-03 01:33:22.864677421 +0000 UTC m=+0.109979404 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:33:22 compute-0 podman[290203]: 2025-12-03 01:33:22.972141784 +0000 UTC m=+0.217443727 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:33:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:23 compute-0 sudo[290434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peixkejbshsdrwmkfkzaiafevppaaahl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725603.0359416-64-216201386883810/AnsiballZ_systemd_service.py'
Dec 03 01:33:23 compute-0 sudo[290434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:23 compute-0 python3.9[290443]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:33:23 compute-0 sudo[290434]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:24 compute-0 sudo[290104]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:33:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:33:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:24 compute-0 ceph-mon[192821]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:24 compute-0 sudo[290549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:24 compute-0 sudo[290549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:24 compute-0 sudo[290549]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:24 compute-0 sudo[290605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:33:24 compute-0 sudo[290605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:24 compute-0 sudo[290605]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:24 compute-0 sudo[290653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:24 compute-0 sudo[290653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:24 compute-0 sudo[290653]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:24 compute-0 sudo[290699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:33:24 compute-0 sudo[290699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:24 compute-0 sudo[290753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyeumoapljljoyfhkxpmmztnjmkqcclr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725604.1873019-64-32651718837207/AnsiballZ_systemd_service.py'
Dec 03 01:33:24 compute-0 sudo[290753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:25 compute-0 python3.9[290755]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:33:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:25 compute-0 sudo[290753]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:25 compute-0 sudo[290699]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ec3a39e7-1825-499c-92ec-925d74f66884 does not exist
Dec 03 01:33:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f2e44fd1-b3ca-4044-a463-8f98c39bb14e does not exist
Dec 03 01:33:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ca79ad86-911e-4577-ad31-70f7c4c0abca does not exist
Dec 03 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:33:25 compute-0 sudo[290864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:25 compute-0 sudo[290864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:25 compute-0 sudo[290864]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:25 compute-0 sudo[290912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:33:25 compute-0 sudo[290912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:25 compute-0 sudo[290912]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:25 compute-0 sudo[290937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:25 compute-0 sudo[290937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:25 compute-0 sudo[290937]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:26 compute-0 sudo[290985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:33:26 compute-0 sudo[290985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
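
The command above is cephadm wrapping ceph-volume in a one-shot ceph container: it batches three pre-created LVs into OSDs, with --no-systemd because cephadm manages the OSD units itself. A hedged reconstruction of the invocation shape (the --config-json piping over stdin and the pinned --image digest from the log are omitted for brevity):

    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    LVS = ["/dev/ceph_vg0/ceph_lv0",
           "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]

    # cephadm passes everything after "--" through to ceph-volume inside
    # a throwaway container, as the create/start/died events below show.
    cmd = ["cephadm", "ceph-volume", "--fsid", FSID, "--",
           "lvm", "batch", "--no-auto", *LVS, "--yes", "--no-systemd"]
    subprocess.run(cmd, check=True)
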
Dec 03 01:33:26 compute-0 sudo[291037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltoslxqvgewoggkyubpqgfetavdfbcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725605.3871794-64-47926584913357/AnsiballZ_systemd_service.py'
Dec 03 01:33:26 compute-0 sudo[291037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:26 compute-0 ceph-mon[192821]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:33:26 compute-0 python3.9[291039]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:33:26 compute-0 sudo[291037]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.642369962 +0000 UTC m=+0.079153982 container create 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.608673412 +0000 UTC m=+0.045457502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:33:26 compute-0 systemd[1]: Started libpod-conmon-508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82.scope.
Dec 03 01:33:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.798956966 +0000 UTC m=+0.235741046 container init 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.81266344 +0000 UTC m=+0.249447480 container start 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.820209656 +0000 UTC m=+0.256993676 container attach 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:33:26 compute-0 distracted_ganguly[291142]: 167 167
Dec 03 01:33:26 compute-0 systemd[1]: libpod-508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82.scope: Deactivated successfully.
Dec 03 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.822854398 +0000 UTC m=+0.259638408 container died 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a98311949d94dca6fd6911e3ab8890c046a01e8499e3861e968f6b161c136f65-merged.mount: Deactivated successfully.
Dec 03 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.919027004 +0000 UTC m=+0.355811004 container remove 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:33:26 compute-0 systemd[1]: libpod-conmon-508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82.scope: Deactivated successfully.
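
The create, init, start, attach, died, and remove events above (bracketed by the matching libpod-conmon scope) are the full lifecycle of a short-lived podman container of the kind cephadm launches. The same event stream journald captured here can be replayed from podman's event log, e.g. (assumes podman; --stream=false prints stored events and exits):

    import subprocess

    # Replay recent container lifecycle events instead of tailing journald.
    subprocess.run(["podman", "events", "--stream=false", "--since", "1h",
                    "--filter", "event=create",
                    "--filter", "event=died",
                    "--filter", "event=remove"], check=False)
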
Dec 03 01:33:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.169007388 +0000 UTC m=+0.075426580 container create 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:33:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:27 compute-0 systemd[1]: Started libpod-conmon-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope.
Dec 03 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.143473111 +0000 UTC m=+0.049892393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:33:27 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:27 compute-0 sudo[291299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyaxlrvczgosedbpniygunvnsqlzphoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725606.7395484-64-152428498584598/AnsiballZ_systemd_service.py'
Dec 03 01:33:27 compute-0 sudo[291299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:27 compute-0 podman[291254]: 2025-12-03 01:33:27.342281038 +0000 UTC m=+0.122220418 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, architecture=x86_64, distribution-scope=public)
Dec 03 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.34308766 +0000 UTC m=+0.249506902 container init 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.35409761 +0000 UTC m=+0.260516832 container start 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.36141689 +0000 UTC m=+0.267836152 container attach 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:33:27 compute-0 python3.9[291304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:33:27 compute-0 sudo[291299]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:28 compute-0 ceph-mon[192821]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:33:28
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta']
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:33:28 compute-0 beautiful_chandrasekhar[291269]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:33:28 compute-0 beautiful_chandrasekhar[291269]: --> relative data size: 1.0
Dec 03 01:33:28 compute-0 beautiful_chandrasekhar[291269]: --> All data devices are unavailable
Dec 03 01:33:28 compute-0 podman[291221]: 2025-12-03 01:33:28.687245261 +0000 UTC m=+1.593664463 container died 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:33:28 compute-0 systemd[1]: libpod-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope: Deactivated successfully.
Dec 03 01:33:28 compute-0 systemd[1]: libpod-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope: Consumed 1.260s CPU time.
Dec 03 01:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504-merged.mount: Deactivated successfully.
Dec 03 01:33:28 compute-0 podman[291221]: 2025-12-03 01:33:28.785482783 +0000 UTC m=+1.691901975 container remove 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:33:28 compute-0 systemd[1]: libpod-conmon-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope: Deactivated successfully.
Dec 03 01:33:28 compute-0 sudo[290985]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:28 compute-0 podman[291456]: 2025-12-03 01:33:28.843477806 +0000 UTC m=+0.100769352 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 03 01:33:28 compute-0 sudo[291511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujetojntvslrfywsikqviboccmptkfqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725608.1014972-116-59473236906803/AnsiballZ_file.py'
Dec 03 01:33:28 compute-0 sudo[291511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:28 compute-0 sudo[291512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:28 compute-0 sudo[291512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:28 compute-0 sudo[291512]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:29 compute-0 sudo[291540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:33:29 compute-0 sudo[291540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:29 compute-0 python3.9[291521]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:29 compute-0 sudo[291540]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:29 compute-0 sudo[291511]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:29 compute-0 sudo[291565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:29 compute-0 sudo[291565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:29 compute-0 sudo[291565]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:29 compute-0 sudo[291614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:33:29 compute-0 sudo[291614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:29 compute-0 podman[158098]: time="2025-12-03T01:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7271 "" "Go-http-client/1.1"
Dec 03 01:33:29 compute-0 podman[291784]: 2025-12-03 01:33:29.896388538 +0000 UTC m=+0.080577041 container create f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:33:29 compute-0 sudo[291818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwtgbplxnxvvehmplwcrqrdndqdnezob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725609.2977855-116-69384884907019/AnsiballZ_file.py'
Dec 03 01:33:29 compute-0 sudo[291818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:29 compute-0 podman[291784]: 2025-12-03 01:33:29.860753855 +0000 UTC m=+0.044942418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:33:29 compute-0 systemd[1]: Started libpod-conmon-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope.
Dec 03 01:33:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.075450516 +0000 UTC m=+0.259639079 container init f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.094414623 +0000 UTC m=+0.278603136 container start f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.100845319 +0000 UTC m=+0.285033872 container attach f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:33:30 compute-0 festive_swartz[291823]: 167 167
Dec 03 01:33:30 compute-0 systemd[1]: libpod-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope: Deactivated successfully.
Dec 03 01:33:30 compute-0 conmon[291823]: conmon f5c695b4dcea40ea8048 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope/container/memory.events
Dec 03 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.110633246 +0000 UTC m=+0.294821749 container died f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:33:30 compute-0 python3.9[291820]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbe857c19c710c61e6a7e5408d445c5e2aa2309edde1cfd9aed05220fc3c3c62-merged.mount: Deactivated successfully.
Dec 03 01:33:30 compute-0 sudo[291818]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.192932853 +0000 UTC m=+0.377121366 container remove f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:33:30 compute-0 systemd[1]: libpod-conmon-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope: Deactivated successfully.
Dec 03 01:33:30 compute-0 ceph-mon[192821]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.466522691 +0000 UTC m=+0.087241292 container create 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.434316762 +0000 UTC m=+0.055035423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:33:30 compute-0 systemd[1]: Started libpod-conmon-8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798.scope.
Dec 03 01:33:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.643406259 +0000 UTC m=+0.264124920 container init 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec 03 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.662875131 +0000 UTC m=+0.283593712 container start 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.66833981 +0000 UTC m=+0.289058471 container attach 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:33:31 compute-0 sudo[292014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqroejihnqlrehkbuifjgvldzcbabqyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725610.4064617-116-180284111048238/AnsiballZ_file.py'
Dec 03 01:33:31 compute-0 sudo[292014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:31 compute-0 python3.9[292016]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:31 compute-0 ceph-mon[192821]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:31 compute-0 sudo[292014]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:33:31 compute-0 clever_black[291930]: {
Dec 03 01:33:31 compute-0 clever_black[291930]:     "0": [
Dec 03 01:33:31 compute-0 clever_black[291930]:         {
Dec 03 01:33:31 compute-0 clever_black[291930]:             "devices": [
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "/dev/loop3"
Dec 03 01:33:31 compute-0 clever_black[291930]:             ],
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_name": "ceph_lv0",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_size": "21470642176",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "name": "ceph_lv0",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "tags": {
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cluster_name": "ceph",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.crush_device_class": "",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.encrypted": "0",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osd_id": "0",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.type": "block",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.vdo": "0"
Dec 03 01:33:31 compute-0 clever_black[291930]:             },
Dec 03 01:33:31 compute-0 clever_black[291930]:             "type": "block",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "vg_name": "ceph_vg0"
Dec 03 01:33:31 compute-0 clever_black[291930]:         }
Dec 03 01:33:31 compute-0 clever_black[291930]:     ],
Dec 03 01:33:31 compute-0 clever_black[291930]:     "1": [
Dec 03 01:33:31 compute-0 clever_black[291930]:         {
Dec 03 01:33:31 compute-0 clever_black[291930]:             "devices": [
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "/dev/loop4"
Dec 03 01:33:31 compute-0 clever_black[291930]:             ],
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_name": "ceph_lv1",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_size": "21470642176",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "name": "ceph_lv1",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "tags": {
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cluster_name": "ceph",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.crush_device_class": "",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.encrypted": "0",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osd_id": "1",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.type": "block",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.vdo": "0"
Dec 03 01:33:31 compute-0 clever_black[291930]:             },
Dec 03 01:33:31 compute-0 clever_black[291930]:             "type": "block",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "vg_name": "ceph_vg1"
Dec 03 01:33:31 compute-0 clever_black[291930]:         }
Dec 03 01:33:31 compute-0 clever_black[291930]:     ],
Dec 03 01:33:31 compute-0 clever_black[291930]:     "2": [
Dec 03 01:33:31 compute-0 clever_black[291930]:         {
Dec 03 01:33:31 compute-0 clever_black[291930]:             "devices": [
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "/dev/loop5"
Dec 03 01:33:31 compute-0 clever_black[291930]:             ],
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_name": "ceph_lv2",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_size": "21470642176",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "name": "ceph_lv2",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "tags": {
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.cluster_name": "ceph",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.crush_device_class": "",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.encrypted": "0",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osd_id": "2",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.type": "block",
Dec 03 01:33:31 compute-0 clever_black[291930]:                 "ceph.vdo": "0"
Dec 03 01:33:31 compute-0 clever_black[291930]:             },
Dec 03 01:33:31 compute-0 clever_black[291930]:             "type": "block",
Dec 03 01:33:31 compute-0 clever_black[291930]:             "vg_name": "ceph_vg2"
Dec 03 01:33:31 compute-0 clever_black[291930]:         }
Dec 03 01:33:31 compute-0 clever_black[291930]:     ]
Dec 03 01:33:31 compute-0 clever_black[291930]: }
Dec 03 01:33:31 compute-0 systemd[1]: libpod-8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798.scope: Deactivated successfully.
Dec 03 01:33:31 compute-0 podman[291877]: 2025-12-03 01:33:31.507617989 +0000 UTC m=+1.128336590 container died 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f-merged.mount: Deactivated successfully.
Dec 03 01:33:31 compute-0 podman[291877]: 2025-12-03 01:33:31.606806507 +0000 UTC m=+1.227525088 container remove 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:33:31 compute-0 systemd[1]: libpod-conmon-8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798.scope: Deactivated successfully.
Dec 03 01:33:31 compute-0 sudo[291614]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:31 compute-0 sudo[292103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:31 compute-0 sudo[292103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:31 compute-0 sudo[292103]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:31 compute-0 sudo[292151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:33:31 compute-0 sudo[292151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:31 compute-0 sudo[292151]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:32 compute-0 sudo[292205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:32 compute-0 sudo[292205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:32 compute-0 sudo[292205]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:32 compute-0 sudo[292259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjtwpnealofjhxzumnpqyvpitaoycbsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725611.5761895-116-251371009892118/AnsiballZ_file.py'
Dec 03 01:33:32 compute-0 sudo[292259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:32 compute-0 sudo[292267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:33:32 compute-0 sudo[292267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:32 compute-0 sshd-session[292199]: Invalid user userroot from 34.66.72.251 port 37256
Dec 03 01:33:32 compute-0 podman[292253]: 2025-12-03 01:33:32.156114162 +0000 UTC m=+0.121595830 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:33:32 compute-0 sshd-session[292199]: Received disconnect from 34.66.72.251 port 37256:11: Bye Bye [preauth]
Dec 03 01:33:32 compute-0 sshd-session[292199]: Disconnected from invalid user userroot 34.66.72.251 port 37256 [preauth]
Dec 03 01:33:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:32 compute-0 python3.9[292271]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:32 compute-0 sudo[292259]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.684926797 +0000 UTC m=+0.086444851 container create 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.656870981 +0000 UTC m=+0.058389035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:33:32 compute-0 systemd[1]: Started libpod-conmon-6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742.scope.
Dec 03 01:33:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.835098466 +0000 UTC m=+0.236616550 container init 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.854058904 +0000 UTC m=+0.255576938 container start 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.861381484 +0000 UTC m=+0.262899578 container attach 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:33:32 compute-0 angry_aryabhata[292438]: 167 167
Dec 03 01:33:32 compute-0 systemd[1]: libpod-6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742.scope: Deactivated successfully.
Dec 03 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.868283842 +0000 UTC m=+0.269801856 container died 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b0fa54bfb6c60774ed73bf47b57c47cde1795d5f83e701aa2b47d6a499571f0-merged.mount: Deactivated successfully.
Dec 03 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.945765177 +0000 UTC m=+0.347283221 container remove 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 01:33:32 compute-0 systemd[1]: libpod-conmon-6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742.scope: Deactivated successfully.
Dec 03 01:33:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.180334801 +0000 UTC m=+0.077332272 container create 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.14916839 +0000 UTC m=+0.046165861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:33:33 compute-0 systemd[1]: Started libpod-conmon-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope.
Dec 03 01:33:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.367395337 +0000 UTC m=+0.264392808 container init 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.401282812 +0000 UTC m=+0.298280283 container start 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.408609592 +0000 UTC m=+0.305607103 container attach 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:33:33 compute-0 sudo[292558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyihlkaxoqkuvkvzdacipnuhwzdreoqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725612.6170774-116-234294718116496/AnsiballZ_file.py'
Dec 03 01:33:33 compute-0 sudo[292558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:34 compute-0 ceph-mon[192821]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:34 compute-0 python3.9[292560]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:34 compute-0 sudo[292558]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]: {
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "osd_id": 2,
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "type": "bluestore"
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:     },
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "osd_id": 1,
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "type": "bluestore"
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:     },
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "osd_id": 0,
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:         "type": "bluestore"
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]:     }
Dec 03 01:33:34 compute-0 nostalgic_aryabhata[292480]: }
Dec 03 01:33:34 compute-0 systemd[1]: libpod-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope: Deactivated successfully.
Dec 03 01:33:34 compute-0 systemd[1]: libpod-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope: Consumed 1.202s CPU time.
Dec 03 01:33:34 compute-0 podman[292636]: 2025-12-03 01:33:34.670038806 +0000 UTC m=+0.044432594 container died 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd-merged.mount: Deactivated successfully.
Dec 03 01:33:34 compute-0 podman[292636]: 2025-12-03 01:33:34.758367287 +0000 UTC m=+0.132761005 container remove 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:33:34 compute-0 systemd[1]: libpod-conmon-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope: Deactivated successfully.
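The died / overlay-unmount / remove / conmon-scope-deactivated sequence is the normal teardown of a short-lived --rm container: the process exits, systemd reaps the libpod scope, and podman deletes the container together with its overlay mount. A sketch that produces the same journal trail; the image tag and command are assumptions, not taken from the log:

    import subprocess

    # --rm makes podman emit "container died" followed immediately by
    # "container remove", exactly as logged above.
    subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", "true"],
        check=False,
    )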
Dec 03 01:33:34 compute-0 sudo[292267]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:33:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:33:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
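These two mon_command calls are the cephadm mgr module persisting the device inventory it just gathered for compute-0 into the monitors' config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). The stored value can be read back with the ceph CLI; a minimal sketch, assuming a working client keyring and that the value is the JSON blob current cephadm writes:

    import json
    import subprocess

    # Key name taken from the log line above; parse defensively, since the
    # value format is a cephadm implementation detail.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))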
Dec 03 01:33:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4b512dc1-e29c-4c24-abee-69f8cdecccb8 does not exist
Dec 03 01:33:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 82b379cb-fba0-4778-99e7-c2759aae589f does not exist
Dec 03 01:33:34 compute-0 sudo[292700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:33:34 compute-0 sudo[292700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:34 compute-0 sudo[292700]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:35 compute-0 sudo[292749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:33:35 compute-0 sudo[292749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:33:35 compute-0 sudo[292749]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:35 compute-0 sudo[292800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adpmwtczqaopfppzvumituvmmukrrhbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725614.5447996-116-100326067900859/AnsiballZ_file.py'
Dec 03 01:33:35 compute-0 sudo[292800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:33:35 compute-0 ceph-mon[192821]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:35 compute-0 python3.9[292802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:35 compute-0 sudo[292800]: pam_unix(sudo:session): session closed for user root
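The zuul sudo entries around each module run show Ansible's become handshake: every task is wrapped as /bin/sh -c 'echo BECOME-SUCCESS-<marker> ; python3.9 .../AnsiballZ_file.py', and the controller waits for the 32-character lowercase marker on stdout before trusting that privilege escalation succeeded. A sketch of building such a wrapper; the marker length and alphabet mirror the logged value, the tmpdir is abbreviated and everything else is illustrative:

    import secrets
    import string

    # One random 32-char lowercase marker per task, as in the
    # BECOME-SUCCESS-adpmwtczqaopfppzvumituvmmukrrhbe line above.
    marker = "".join(secrets.choice(string.ascii_lowercase) for _ in range(32))
    wrapped = (
        f"/bin/sh -c 'echo BECOME-SUCCESS-{marker} ; "
        "/usr/bin/python3.9 /home/zuul/.ansible/tmp/<tmpdir>/AnsiballZ_file.py'"
    )
    print(wrapped)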
Dec 03 01:33:36 compute-0 sudo[292952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjbowhowuqwwglanscflqnjfyjalsysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725616.145545-116-267728017057115/AnsiballZ_file.py'
Dec 03 01:33:36 compute-0 sudo[292952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:36 compute-0 python3.9[292954]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:36 compute-0 sudo[292952]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:37 compute-0 sudo[293104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkexqzmdcrpfsbruppxdtbivehcgjjve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725617.295596-166-244092004730561/AnsiballZ_file.py'
Dec 03 01:33:37 compute-0 sudo[293104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
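The autoscaler numbers above are internally consistent: every pool's raw pg target equals capacity_ratio x bias x 300, and 300 matches mon_target_pg_per_osd (default 100) times the three OSDs listed earlier in this journal; the raw target is then quantized toward a power of two, which is why 0.002 becomes 1 and the over-provisioned cephfs metadata pool is steered from 32 toward 16. A check against two of the logged pools; treat the formula as a simplified model of the upstream pg_autoscaler, not the full implementation:

    # Reproduce the pg targets logged by the autoscaler.
    TARGET_PG_PER_OSD = 100   # mon_target_pg_per_osd default (assumed)
    NUM_OSDS = 3              # osd.0/1/2 from the inventory above

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr -> 0.0021557249951162337
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs meta -> 0.0006104707950771635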
Dec 03 01:33:38 compute-0 python3.9[293106]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:38 compute-0 sudo[293104]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:38 compute-0 ceph-mon[192821]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:38 compute-0 sudo[293256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grvivgllpbgguwpdtojjlkedsirbehjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725618.3311656-166-226303126444425/AnsiballZ_file.py'
Dec 03 01:33:38 compute-0 sudo[293256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:39 compute-0 python3.9[293258]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:39 compute-0 sudo[293256]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:39 compute-0 sshd-session[293259]: Received disconnect from 173.249.50.59 port 59366:11: Bye Bye [preauth]
Dec 03 01:33:39 compute-0 sshd-session[293259]: Disconnected from authenticating user root 173.249.50.59 port 59366 [preauth]
Dec 03 01:33:39 compute-0 sudo[293410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncncbfytijdnkojkljmobotslhvkdhdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725619.3818161-166-98496959837837/AnsiballZ_file.py'
Dec 03 01:33:39 compute-0 sudo[293410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:40 compute-0 python3.9[293412]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:40 compute-0 sudo[293410]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:40 compute-0 ceph-mon[192821]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:40 compute-0 sudo[293564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rohsyhraabrfvcuzcdgkpfhkqdojziod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725620.4363391-166-14742337355244/AnsiballZ_file.py'
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.972 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.973 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
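This pair of DEBUG lines means the [pollsters] source defines more pollsters than the single worker thread allotted to execute them, so the executor serializes the cycle. A toy illustration of that back-pressure; the function and pollster list are illustrative, not ceilometer's API:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return f"polled {name}"

    pollsters = ["disk.device.capacity", "memory.usage", "cpu"]
    # With max_workers=1 the tasks queue behind each other, which is why
    # the cycle can take longer than the polling interval.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for result in pool.map(poll, pollsters):
            print(result)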
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 sudo[293564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:33:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:41 compute-0 python3.9[293567]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:41 compute-0 sudo[293564]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:41 compute-0 sshd-session[293537]: Invalid user usuario2 from 146.190.144.138 port 37082
Dec 03 01:33:41 compute-0 sshd-session[293537]: Received disconnect from 146.190.144.138 port 37082:11: Bye Bye [preauth]
Dec 03 01:33:41 compute-0 sshd-session[293537]: Disconnected from invalid user usuario2 146.190.144.138 port 37082 [preauth]
Dec 03 01:33:41 compute-0 sudo[293717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msohtcbqecprpcercfvwtxbozgwotaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725621.4433227-166-124823228444768/AnsiballZ_file.py'
Dec 03 01:33:42 compute-0 sudo[293717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:42 compute-0 ceph-mon[192821]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:42 compute-0 python3.9[293719]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:42 compute-0 sudo[293717]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:43 compute-0 sudo[293869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuhsxuxpxckcfuulucvvflpaokmyvfen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725622.5610254-166-157198126155736/AnsiballZ_file.py'
Dec 03 01:33:43 compute-0 sudo[293869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:43 compute-0 python3.9[293871]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:43 compute-0 sudo[293869]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:44 compute-0 sudo[294021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suyxlpgzwekwscwepeuoofaltelrcyls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725623.6501365-166-70216553468069/AnsiballZ_file.py'
Dec 03 01:33:44 compute-0 sudo[294021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:44 compute-0 ceph-mon[192821]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:44 compute-0 python3.9[294023]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:33:44 compute-0 sudo[294021]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:45 compute-0 sudo[294173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owuueyvfncrtlvnrmrsfwfagqpcakruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725624.8594155-217-160160374701991/AnsiballZ_command.py'
Dec 03 01:33:45 compute-0 sudo[294173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:45 compute-0 python3.9[294175]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:45 compute-0 sudo[294173]: pam_unix(sudo:session): session closed for user root
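The ansible.legacy.command task recorded just above runs a small shell guard: disable certmonger only if it is currently active, then mask it unless a local unit file already exists under /etc/systemd/system. A rough Python equivalent of that guard, assuming systemctl is on PATH (a sketch, not the playbook's actual implementation):

```python
import os
import subprocess

def systemctl(*args):
    return subprocess.run(["systemctl", *args], check=False)

# Only act when the unit is active (is-active exits 0), as in the shell guard.
if systemctl("is-active", "certmonger.service").returncode == 0:
    systemctl("disable", "--now", "certmonger.service")
    # Mask only when no local unit file exists, matching
    # `test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service`
    if not os.path.isfile("/etc/systemd/system/certmonger.service"):
        systemctl("mask", "certmonger.service")
```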
Dec 03 01:33:46 compute-0 ceph-mon[192821]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:46 compute-0 python3.9[294327]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:33:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:47 compute-0 ceph-mon[192821]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:48 compute-0 sudo[294543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpbzjanbfzvmjzmunhnofvsdntldaqqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725628.3179638-235-137456328588945/AnsiballZ_systemd_service.py'
Dec 03 01:33:48 compute-0 sudo[294543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:48 compute-0 podman[294452]: 2025-12-03 01:33:48.875778275 +0000 UTC m=+0.108503903 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4)
Dec 03 01:33:48 compute-0 podman[294451]: 2025-12-03 01:33:48.888602825 +0000 UTC m=+0.122170866 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 01:33:48 compute-0 podman[294447]: 2025-12-03 01:33:48.888633966 +0000 UTC m=+0.127537163 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:33:48 compute-0 podman[294454]: 2025-12-03 01:33:48.908628122 +0000 UTC m=+0.125707573 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 03 01:33:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:49 compute-0 python3.9[294558]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:33:49 compute-0 systemd[1]: Reloading.
Dec 03 01:33:49 compute-0 systemd-rc-local-generator[294589]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:33:49 compute-0 systemd-sysv-generator[294594]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:33:49 compute-0 sudo[294543]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:50 compute-0 ceph-mon[192821]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:50 compute-0 podman[294704]: 2025-12-03 01:33:50.895151048 +0000 UTC m=+0.144913547 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 01:33:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:51 compute-0 sudo[294773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytulzytlgxrvsxnltbniozjrhzlcqzlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725629.99678-243-157217864045099/AnsiballZ_command.py'
Dec 03 01:33:51 compute-0 sudo[294773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:51 compute-0 python3.9[294775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:51 compute-0 sudo[294773]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:51 compute-0 sshd-session[294639]: Received disconnect from 103.146.202.174 port 41804:11: Bye Bye [preauth]
Dec 03 01:33:51 compute-0 sshd-session[294639]: Disconnected from authenticating user root 103.146.202.174 port 41804 [preauth]
Dec 03 01:33:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:52 compute-0 ceph-mon[192821]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:52 compute-0 sudo[294926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pahspejfzdgttpyjgicwdcdtgoapejbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725631.7506433-243-133788379877595/AnsiballZ_command.py'
Dec 03 01:33:52 compute-0 sudo[294926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:52 compute-0 python3.9[294928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:52 compute-0 sudo[294926]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:53 compute-0 sudo[295079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsvlqwqljvltvzhcxulvemwopdpqxbzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725632.8312402-243-68134529100661/AnsiballZ_command.py'
Dec 03 01:33:53 compute-0 sudo[295079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:53 compute-0 python3.9[295081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:53 compute-0 sudo[295079]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:54 compute-0 ceph-mon[192821]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:54 compute-0 sudo[295232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbndocczbzawbseekktogjnwmthuamee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725633.9224644-243-62998509087232/AnsiballZ_command.py'
Dec 03 01:33:54 compute-0 sudo[295232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:54 compute-0 python3.9[295234]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:54 compute-0 sudo[295232]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:55 compute-0 sudo[295385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfvlbttdtgdlirkjllyvmjxjvdhiqpez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725635.0002651-243-32679215469185/AnsiballZ_command.py'
Dec 03 01:33:55 compute-0 sudo[295385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:55 compute-0 python3.9[295387]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:55 compute-0 sudo[295385]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:56 compute-0 ceph-mon[192821]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:56 compute-0 sudo[295538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyuqbhmftoqyadygwbsxdqnqaalvhmyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725636.0429955-243-198447322127381/AnsiballZ_command.py'
Dec 03 01:33:56 compute-0 sudo[295538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:56 compute-0 python3.9[295540]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:56 compute-0 sudo[295538]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:33:57 compute-0 sudo[295703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymolktgrweloyvcjjddoqkefgjpwdwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725637.1623788-243-72340396400076/AnsiballZ_command.py'
Dec 03 01:33:57 compute-0 sudo[295703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:57 compute-0 podman[295665]: 2025-12-03 01:33:57.879158493 +0000 UTC m=+0.154191780 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, name=ubi9)
Dec 03 01:33:58 compute-0 python3.9[295709]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:33:58 compute-0 sudo[295703]: pam_unix(sudo:session): session closed for user root
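Between 01:33:51 and 01:33:58 the run issues one `systemctl reset-failed` per legacy TripleO libvirt unit, each wrapped in its own sudo session. The same sweep condenses to a short loop; a sketch assuming root privileges, with the unit list copied from the recorded commands:

```python
import subprocess

# Unit names taken verbatim from the ansible.legacy.command invocations above.
LEGACY_UNITS = [
    "tripleo_nova_libvirt.target",
    "tripleo_nova_virtlogd_wrapper.service",
    "tripleo_nova_virtnodedevd.service",
    "tripleo_nova_virtproxyd.service",
    "tripleo_nova_virtqemud.service",
    "tripleo_nova_virtsecretd.service",
    "tripleo_nova_virtstoraged.service",
]

for unit in LEGACY_UNITS:
    # check=False: reset-failed exits non-zero for units systemd no longer knows about.
    subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)
```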
Dec 03 01:33:58 compute-0 ceph-mon[192821]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:33:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:33:59 compute-0 podman[295835]: 2025-12-03 01:33:59.444300377 +0000 UTC m=+0.106548569 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 01:33:59 compute-0 sudo[295879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjyulihkfgnuetewegvfatdvuyctdsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725638.736007-297-57307175662063/AnsiballZ_getent.py'
Dec 03 01:33:59 compute-0 sudo[295879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:59.589 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:59.590 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:59.590 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:33:59 compute-0 python3.9[295881]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 03 01:33:59 compute-0 sudo[295879]: pam_unix(sudo:session): session closed for user root
Dec 03 01:33:59 compute-0 podman[158098]: time="2025-12-03T01:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7275 "" "Go-http-client/1.1"
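The podman[158098] entries are the podman service API answering requests over its unix socket; the access-log line shows the libpod endpoint `GET /v4.9.3/libpod/containers/json?all=true...`. The same endpoint can be queried from Python's standard library by dialing the socket directly; a minimal sketch, assuming the /run/podman/podman.sock path that appears elsewhere in this log:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials a unix socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")  # host is only used for the Host: header
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for ctr in json.loads(body):
    print(ctr.get("Names"), ctr.get("State"))
```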
Dec 03 01:34:00 compute-0 ceph-mon[192821]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:00 compute-0 sudo[296032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efmklbenimdlbovmaksbajysxvxspwkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725640.261397-310-151977862062109/AnsiballZ_setup.py'
Dec 03 01:34:00 compute-0 sudo[296032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:01 compute-0 python3.9[296034]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:34:01 compute-0 ceph-mon[192821]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:34:01 compute-0 sudo[296032]: pam_unix(sudo:session): session closed for user root
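The openstack_network_exporter errors above are expected on a compute node: the exporter tries to reach ovn-northd and the OVS database server through their control sockets, and neither daemon runs here. A quick probe for those sockets (the glob patterns below use the conventional OVN/OVS run directories and are an assumption, not paths taken from this log):

```python
import glob

# Conventional *.ctl control-socket locations for OVN/OVS daemons; adjust per deployment.
PATTERNS = [
    "/var/run/ovn/ovn-northd.*.ctl",
    "/var/run/openvswitch/ovsdb-server.*.ctl",
]

for pattern in PATTERNS:
    hits = glob.glob(pattern)
    print(pattern, "->", hits if hits else "no control socket files found")
```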
Dec 03 01:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:02 compute-0 sudo[296130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdztssvzapwnagfujevovaluxgvhxly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725640.261397-310-151977862062109/AnsiballZ_dnf.py'
Dec 03 01:34:02 compute-0 sudo[296130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:02 compute-0 podman[296090]: 2025-12-03 01:34:02.487253982 +0000 UTC m=+0.135913621 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:34:02 compute-0 python3.9[296141]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
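The recorded ansible.legacy.dnf invocation installs the libvirt/qemu/ceph client stack (note that some recorded names carry trailing spaces, e.g. 'libvirt '). Outside Ansible the same set installs in one dnf transaction; a sketch with the names whitespace-stripped, assuming root:

```python
import subprocess

# Package list copied from the recorded dnf invocation, trailing spaces stripped.
PACKAGES = [
    "libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
    "qemu-kvm", "qemu-img", "libguestfs", "libseccomp",
    "swtpm", "swtpm-tools", "edk2-ovmf", "ceph-common", "cyrus-sasl-scram",
]

subprocess.run(["dnf", "install", "-y", *PACKAGES], check=True)
```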
Dec 03 01:34:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:04 compute-0 sudo[296130]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:04 compute-0 ceph-mon[192821]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:05 compute-0 sudo[296292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uorzccawmmociffsuyxjaxehwrsbbmyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725644.5867476-322-248787460094225/AnsiballZ_systemd.py'
Dec 03 01:34:05 compute-0 sudo[296292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:05 compute-0 python3.9[296294]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:34:05 compute-0 sudo[296292]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:06 compute-0 ceph-mon[192821]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:06 compute-0 sudo[296447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdallkjbyybayuvskouiqgkqsdswlbpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725646.2087839-322-126236178006675/AnsiballZ_systemd.py'
Dec 03 01:34:06 compute-0 sudo[296447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:07 compute-0 python3.9[296449]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:34:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:07 compute-0 sudo[296447]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:07 compute-0 sudo[296602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjmluzltokofcpqyorgborpmjwouaxlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725647.4460607-322-32613588798625/AnsiballZ_systemd.py'
Dec 03 01:34:07 compute-0 sudo[296602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:08 compute-0 ceph-mon[192821]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:08 compute-0 python3.9[296604]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:34:08 compute-0 sudo[296602]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:09 compute-0 sudo[296757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzrewdojksjaxdymigcnuacdfxahwzbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725648.6710143-322-231092422507357/AnsiballZ_systemd.py'
Dec 03 01:34:09 compute-0 sudo[296757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:09 compute-0 python3.9[296759]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:34:09 compute-0 sudo[296757]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:10 compute-0 ceph-mon[192821]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:10 compute-0 sshd-session[296761]: Received disconnect from 80.253.31.232 port 45910:11: Bye Bye [preauth]
Dec 03 01:34:10 compute-0 sshd-session[296761]: Disconnected from authenticating user root 80.253.31.232 port 45910 [preauth]
Dec 03 01:34:10 compute-0 sudo[296914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajsayryuuwouxekxnwqmiyokmtkfwhbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725649.9366846-351-76556099427129/AnsiballZ_systemd.py'
Dec 03 01:34:10 compute-0 sudo[296914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:10 compute-0 python3.9[296916]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:11 compute-0 sudo[296914]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:11 compute-0 sudo[297069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjpcjnbeyabqiabyjpztdfgevvcpbeai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725651.303773-351-191843286165181/AnsiballZ_systemd.py'
Dec 03 01:34:11 compute-0 sudo[297069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:12 compute-0 python3.9[297071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:12 compute-0 ceph-mon[192821]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:12 compute-0 sudo[297069]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:13 compute-0 sudo[297224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osbaeddsomxwoikvywkjenrslfocurcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725652.6769147-351-228661861502241/AnsiballZ_systemd.py'
Dec 03 01:34:13 compute-0 sudo[297224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:13 compute-0 ceph-mon[192821]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:13 compute-0 python3.9[297226]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:13 compute-0 sudo[297224]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:14 compute-0 sudo[297379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rajlxpvsuptwsvvoefrrjzoyvfaxbrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725653.9631355-351-155990358945003/AnsiballZ_systemd.py'
Dec 03 01:34:14 compute-0 sudo[297379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:14 compute-0 python3.9[297381]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:15 compute-0 sudo[297379]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:15 compute-0 sudo[297534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bambcamqlcknoikrqueodaknbjofnmmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725655.280728-351-134225124500850/AnsiballZ_systemd.py'
Dec 03 01:34:15 compute-0 sudo[297534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:16 compute-0 python3.9[297536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:16 compute-0 ceph-mon[192821]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:16 compute-0 sudo[297534]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:17 compute-0 sudo[297689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkcwuqrbytsqxwgdpzghonspmzlvzubv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725656.684633-387-213272518734198/AnsiballZ_systemd.py'
Dec 03 01:34:17 compute-0 sudo[297689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:17 compute-0 python3.9[297691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 03 01:34:17 compute-0 sudo[297689]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:18 compute-0 ceph-mon[192821]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:18 compute-0 sudo[297844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqptcxplyrluwhlwpmtkoheynaqxddnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725658.0306656-395-6626812089398/AnsiballZ_systemd.py'
Dec 03 01:34:18 compute-0 sudo[297844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:19 compute-0 python3.9[297846]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:19 compute-0 sudo[297844]: pam_unix(sudo:session): session closed for user root
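The systemd tasks from 01:34:05 onward swap the monolithic libvirtd for the modular daemons: libvirtd.service and its tcp/tls sockets are stopped and masked, virtproxyd-tcp.socket is masked, and the per-driver units are enabled (virtproxyd-tls.socket is additionally started). Condensed into one sketch, with the unit lists copied from the invocations above and root assumed:

```python
import subprocess

def systemctl(*args):
    subprocess.run(["systemctl", *args], check=False)

# Stopped and masked by the run above.
LEGACY = [
    "libvirtd.service", "libvirtd-tcp.socket",
    "libvirtd-tls.socket", "virtproxyd-tcp.socket",
]
# Enabled by the run above; virtproxyd-tls.socket was also started (state=started).
MODULAR = [
    "virtlogd.service", "virtnodedevd.service", "virtproxyd.service",
    "virtqemud.service", "virtsecretd.service",
    "virtproxyd-tls.socket", "virtlogd.socket",
]

for unit in LEGACY:
    systemctl("stop", unit)
    systemctl("mask", unit)

for unit in MODULAR:
    systemctl("enable", unit)
systemctl("start", "virtproxyd-tls.socket")
```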
Dec 03 01:34:19 compute-0 podman[297848]: 2025-12-03 01:34:19.229653965 +0000 UTC m=+0.103209088 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:34:19 compute-0 podman[297849]: 2025-12-03 01:34:19.249796665 +0000 UTC m=+0.122775643 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Dec 03 01:34:19 compute-0 podman[297851]: 2025-12-03 01:34:19.271694603 +0000 UTC m=+0.141360600 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:34:19 compute-0 podman[297850]: 2025-12-03 01:34:19.289743315 +0000 UTC m=+0.159840024 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
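The podman health_status events above are emitted each time a container's scheduled healthcheck runs; the same state can be read back from libpod's inspect data. A sketch, assuming the container names from this log and a podman 4.x template field of .State.Health:

    import json
    import subprocess

    def health_status(name: str) -> str:
        # Reads back the healthcheck state recorded by events like the
        # ones above (Status, FailingStreak, Log).
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["Status"]  # e.g. "healthy"

    for name in ("node_exporter", "ovn_controller", "ceilometer_agent_compute"):
        print(name, health_status(name))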
Dec 03 01:34:19 compute-0 sudo[298083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrftkmegwveecuzkwcctwkmdvuqkjatb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725659.4298697-395-268693911517942/AnsiballZ_systemd.py'
Dec 03 01:34:20 compute-0 sudo[298083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:20 compute-0 ceph-mon[192821]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:20 compute-0 python3.9[298085]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:20 compute-0 sudo[298083]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:21 compute-0 sudo[298254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fflomwjwwyvmrgoybmvgrmxtysgttixt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725660.7278135-395-86862720563092/AnsiballZ_systemd.py'
Dec 03 01:34:21 compute-0 sudo[298254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:21 compute-0 podman[298212]: 2025-12-03 01:34:21.353807659 +0000 UTC m=+0.114334252 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:34:21 compute-0 python3.9[298260]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:21 compute-0 sudo[298254]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:22 compute-0 ceph-mon[192821]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:22 compute-0 sudo[298413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgoitiwtuaabbhyndpzmpjugwwtjvuef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725662.0915442-395-237535959767629/AnsiballZ_systemd.py'
Dec 03 01:34:22 compute-0 sudo[298413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:23 compute-0 python3.9[298415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:23 compute-0 sudo[298413]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:24 compute-0 sudo[298568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sywfjwhpqlxkzhkeszimnspwvdknplcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725663.4656363-395-3668805616490/AnsiballZ_systemd.py'
Dec 03 01:34:24 compute-0 sudo[298568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:24 compute-0 ceph-mon[192821]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:24 compute-0 python3.9[298570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:24 compute-0 sudo[298568]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:25 compute-0 ceph-mon[192821]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
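The pgmap lines from ceph-mon and ceph-mgr repeat every second or two with the same cluster summary. A small parser written against the exact shape seen here:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP.search(line)
    assert m is not None
    print(m["ver"], m["pgs"], m["states"], m["used"], m["total"])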
Dec 03 01:34:25 compute-0 sudo[298723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vanilzbpriodvtpscbpeaubwwjdetcmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725664.8213718-395-92179119356194/AnsiballZ_systemd.py'
Dec 03 01:34:25 compute-0 sudo[298723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:25 compute-0 python3.9[298725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:25 compute-0 sudo[298723]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:26 compute-0 sudo[298878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oninmuznfloqbudplzvmrqzkcxnraypq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725666.1666768-395-225034282645358/AnsiballZ_systemd.py'
Dec 03 01:34:26 compute-0 sudo[298878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:27 compute-0 python3.9[298880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:27 compute-0 sudo[298878]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:28 compute-0 ceph-mon[192821]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:34:28
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'backups', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta']
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:34:28 compute-0 podman[298960]: 2025-12-03 01:34:28.407159597 +0000 UTC m=+0.133005112 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, config_id=edpm, io.buildah.version=1.29.0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:34:28 compute-0 sudo[299052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuchszfsvkojlrhniitectrrlqhxadny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725667.473051-395-90918106625183/AnsiballZ_systemd.py'
Dec 03 01:34:28 compute-0 sudo[299052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:29 compute-0 python3.9[299054]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:29 compute-0 sudo[299052]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:29 compute-0 podman[158098]: time="2025-12-03T01:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:34:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:34:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7284 "" "Go-http-client/1.1"
Dec 03 01:34:29 compute-0 podman[299157]: 2025-12-03 01:34:29.904060368 +0000 UTC m=+0.143350814 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:34:30 compute-0 sudo[299225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjgclvusxltzuklwijybkxvlgpbwaod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725669.4706817-395-133459216792464/AnsiballZ_systemd.py'
Dec 03 01:34:30 compute-0 sudo[299225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:30 compute-0 ceph-mon[192821]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:30 compute-0 python3.9[299227]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:30 compute-0 sudo[299225]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
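The exporter errors above come from it probing for daemon control sockets that do not exist on this node (ovn-northd belongs on controllers, and the "please specify an existing datapath" failures suggest no userspace datapath is configured here). A sketch of the underlying existence check, assuming the default OVS rundir:

    import glob
    import os
    from typing import Optional

    RUNDIR = "/var/run/openvswitch"  # default rundir, as mounted into the exporter

    def control_socket(daemon: str) -> Optional[str]:
        # OVS/OVN daemons create <rundir>/<name>.<pid>.ctl at startup;
        # appctl-style clients glob for that file before connecting.
        hits = glob.glob(os.path.join(RUNDIR, daemon + ".*.ctl"))
        return hits[0] if hits else None

    for name in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        print(name, control_socket(name) or "no control socket files found")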
Dec 03 01:34:32 compute-0 sudo[299380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbuiqrhqmmrcsiallvngdcpkllrcyvop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725670.755823-395-221440950693305/AnsiballZ_systemd.py'
Dec 03 01:34:32 compute-0 sudo[299380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:32 compute-0 ceph-mon[192821]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:32 compute-0 python3.9[299382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:32 compute-0 sudo[299380]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:32 compute-0 podman[299384]: 2025-12-03 01:34:32.78499693 +0000 UTC m=+0.161996573 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:34:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:33 compute-0 sudo[299561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwychdfflldlebuucsyyspotfuywroeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725672.9908593-395-105090683637648/AnsiballZ_systemd.py'
Dec 03 01:34:33 compute-0 sudo[299561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:33 compute-0 python3.9[299563]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:33 compute-0 sudo[299561]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:34 compute-0 ceph-mon[192821]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:34 compute-0 sudo[299716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpuutnjbatnbqtazpxqlckfcggsoudp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725674.2836053-395-242206002054907/AnsiballZ_systemd.py'
Dec 03 01:34:34 compute-0 sudo[299716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:35 compute-0 python3.9[299718]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:35 compute-0 sudo[299719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:35 compute-0 sudo[299719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:35 compute-0 sudo[299719]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:35 compute-0 sudo[299716]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:35 compute-0 sudo[299747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:34:35 compute-0 sudo[299747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:35 compute-0 sudo[299747]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:35 compute-0 sudo[299791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:35 compute-0 sudo[299791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:35 compute-0 sudo[299791]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:35 compute-0 sudo[299835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:34:35 compute-0 sudo[299835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:35 compute-0 sshd-session[299486]: Invalid user cc from 45.78.219.140 port 43370
Dec 03 01:34:35 compute-0 sshd-session[299486]: Received disconnect from 45.78.219.140 port 43370:11: Bye Bye [preauth]
Dec 03 01:34:35 compute-0 sshd-session[299486]: Disconnected from invalid user cc 45.78.219.140 port 43370 [preauth]
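The sshd-session lines are unrelated background noise: internet-wide SSH scanning hitting the node's public address with guessed usernames. An extractor for such probes, matched to the journal format above:

    import re

    INVALID = re.compile(
        r"Invalid user (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+)"
    )

    for line in (
        "sshd-session[299486]: Invalid user cc from 45.78.219.140 port 43370",
        "sshd-session[300181]: Invalid user sonarqube from 34.66.72.251 port 48894",
    ):
        m = INVALID.search(line)
        if m:
            print(m["user"], m["ip"], m["port"])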
Dec 03 01:34:36 compute-0 sudo[299986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtuotaiztfukrmknairsimkcbxoighmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725675.5202498-395-152084076919844/AnsiballZ_systemd.py'
Dec 03 01:34:36 compute-0 sudo[299986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:36 compute-0 sudo[299835]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:36 compute-0 ceph-mon[192821]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:34:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 11954f0f-1c9f-4ffd-adbc-f97673661b8b does not exist
Dec 03 01:34:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 462f97c5-4003-41b1-b4ab-0f4643b841ae does not exist
Dec 03 01:34:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 691d94a4-5de2-4922-8f7c-716ed0f8517e does not exist
Dec 03 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
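The handle_command/audit pairs show the mgr (mgr.compute-0.rysove) driving the monitor's command interface: config generate-minimal-conf, auth get, osd tree. The same interface is reachable from the Python rados bindings; a sketch, assuming a readable ceph.conf and client.admin keyring:

    import json
    import rados  # python3-rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        # Same wire command the audit log shows being dispatched above.
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"", timeout=10
        )
        print(ret, outbuf.decode())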
Dec 03 01:34:36 compute-0 python3.9[299990]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:36 compute-0 sudo[300003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:36 compute-0 sudo[300003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:36 compute-0 sudo[300003]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:36 compute-0 sudo[299986]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:36 compute-0 sudo[300031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:34:36 compute-0 sudo[300031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:36 compute-0 sudo[300031]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:36 compute-0 sudo[300077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:36 compute-0 sudo[300077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:36 compute-0 sudo[300077]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:36 compute-0 sudo[300128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:34:36 compute-0 sudo[300128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:37 compute-0 sshd-session[300181]: Invalid user sonarqube from 34.66.72.251 port 48894
Dec 03 01:34:37 compute-0 sshd-session[300181]: Received disconnect from 34.66.72.251 port 48894:11: Bye Bye [preauth]
Dec 03 01:34:37 compute-0 sshd-session[300181]: Disconnected from invalid user sonarqube 34.66.72.251 port 48894 [preauth]
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:37 compute-0 sudo[300292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sngvqwjpqgiahmxtuvxeftllndnvjehu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725676.730816-395-269688082181595/AnsiballZ_systemd.py'
Dec 03 01:34:37 compute-0 sudo[300292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:34:37 compute-0 ceph-mon[192821]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.382112159 +0000 UTC m=+0.106110697 container create a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.328761213 +0000 UTC m=+0.052759811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:34:37 compute-0 systemd[1]: Started libpod-conmon-a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2.scope.
Dec 03 01:34:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.525137053 +0000 UTC m=+0.249135571 container init a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.541472219 +0000 UTC m=+0.265470747 container start a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.549002175 +0000 UTC m=+0.273000703 container attach a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:34:37 compute-0 eloquent_ardinghelli[300311]: 167 167
Dec 03 01:34:37 compute-0 systemd[1]: libpod-a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2.scope: Deactivated successfully.
Dec 03 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.554365361 +0000 UTC m=+0.278363889 container died a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:34:37 compute-0 python3.9[300301]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 03 01:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12123f46fa9101f7ebc446b364828b85236da4457f6b9fd7ec64d993b8912b0-merged.mount: Deactivated successfully.
Dec 03 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.636828902 +0000 UTC m=+0.360827440 container remove a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:34:37 compute-0 systemd[1]: libpod-conmon-a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2.scope: Deactivated successfully.
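The create → init → start → attach → died → remove sequence above is the complete lifecycle of a short-lived cephadm helper container: it printed "167 167" (presumably the ceph UID/GID pair) and was cleaned up immediately. A roughly equivalent one-shot run, sketched with subprocess and the image digest from the log (the entrypoint override is an assumption; cephadm's actual invocation differs):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm collapses the create/start/attach/died/remove events seen in
    # the journal into a single invocation.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "id", IMAGE, "-u", "ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expected "167" for the ceph user in this image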
Dec 03 01:34:37 compute-0 sudo[300292]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:37 compute-0 podman[300344]: 2025-12-03 01:34:37.8994143 +0000 UTC m=+0.076633053 container create fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 podman[300344]: 2025-12-03 01:34:37.862425981 +0000 UTC m=+0.039644774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
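The pg_autoscaler targets above are reproducible from the logged numbers themselves: each pg target equals usage_ratio × bias × 300, where the factor 300 is inferred from these lines (plausibly mon_target_pg_per_osd = 100 times the 3 OSDs being deployed, though that reading is an assumption, not something the log states), before quantization to a power of two. A worked check:

    # Recompute the pg targets from the pg_autoscaler lines above.
    # (ratio, bias, logged pg target) per pool; factor 300 is inferred.
    POOLS = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ".rgw.root":          (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }
    for pool, (ratio, bias, logged) in POOLS.items():
        assert abs(ratio * bias * 300 - logged) < 1e-12, pool
    print("all logged pg targets match ratio * bias * 300")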
Dec 03 01:34:37 compute-0 systemd[1]: Started libpod-conmon-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope.
Dec 03 01:34:38 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
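The kernel's "supports timestamps until 2038 (0x7fffffff)" notices flag XFS filesystems whose inodes carry 32-bit timestamps; 0x7fffffff seconds after the Unix epoch is the classic Y2038 boundary:

    from datetime import datetime, timezone

    # 0x7fffffff from the kernel message is the largest signed 32-bit
    # time_t value.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00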
Dec 03 01:34:38 compute-0 podman[300344]: 2025-12-03 01:34:38.072654749 +0000 UTC m=+0.249873492 container init fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:34:38 compute-0 podman[300344]: 2025-12-03 01:34:38.104593461 +0000 UTC m=+0.281812204 container start fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:34:38 compute-0 podman[300344]: 2025-12-03 01:34:38.111729336 +0000 UTC m=+0.288948079 container attach fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:34:38 compute-0 sudo[300506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpjlpcupkegxxjqhxfqmuyjnemitzlem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725678.1852682-497-252938155849325/AnsiballZ_file.py'
Dec 03 01:34:38 compute-0 sudo[300506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:38 compute-0 python3.9[300508]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:34:38 compute-0 sudo[300506]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:39 compute-0 elastic_buck[300376]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:34:39 compute-0 elastic_buck[300376]: --> relative data size: 1.0
Dec 03 01:34:39 compute-0 elastic_buck[300376]: --> All data devices are unavailable
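ceph-volume's "All data devices are unavailable" here means the three LVs passed to lvm batch are already consumed, typically because they carry ceph-volume tags from an earlier prepare. One way to check is via lvm's own JSON reporting (flag names per lvm2's standard CLI; the ceph.osd_id tag prefix is what ceph-volume writes):

    import json
    import subprocess

    raw = subprocess.run(
        ["lvs", "-o", "lv_path,lv_tags", "--reportformat", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for lv in json.loads(raw)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["lv_path"], "already prepared for an OSD")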
Dec 03 01:34:39 compute-0 systemd[1]: libpod-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope: Deactivated successfully.
Dec 03 01:34:39 compute-0 podman[300344]: 2025-12-03 01:34:39.40501549 +0000 UTC m=+1.582234223 container died fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:34:39 compute-0 systemd[1]: libpod-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope: Consumed 1.246s CPU time.
Dec 03 01:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e-merged.mount: Deactivated successfully.
Dec 03 01:34:39 compute-0 podman[300344]: 2025-12-03 01:34:39.523023 +0000 UTC m=+1.700241733 container remove fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:34:39 compute-0 systemd[1]: libpod-conmon-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope: Deactivated successfully.
Dec 03 01:34:39 compute-0 sudo[300128]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:39 compute-0 sudo[300646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:39 compute-0 sudo[300646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:39 compute-0 sudo[300646]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:39 compute-0 sudo[300740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djoovqkwickreubhxnitkkdievwuxuxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725679.2552185-497-91743376055654/AnsiballZ_file.py'
Dec 03 01:34:39 compute-0 sudo[300696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:34:39 compute-0 sudo[300740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:39 compute-0 sudo[300696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:39 compute-0 sudo[300696]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:39 compute-0 sudo[300747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:39 compute-0 sudo[300747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:39 compute-0 sudo[300747]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:40 compute-0 python3.9[300745]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:34:40 compute-0 sudo[300740]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:40 compute-0 sudo[300772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:34:40 compute-0 sudo[300772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:40 compute-0 ceph-mon[192821]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.635292882 +0000 UTC m=+0.082015699 container create 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.599243438 +0000 UTC m=+0.045966295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:34:40 compute-0 systemd[1]: Started libpod-conmon-037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff.scope.
Dec 03 01:34:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.762148075 +0000 UTC m=+0.208870902 container init 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.775123309 +0000 UTC m=+0.221846096 container start 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.780222909 +0000 UTC m=+0.226945696 container attach 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:34:40 compute-0 eager_wilson[300975]: 167 167
Dec 03 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.787120377 +0000 UTC m=+0.233843174 container died 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec 03 01:34:40 compute-0 systemd[1]: libpod-037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff.scope: Deactivated successfully.
Dec 03 01:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-942cb886e497193f9fcc82ed95e08c32d1ff131da947ba950d1c66326fe45823-merged.mount: Deactivated successfully.
Dec 03 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.862909856 +0000 UTC m=+0.309632673 container remove 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:34:40 compute-0 sudo[301014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tptxeikzdodyrxggrjekeismilccwduc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725680.2897851-497-29460554027944/AnsiballZ_file.py'
Dec 03 01:34:40 compute-0 sudo[301014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:40 compute-0 systemd[1]: libpod-conmon-037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff.scope: Deactivated successfully.
Dec 03 01:34:41 compute-0 python3.9[301019]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:34:41 compute-0 sudo[301014]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.164975111 +0000 UTC m=+0.101912523 container create efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:34:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.12570855 +0000 UTC m=+0.062646062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:34:41 compute-0 systemd[1]: Started libpod-conmon-efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f.scope.
Dec 03 01:34:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.296467951 +0000 UTC m=+0.233405403 container init efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.30814169 +0000 UTC m=+0.245079072 container start efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.312769876 +0000 UTC m=+0.249707308 container attach efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:34:41 compute-0 sudo[301197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbfjmkkmswkubowkkddxixxojwcyxytn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725681.3831737-497-213591722118110/AnsiballZ_file.py'
Dec 03 01:34:41 compute-0 sudo[301197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:42 compute-0 jolly_spence[301064]: {
Dec 03 01:34:42 compute-0 jolly_spence[301064]:     "0": [
Dec 03 01:34:42 compute-0 jolly_spence[301064]:         {
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "devices": [
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "/dev/loop3"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             ],
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_name": "ceph_lv0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_size": "21470642176",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "name": "ceph_lv0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "tags": {
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cluster_name": "ceph",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.crush_device_class": "",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.encrypted": "0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osd_id": "0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.type": "block",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.vdo": "0"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             },
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "type": "block",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "vg_name": "ceph_vg0"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:         }
Dec 03 01:34:42 compute-0 jolly_spence[301064]:     ],
Dec 03 01:34:42 compute-0 jolly_spence[301064]:     "1": [
Dec 03 01:34:42 compute-0 jolly_spence[301064]:         {
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "devices": [
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "/dev/loop4"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             ],
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_name": "ceph_lv1",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_size": "21470642176",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "name": "ceph_lv1",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "tags": {
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cluster_name": "ceph",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.crush_device_class": "",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.encrypted": "0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osd_id": "1",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.type": "block",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.vdo": "0"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             },
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "type": "block",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "vg_name": "ceph_vg1"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:         }
Dec 03 01:34:42 compute-0 jolly_spence[301064]:     ],
Dec 03 01:34:42 compute-0 jolly_spence[301064]:     "2": [
Dec 03 01:34:42 compute-0 jolly_spence[301064]:         {
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "devices": [
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "/dev/loop5"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             ],
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_name": "ceph_lv2",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_size": "21470642176",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "name": "ceph_lv2",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "tags": {
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.cluster_name": "ceph",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.crush_device_class": "",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.encrypted": "0",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osd_id": "2",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.type": "block",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:                 "ceph.vdo": "0"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             },
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "type": "block",
Dec 03 01:34:42 compute-0 jolly_spence[301064]:             "vg_name": "ceph_vg2"
Dec 03 01:34:42 compute-0 jolly_spence[301064]:         }
Dec 03 01:34:42 compute-0 jolly_spence[301064]:     ]
Dec 03 01:34:42 compute-0 jolly_spence[301064]: }
Dec 03 01:34:42 compute-0 systemd[1]: libpod-efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f.scope: Deactivated successfully.
Dec 03 01:34:42 compute-0 podman[301027]: 2025-12-03 01:34:42.184011779 +0000 UTC m=+1.120949181 container died efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:34:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5-merged.mount: Deactivated successfully.
Dec 03 01:34:42 compute-0 python3.9[301199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:34:42 compute-0 ceph-mon[192821]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:42 compute-0 sudo[301197]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:42 compute-0 podman[301027]: 2025-12-03 01:34:42.279938527 +0000 UTC m=+1.216875929 container remove efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:34:42 compute-0 systemd[1]: libpod-conmon-efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f.scope: Deactivated successfully.
Dec 03 01:34:42 compute-0 sudo[300772]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:42 compute-0 sudo[301228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:42 compute-0 sudo[301228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:42 compute-0 sudo[301228]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:42 compute-0 sudo[301271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:34:42 compute-0 sudo[301271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:42 compute-0 sudo[301271]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:42 compute-0 sudo[301330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:42 compute-0 sudo[301330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:42 compute-0 sudo[301330]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:42 compute-0 sudo[301373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:34:42 compute-0 sudo[301373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:43 compute-0 sudo[301478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehhynnksjtpslxeqdjedxlqesdeaezzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725682.5038-497-269961985206506/AnsiballZ_file.py'
Dec 03 01:34:43 compute-0 sudo[301478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:43 compute-0 python3.9[301489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:34:43 compute-0 sudo[301478]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.327939374 +0000 UTC m=+0.069715674 container create 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 01:34:43 compute-0 systemd[1]: Started libpod-conmon-35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3.scope.
Dec 03 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.306034746 +0000 UTC m=+0.047811126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:34:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.461179861 +0000 UTC m=+0.202956191 container init 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.478074783 +0000 UTC m=+0.219851113 container start 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:34:43 compute-0 magical_curran[301523]: 167 167
Dec 03 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.484740755 +0000 UTC m=+0.226517145 container attach 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:34:43 compute-0 systemd[1]: libpod-35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3.scope: Deactivated successfully.
Dec 03 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.487292904 +0000 UTC m=+0.229069224 container died 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e9481c271846f5faf1110e6455b62458bd0f8d8f0aa6e337d70e9d041cbb3fd-merged.mount: Deactivated successfully.
Dec 03 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.557923482 +0000 UTC m=+0.299699802 container remove 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:34:43 compute-0 systemd[1]: libpod-conmon-35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3.scope: Deactivated successfully.
Dec 03 01:34:43 compute-0 podman[301557]: 2025-12-03 01:34:43.822285479 +0000 UTC m=+0.067791602 container create 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:34:43 compute-0 podman[301557]: 2025-12-03 01:34:43.787711085 +0000 UTC m=+0.033217258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:34:43 compute-0 systemd[1]: Started libpod-conmon-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope.
Dec 03 01:34:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:34:44 compute-0 podman[301557]: 2025-12-03 01:34:44.012697957 +0000 UTC m=+0.258204090 container init 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:34:44 compute-0 podman[301557]: 2025-12-03 01:34:44.029195927 +0000 UTC m=+0.274702050 container start 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:34:44 compute-0 podman[301557]: 2025-12-03 01:34:44.038959753 +0000 UTC m=+0.284465877 container attach 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:34:44 compute-0 ceph-mon[192821]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:44 compute-0 sudo[301713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aivufneuwksvxgkctaenvxpseablaaia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725683.9576619-497-208706041615842/AnsiballZ_file.py'
Dec 03 01:34:44 compute-0 sudo[301713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:44 compute-0 python3.9[301715]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:34:44 compute-0 sudo[301713]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]: {
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "osd_id": 2,
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "type": "bluestore"
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:     },
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "osd_id": 1,
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "type": "bluestore"
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:     },
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "osd_id": 0,
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:         "type": "bluestore"
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]:     }
Dec 03 01:34:45 compute-0 sleepy_roentgen[301596]: }
Dec 03 01:34:45 compute-0 systemd[1]: libpod-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope: Deactivated successfully.
Dec 03 01:34:45 compute-0 systemd[1]: libpod-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope: Consumed 1.252s CPU time.
Dec 03 01:34:45 compute-0 podman[301557]: 2025-12-03 01:34:45.293825248 +0000 UTC m=+1.539331361 container died 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84-merged.mount: Deactivated successfully.
Dec 03 01:34:45 compute-0 podman[301557]: 2025-12-03 01:34:45.396911802 +0000 UTC m=+1.642417925 container remove 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:34:45 compute-0 systemd[1]: libpod-conmon-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope: Deactivated successfully.
Dec 03 01:34:45 compute-0 sudo[301373]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:34:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:34:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:34:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:34:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7ce3e72a-3e6a-4554-848d-7ac110192202 does not exist
Dec 03 01:34:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a4f95231-2302-4f84-9ef6-64d22834ef8d does not exist
Dec 03 01:34:45 compute-0 sudo[301854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:34:45 compute-0 sudo[301854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:45 compute-0 sudo[301854]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:45 compute-0 sudo[301903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:34:45 compute-0 sudo[301903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:34:45 compute-0 sudo[301903]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:46 compute-0 ceph-mon[192821]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:34:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:34:46 compute-0 sudo[301954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiygozihjcberjqbgkhwdvqixgqpdbwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725685.0366888-540-117299765417805/AnsiballZ_stat.py'
Dec 03 01:34:46 compute-0 sudo[301954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:46 compute-0 python3.9[301956]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:34:47 compute-0 sudo[301954]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:47 compute-0 sudo[302032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjxbvofnfzrwppdtnabpzmblsekooqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725685.0366888-540-117299765417805/AnsiballZ_file.py'
Dec 03 01:34:47 compute-0 sudo[302032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:47 compute-0 python3.9[302034]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:34:47 compute-0 sudo[302032]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:48 compute-0 ceph-mon[192821]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:48 compute-0 sudo[302184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfpfxsktlnrtpwhdjtjkyxuzxmntbsqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725688.0732896-540-60586908184266/AnsiballZ_stat.py'
Dec 03 01:34:48 compute-0 sudo[302184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:48 compute-0 python3.9[302186]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:34:49 compute-0 sudo[302184]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:49 compute-0 ceph-mon[192821]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:49 compute-0 sudo[302307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqwphwpsglhdtujtyhxcbrnmsnqvhfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725688.0732896-540-60586908184266/AnsiballZ_file.py'
Dec 03 01:34:49 compute-0 sudo[302307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:49 compute-0 podman[302236]: 2025-12-03 01:34:49.522837454 +0000 UTC m=+0.125784349 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:34:49 compute-0 podman[302238]: 2025-12-03 01:34:49.534649137 +0000 UTC m=+0.128559738 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 03 01:34:49 compute-0 podman[302237]: 2025-12-03 01:34:49.550816573 +0000 UTC m=+0.148111809 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec 03 01:34:49 compute-0 podman[302239]: 2025-12-03 01:34:49.559027925 +0000 UTC m=+0.138359094 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:34:49 compute-0 python3.9[302338]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:34:49 compute-0 sudo[302307]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:50 compute-0 sudo[302495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osphymgzyypapxnqevmalajnbkupgdqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725689.9509397-540-67870609735613/AnsiballZ_stat.py'
Dec 03 01:34:50 compute-0 sudo[302495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:50 compute-0 python3.9[302497]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:34:50 compute-0 sudo[302495]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:51 compute-0 sudo[302573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btlgkzhnquaxzyxfeioycfmpdjuaoshj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725689.9509397-540-67870609735613/AnsiballZ_file.py'
Dec 03 01:34:51 compute-0 sudo[302573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:51 compute-0 python3.9[302575]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:34:51 compute-0 sudo[302573]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:51 compute-0 podman[302576]: 2025-12-03 01:34:51.69463288 +0000 UTC m=+0.130992256 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 03 01:34:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:52 compute-0 ceph-mon[192821]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:52 compute-0 sudo[302744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhuyjjiyvvkerqbhfmiqwqqlibhydrkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725691.8431256-540-12728914277298/AnsiballZ_stat.py'
Dec 03 01:34:52 compute-0 sudo[302744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:52 compute-0 python3.9[302746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:34:52 compute-0 sudo[302744]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:53 compute-0 sudo[302824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvlqclnokipmmvzpxrwnklkqhylvnxsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725691.8431256-540-12728914277298/AnsiballZ_file.py'
Dec 03 01:34:53 compute-0 sudo[302824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:53 compute-0 sshd-session[302747]: Invalid user myuser from 173.249.50.59 port 57634
Dec 03 01:34:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:53 compute-0 sshd-session[302747]: Received disconnect from 173.249.50.59 port 57634:11: Bye Bye [preauth]
Dec 03 01:34:53 compute-0 sshd-session[302747]: Disconnected from invalid user myuser 173.249.50.59 port 57634 [preauth]
Dec 03 01:34:53 compute-0 python3.9[302826]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:34:53 compute-0 sudo[302824]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:54 compute-0 sudo[302976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsoadkioqsaofhlvkntbduomprhktfjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725693.6298568-540-106243291397777/AnsiballZ_stat.py'
Dec 03 01:34:54 compute-0 sudo[302976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:54 compute-0 ceph-mon[192821]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:54 compute-0 python3.9[302978]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:34:54 compute-0 sudo[302976]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:54 compute-0 sudo[303054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygovmwcliazcyhmbeeqrpedlwzvspcqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725693.6298568-540-106243291397777/AnsiballZ_file.py'
Dec 03 01:34:54 compute-0 sudo[303054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:55 compute-0 python3.9[303056]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:34:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:55 compute-0 sudo[303054]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:55 compute-0 sudo[303206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phnbpduubmpmkodxfkvvxqtdfbkqxspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725695.459102-540-230160379127698/AnsiballZ_stat.py'
Dec 03 01:34:55 compute-0 sudo[303206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:56 compute-0 python3.9[303208]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:34:56 compute-0 ceph-mon[192821]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:56 compute-0 sudo[303206]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:56 compute-0 sudo[303284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmcwicukvaqlyoihiudmkfvwloxjhtru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725695.459102-540-230160379127698/AnsiballZ_file.py'
Dec 03 01:34:56 compute-0 sudo[303284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:56 compute-0 python3.9[303286]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:34:56 compute-0 sudo[303284]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:34:57 compute-0 sudo[303436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjluahkxroxzdjgmdwszliyrzcbnmfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725697.1403902-540-68351521785465/AnsiballZ_stat.py'
Dec 03 01:34:57 compute-0 sudo[303436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:57 compute-0 python3.9[303438]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:34:57 compute-0 sudo[303436]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:58 compute-0 ceph-mon[192821]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:34:58 compute-0 podman[303464]: 2025-12-03 01:34:58.927661502 +0000 UTC m=+0.174916116 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler)
Dec 03 01:34:59 compute-0 sudo[303533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrqfmkeccnlzyacufznnsgynfhvreeit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725697.1403902-540-68351521785465/AnsiballZ_file.py'
Dec 03 01:34:59 compute-0 sudo[303533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:34:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:34:59 compute-0 python3.9[303535]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:34:59 compute-0 sudo[303533]: pam_unix(sudo:session): session closed for user root
Dec 03 01:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:34:59.590 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:34:59.591 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:34:59.591 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:34:59 compute-0 podman[158098]: time="2025-12-03T01:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:34:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:34:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7276 "" "Go-http-client/1.1"
Dec 03 01:35:00 compute-0 sudo[303701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjjosvhprmvbotosoryzvrmxvwyvzbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725699.6744583-540-121273631332332/AnsiballZ_stat.py'
Dec 03 01:35:00 compute-0 sudo[303701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:00 compute-0 ceph-mon[192821]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:00 compute-0 podman[303659]: 2025-12-03 01:35:00.310853502 +0000 UTC m=+0.151493544 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:35:00 compute-0 python3.9[303705]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:00 compute-0 sudo[303701]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:01 compute-0 ceph-mon[192821]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:35:01 compute-0 sudo[303781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzqcsaolxyavfowksugybihsvuogniam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725699.6744583-540-121273631332332/AnsiballZ_file.py'
Dec 03 01:35:01 compute-0 sudo[303781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:01 compute-0 anacron[59208]: Job `cron.weekly' started
Dec 03 01:35:01 compute-0 anacron[59208]: Job `cron.weekly' terminated
Dec 03 01:35:01 compute-0 python3.9[303783]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:01 compute-0 sudo[303781]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:02 compute-0 sudo[303935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ititjijiwqjtkotezfteenytihuagzvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725702.1780457-629-104729674845457/AnsiballZ_command.py'
Dec 03 01:35:02 compute-0 sudo[303935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:02 compute-0 python3.9[303937]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 03 01:35:03 compute-0 sudo[303935]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:03 compute-0 podman[304042]: 2025-12-03 01:35:03.866714093 +0000 UTC m=+0.118008890 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:35:03 compute-0 sudo[304112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhttkrxwzrdydmztvpqerlpwfxbytajr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725703.38228-638-259958976219801/AnsiballZ_file.py'
Dec 03 01:35:03 compute-0 sudo[304112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:04 compute-0 python3.9[304114]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:04 compute-0 sudo[304112]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:04 compute-0 ceph-mon[192821]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:05 compute-0 sudo[304264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjnsdhcrobhzyfumzyljpiolvvhifxfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725704.4445584-638-79956394276328/AnsiballZ_file.py'
Dec 03 01:35:05 compute-0 sudo[304264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:05 compute-0 python3.9[304266]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:05 compute-0 sudo[304264]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:06 compute-0 sudo[304416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asoqzofnulzhcyndfsnonqmjgdnldrlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725705.533343-638-19737965270313/AnsiballZ_file.py'
Dec 03 01:35:06 compute-0 sudo[304416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:06 compute-0 python3.9[304418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:06 compute-0 ceph-mon[192821]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:06 compute-0 sudo[304416]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:07 compute-0 sudo[304568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cukrvezhonozynunirbrederlbaebalo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725706.6052523-638-276031664643774/AnsiballZ_file.py'
Dec 03 01:35:07 compute-0 sudo[304568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:07 compute-0 python3.9[304570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:07 compute-0 sudo[304568]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:08 compute-0 ceph-mon[192821]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:08 compute-0 sudo[304720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiwxdckhtwjneggtxkdpitazahejkmcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725707.7171743-638-39323831470093/AnsiballZ_file.py'
Dec 03 01:35:08 compute-0 sudo[304720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:08 compute-0 python3.9[304722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:08 compute-0 sudo[304720]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:09 compute-0 sudo[304872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovxjnsiummigzkyacrqqeruxieiuxefy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725708.8300807-638-61806165855023/AnsiballZ_file.py'
Dec 03 01:35:09 compute-0 sudo[304872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:09 compute-0 python3.9[304874]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:09 compute-0 sudo[304872]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:10 compute-0 ceph-mon[192821]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:11 compute-0 sudo[305024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nttxqrurjbglxjfqhjigxyajchvgpmnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725709.9149842-638-91891779289944/AnsiballZ_file.py'
Dec 03 01:35:11 compute-0 sudo[305024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:11 compute-0 python3.9[305026]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:11 compute-0 sudo[305024]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:12 compute-0 sudo[305176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caozemztqcyarhhybjjupsgiemgqpgum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725711.647192-638-25799494563259/AnsiballZ_file.py'
Dec 03 01:35:12 compute-0 sudo[305176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:12 compute-0 ceph-mon[192821]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:12 compute-0 python3.9[305178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:12 compute-0 sudo[305176]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:13 compute-0 ceph-mon[192821]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:13 compute-0 sudo[305328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzkahikolwfgnaxubffmowzvfszviepm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725713.1270468-638-241719333113361/AnsiballZ_file.py'
Dec 03 01:35:13 compute-0 sudo[305328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:13 compute-0 python3.9[305330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:13 compute-0 sudo[305328]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:14 compute-0 sudo[305480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkkwxlzpcbqreukimocihodvxfzlzlar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725714.1152666-638-51270903498690/AnsiballZ_file.py'
Dec 03 01:35:14 compute-0 sudo[305480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:14 compute-0 python3.9[305482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:14 compute-0 sudo[305480]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:15 compute-0 sudo[305632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcqiowrwzckwvppkfkphretxcbcvxtph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725715.2323518-638-71525472505414/AnsiballZ_file.py'
Dec 03 01:35:15 compute-0 sudo[305632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:16 compute-0 python3.9[305634]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:16 compute-0 sudo[305632]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:16 compute-0 ceph-mon[192821]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:16 compute-0 sudo[305786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhqzhiuzvryshszouohbawjqoaiurrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725716.2949839-638-12579959663378/AnsiballZ_file.py'
Dec 03 01:35:16 compute-0 sudo[305786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:17 compute-0 python3.9[305788]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:17 compute-0 sudo[305786]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:17 compute-0 sudo[305938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rastohpntlnxhuighjaqslstyquguidk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725717.4557023-638-270978636406614/AnsiballZ_file.py'
Dec 03 01:35:17 compute-0 sudo[305938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:18 compute-0 sshd-session[305752]: Invalid user openbravo from 103.146.202.174 port 41480
Dec 03 01:35:18 compute-0 python3.9[305940]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:18 compute-0 sudo[305938]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:18 compute-0 ceph-mon[192821]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:18 compute-0 sshd-session[305752]: Received disconnect from 103.146.202.174 port 41480:11: Bye Bye [preauth]
Dec 03 01:35:18 compute-0 sshd-session[305752]: Disconnected from invalid user openbravo 103.146.202.174 port 41480 [preauth]
Dec 03 01:35:19 compute-0 sudo[306090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezpejmxrrcyozyncyqgaktfecucoqjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725718.51278-638-212437059081163/AnsiballZ_file.py'
Dec 03 01:35:19 compute-0 sudo[306090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:19 compute-0 python3.9[306092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:19 compute-0 sudo[306090]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:19 compute-0 podman[306126]: 2025-12-03 01:35:19.892133041 +0000 UTC m=+0.122439555 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 03 01:35:19 compute-0 podman[306124]: 2025-12-03 01:35:19.892014948 +0000 UTC m=+0.129193606 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 01:35:19 compute-0 podman[306118]: 2025-12-03 01:35:19.905299943 +0000 UTC m=+0.145872546 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:35:19 compute-0 podman[306129]: 2025-12-03 01:35:19.93427337 +0000 UTC m=+0.151454783 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:35:20 compute-0 ceph-mon[192821]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:20 compute-0 sudo[306327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bigbwdcybaadcwanwmtbnpqszipcdpba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725719.82866-737-259207134669589/AnsiballZ_stat.py'
Dec 03 01:35:20 compute-0 sudo[306327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:20 compute-0 python3.9[306329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:20 compute-0 sudo[306327]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:21 compute-0 sudo[306405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpfipgymespvvpmlmnlcphixclpjwcoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725719.82866-737-259207134669589/AnsiballZ_file.py'
Dec 03 01:35:21 compute-0 sudo[306405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:21 compute-0 python3.9[306407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:21 compute-0 sudo[306405]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:22 compute-0 sudo[306570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zncenlbtkrrvppxdvmlpthxkznyddmdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725721.5602775-737-92980024212262/AnsiballZ_stat.py'
Dec 03 01:35:22 compute-0 sudo[306570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:22 compute-0 podman[306531]: 2025-12-03 01:35:22.140659295 +0000 UTC m=+0.129590167 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 01:35:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:22 compute-0 ceph-mon[192821]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:22 compute-0 python3.9[306577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:22 compute-0 sudo[306570]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:22 compute-0 sudo[306653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlwbbozqddhnseqznkslfxjxveaxvkos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725721.5602775-737-92980024212262/AnsiballZ_file.py'
Dec 03 01:35:22 compute-0 sudo[306653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:23 compute-0 python3.9[306655]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:23 compute-0 sudo[306653]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:24 compute-0 sudo[306805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gblrikiztxeyzajkhjjkifucijntklnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725723.7182355-737-191476448721655/AnsiballZ_stat.py'
Dec 03 01:35:24 compute-0 sudo[306805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:24 compute-0 ceph-mon[192821]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:24 compute-0 python3.9[306807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:24 compute-0 sudo[306805]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:24 compute-0 sudo[306883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqggrvixltynxtnxbkwwcqvukbtkscem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725723.7182355-737-191476448721655/AnsiballZ_file.py'
Dec 03 01:35:24 compute-0 sudo[306883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:25 compute-0 python3.9[306885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:25 compute-0 sudo[306883]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:25 compute-0 ceph-mon[192821]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:26 compute-0 sudo[307035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hybpejrynefssnxvgklswwbapvtokjba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725725.9229414-737-3567482288893/AnsiballZ_stat.py'
Dec 03 01:35:26 compute-0 sudo[307035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:26 compute-0 python3.9[307037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:26 compute-0 sudo[307035]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:27 compute-0 sudo[307113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukfniwowtzdfdebtbnkwntektrjhmius ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725725.9229414-737-3567482288893/AnsiballZ_file.py'
Dec 03 01:35:27 compute-0 sudo[307113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:27 compute-0 python3.9[307115]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:27 compute-0 sudo[307113]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:28 compute-0 ceph-mon[192821]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:35:28
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.log', 'vms', 'images', '.rgw.root', 'default.rgw.meta']
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:35:28 compute-0 sudo[307265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcoxkhgcxmkrmltvagolvgxbyymrwctx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725727.785603-737-39175062314238/AnsiballZ_stat.py'
Dec 03 01:35:28 compute-0 sudo[307265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:35:28 compute-0 python3.9[307267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:35:28 compute-0 sudo[307265]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:29 compute-0 sudo[307343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdreyatbuzyeabkptussgjzoptepkefo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725727.785603-737-39175062314238/AnsiballZ_file.py'
Dec 03 01:35:29 compute-0 sudo[307343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:29 compute-0 podman[307345]: 2025-12-03 01:35:29.190873096 +0000 UTC m=+0.121626372 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, vcs-type=git, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 03 01:35:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:29 compute-0 python3.9[307346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:29 compute-0 sudo[307343]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:29 compute-0 podman[158098]: time="2025-12-03T01:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:35:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:35:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7275 "" "Go-http-client/1.1"
Dec 03 01:35:30 compute-0 sudo[307514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjmqsshffuvrkcjuflwoqiyyqxdyovev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725729.5498607-737-118574037934796/AnsiballZ_stat.py'
Dec 03 01:35:30 compute-0 sudo[307514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:30 compute-0 ceph-mon[192821]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:30 compute-0 python3.9[307516]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:30 compute-0 sudo[307514]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:30 compute-0 podman[307547]: 2025-12-03 01:35:30.865226551 +0000 UTC m=+0.118765671 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 01:35:30 compute-0 sudo[307610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpflwdviftrjpdlltjslnctktxruldtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725729.5498607-737-118574037934796/AnsiballZ_file.py'
Dec 03 01:35:30 compute-0 sudo[307610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:31 compute-0 python3.9[307612]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:31 compute-0 sudo[307610]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:35:31 compute-0 sshd-session[307637]: Invalid user kapsch from 146.190.144.138 port 60612
Dec 03 01:35:31 compute-0 sshd-session[307637]: Received disconnect from 146.190.144.138 port 60612:11: Bye Bye [preauth]
Dec 03 01:35:31 compute-0 sshd-session[307637]: Disconnected from invalid user kapsch 146.190.144.138 port 60612 [preauth]
Dec 03 01:35:32 compute-0 sudo[307764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksvbywzfbsgpgvdvhbiannrwhvlrpsah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725731.5146317-737-115016791069213/AnsiballZ_stat.py'
Dec 03 01:35:32 compute-0 sudo[307764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:32 compute-0 python3.9[307766]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:32 compute-0 ceph-mon[192821]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:32 compute-0 sudo[307764]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:32 compute-0 sudo[307842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yohhryeyusofeoqtzgatpwdlnxqjqvna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725731.5146317-737-115016791069213/AnsiballZ_file.py'
Dec 03 01:35:32 compute-0 sudo[307842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:32 compute-0 python3.9[307846]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:32 compute-0 sudo[307842]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:33 compute-0 sshd-session[307843]: Received disconnect from 80.253.31.232 port 40512:11: Bye Bye [preauth]
Dec 03 01:35:33 compute-0 sshd-session[307843]: Disconnected from authenticating user root 80.253.31.232 port 40512 [preauth]
Dec 03 01:35:33 compute-0 sudo[307996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-papdgyuxsldhuqntlkwonttclxhrdeyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725733.190112-737-29987600088296/AnsiballZ_stat.py'
Dec 03 01:35:33 compute-0 sudo[307996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:33 compute-0 python3.9[307998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:33 compute-0 sudo[307996]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:34 compute-0 ceph-mon[192821]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:34 compute-0 sudo[308090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qriruqfvlkznlxiojokqhktgpoteblti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725733.190112-737-29987600088296/AnsiballZ_file.py'
Dec 03 01:35:34 compute-0 podman[308048]: 2025-12-03 01:35:34.468143692 +0000 UTC m=+0.123878196 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:35:34 compute-0 sudo[308090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:34 compute-0 python3.9[308099]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:34 compute-0 sudo[308090]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:35 compute-0 sudo[308249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhcitdlvgmgclpnzqgodbgfxbsyyqcya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725734.9556756-737-241523646563007/AnsiballZ_stat.py'
Dec 03 01:35:35 compute-0 sudo[308249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:35 compute-0 python3.9[308251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:35 compute-0 sudo[308249]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:36 compute-0 ceph-mon[192821]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:36 compute-0 sudo[308327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmfypfasolaaocshrelahgdbsfoevppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725734.9556756-737-241523646563007/AnsiballZ_file.py'
Dec 03 01:35:36 compute-0 sudo[308327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:37 compute-0 python3.9[308329]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:37 compute-0 sudo[308327]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:37 compute-0 ceph-mon[192821]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:37 compute-0 sudo[308479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnhzqxgxdktoosnzbuvqmdzpikuicxce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725737.348808-737-188296650317245/AnsiballZ_stat.py'
Dec 03 01:35:37 compute-0 sudo[308479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
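The pg_autoscaler lines above contain enough data to reproduce the arithmetic: for each pool, pg target = fraction of space used × bias × a constant that works out to 300 here, consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs (neither figure is stated in this excerpt, so the 300 is inferred from the numbers), and the result is then quantized toward a power of two. A small check of that relation against the logged values:

import math

# Logged (used_fraction, bias, pg_target) triples from the pg_autoscaler run above.
cases = {
    ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ".rgw.root":          (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
    "default.rgw.log":    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
}
for pool, (used, bias, logged_target) in cases.items():
    computed = used * bias * 300  # 300 = 100 target PGs per OSD * 3 OSDs (inferred)
    assert math.isclose(computed, logged_target, rel_tol=1e-12), pool
    print(f"{pool}: pg target {computed:.10g} (matches log)")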
Dec 03 01:35:38 compute-0 python3.9[308481]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:38 compute-0 sudo[308479]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:39 compute-0 sudo[308557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drharddvpahubptnbocablhkxipedkzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725737.348808-737-188296650317245/AnsiballZ_file.py'
Dec 03 01:35:39 compute-0 sudo[308557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:39 compute-0 python3.9[308559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:39 compute-0 sudo[308557]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:40 compute-0 ceph-mon[192821]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:40 compute-0 sudo[308709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvsladyywodibhtogvkueshuywggbga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725740.0059855-737-229750075266169/AnsiballZ_stat.py'
Dec 03 01:35:40 compute-0 sudo[308709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:40 compute-0 python3.9[308711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.973 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.974 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:35:41 compute-0 sudo[308709]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:41 compute-0 sudo[308788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcuzidiklmdkpmgmaylpiqsxfdqqjne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725740.0059855-737-229750075266169/AnsiballZ_file.py'
Dec 03 01:35:41 compute-0 sudo[308788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:41 compute-0 python3.9[308790]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:41 compute-0 sudo[308788]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:42 compute-0 sshd-session[308814]: Received disconnect from 34.66.72.251 port 36630:11: Bye Bye [preauth]
Dec 03 01:35:42 compute-0 sshd-session[308814]: Disconnected from authenticating user root 34.66.72.251 port 36630 [preauth]
Dec 03 01:35:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.252473) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742252625, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1814, "num_deletes": 252, "total_data_size": 3122117, "memory_usage": 3178896, "flush_reason": "Manual Compaction"}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742277298, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1761679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11726, "largest_seqno": 13539, "table_properties": {"data_size": 1755754, "index_size": 3000, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14657, "raw_average_key_size": 20, "raw_value_size": 1742706, "raw_average_value_size": 2387, "num_data_blocks": 139, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725533, "oldest_key_time": 1764725533, "file_creation_time": 1764725742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 24943 microseconds, and 10907 cpu microseconds.
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.277404) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1761679 bytes OK
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.277449) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.280600) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.280625) EVENT_LOG_v1 {"time_micros": 1764725742280617, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.280648) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3114447, prev total WAL file size 3114447, number of live WAL files 2.
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.282470) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1720KB)], [29(7646KB)]
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742282518, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9591213, "oldest_snapshot_seqno": -1}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4001 keys, 7566547 bytes, temperature: kUnknown
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742362337, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7566547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7538035, "index_size": 17394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95173, "raw_average_key_size": 23, "raw_value_size": 7464064, "raw_average_value_size": 1865, "num_data_blocks": 759, "num_entries": 4001, "num_filter_entries": 4001, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.362815) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7566547 bytes
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.365931) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.0 rd, 94.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4418, records dropped: 417 output_compression: NoCompression
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.365962) EVENT_LOG_v1 {"time_micros": 1764725742365947, "job": 12, "event": "compaction_finished", "compaction_time_micros": 79910, "compaction_time_cpu_micros": 36518, "output_level": 6, "num_output_files": 1, "total_output_size": 7566547, "num_input_records": 4418, "num_output_records": 4001, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742366846, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742370090, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.282267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:35:42 compute-0 sudo[308942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgkihtpptujkvvpwzuulimgxfetmdlbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725741.8653114-737-197608901360823/AnsiballZ_stat.py'
Dec 03 01:35:42 compute-0 sudo[308942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:42 compute-0 python3.9[308944]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:42 compute-0 sudo[308942]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:43 compute-0 sudo[309020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olxmuvmrjlixzswliwqaqwauoimxunuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725741.8653114-737-197608901360823/AnsiballZ_file.py'
Dec 03 01:35:43 compute-0 sudo[309020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:43 compute-0 python3.9[309022]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:43 compute-0 sudo[309020]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:44 compute-0 sudo[309172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfyiyeasbzgkfebhplqxzudhulnjrru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725743.568145-737-213750182091646/AnsiballZ_stat.py'
Dec 03 01:35:44 compute-0 sudo[309172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:44 compute-0 ceph-mon[192821]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:44 compute-0 python3.9[309174]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:44 compute-0 sudo[309172]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:44 compute-0 sudo[309250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikrwgmgjodslutfmvphmkpibcecrautb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725743.568145-737-213750182091646/AnsiballZ_file.py'
Dec 03 01:35:44 compute-0 sudo[309250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:45 compute-0 python3.9[309252]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:45 compute-0 sudo[309250]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:45 compute-0 sudo[309376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:45 compute-0 sudo[309376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:45 compute-0 sudo[309376]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:45 compute-0 sudo[309427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svttzrnxfodhwtafuwyrxuobwvjxteuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725745.309238-737-199011367946650/AnsiballZ_stat.py'
Dec 03 01:35:45 compute-0 sudo[309427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:46 compute-0 sudo[309428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:35:46 compute-0 sudo[309428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:46 compute-0 sudo[309428]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:46 compute-0 python3.9[309435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:35:46 compute-0 sudo[309455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:46 compute-0 sudo[309455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:46 compute-0 sudo[309455]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:46 compute-0 sudo[309427]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:46 compute-0 sudo[309482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:35:46 compute-0 sudo[309482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:46 compute-0 ceph-mon[192821]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:46 compute-0 sudo[309593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyhspcmwvgzoeggeqtuszoqoycaikejv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725745.309238-737-199011367946650/AnsiballZ_file.py'
Dec 03 01:35:46 compute-0 sudo[309593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:46 compute-0 python3.9[309595]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:46 compute-0 sudo[309593]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:47 compute-0 sudo[309482]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:35:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 978e453f-18a3-4910-b21b-6bde7c5f1158 does not exist
Dec 03 01:35:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d34dfe07-3f09-4cee-9bda-ba5acd74fd51 does not exist
Dec 03 01:35:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 644b7a25-c0db-4d30-b319-22237a2ff141 does not exist
Dec 03 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:47 compute-0 sudo[309643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:47 compute-0 sudo[309643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:47 compute-0 sudo[309643]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:35:47 compute-0 sudo[309703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:35:47 compute-0 sudo[309703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:47 compute-0 sudo[309703]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:47 compute-0 sudo[309752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:47 compute-0 sudo[309752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:47 compute-0 sudo[309752]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:47 compute-0 sudo[309805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:35:47 compute-0 sudo[309805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:47 compute-0 python3.9[309862]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.25820778 +0000 UTC m=+0.086210993 container create 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.224714175 +0000 UTC m=+0.052717438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:35:48 compute-0 ceph-mon[192821]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:48 compute-0 systemd[1]: Started libpod-conmon-84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47.scope.
Dec 03 01:35:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.410495376 +0000 UTC m=+0.238498629 container init 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.429943905 +0000 UTC m=+0.257947118 container start 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.437481517 +0000 UTC m=+0.265484730 container attach 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:35:48 compute-0 peaceful_dijkstra[309947]: 167 167
Dec 03 01:35:48 compute-0 systemd[1]: libpod-84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47.scope: Deactivated successfully.
Dec 03 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.442769096 +0000 UTC m=+0.270772309 container died 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b45c80d8fc3f24fb85898b32b21b9f0a2938ddcf7bda2465499f66d531e3c308-merged.mount: Deactivated successfully.
Dec 03 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.527093115 +0000 UTC m=+0.355096298 container remove 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:35:48 compute-0 systemd[1]: libpod-conmon-84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47.scope: Deactivated successfully.
Dec 03 01:35:48 compute-0 podman[310014]: 2025-12-03 01:35:48.807426444 +0000 UTC m=+0.094819846 container create 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 01:35:48 compute-0 podman[310014]: 2025-12-03 01:35:48.772222541 +0000 UTC m=+0.059616013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:35:48 compute-0 systemd[1]: Started libpod-conmon-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope.
Dec 03 01:35:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
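[The kernel lines above are xfs noting that the bind-mounted container paths sit on a filesystem without the bigtime feature, so inode timestamps are capped at 2038-01-19 (0x7fffffff). A quick check, assuming / is the xfs filesystem backing /var/lib/containers as these paths suggest:]

    # Hedged sketch: bigtime=1 would lift the 2038 timestamp limit.
    xfs_info / | grep -o 'bigtime=[01]'
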
Dec 03 01:35:49 compute-0 podman[310014]: 2025-12-03 01:35:49.028256564 +0000 UTC m=+0.315649966 container init 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:35:49 compute-0 podman[310014]: 2025-12-03 01:35:49.046132168 +0000 UTC m=+0.333525550 container start 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:35:49 compute-0 podman[310014]: 2025-12-03 01:35:49.052119497 +0000 UTC m=+0.339512949 container attach 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:35:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:49 compute-0 ceph-mon[192821]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:49 compute-0 sudo[310109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdyblyzvpaebgzhaxdqhuktfxvflebti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725748.4136026-901-3151923612961/AnsiballZ_seboolean.py'
Dec 03 01:35:49 compute-0 sudo[310109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:49 compute-0 python3.9[310111]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
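[The seboolean task above flips the os_enable_vtpm SELinux boolean with the logged parameters (state=True, persistent=True); outside Ansible the same change would be:]

    # Equivalent manual command; -P makes the change persist across reboots.
    setsebool -P os_enable_vtpm on
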
Dec 03 01:35:50 compute-0 nostalgic_kilby[310030]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:35:50 compute-0 nostalgic_kilby[310030]: --> relative data size: 1.0
Dec 03 01:35:50 compute-0 nostalgic_kilby[310030]: --> All data devices are unavailable
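[The three "-->" lines are ceph-volume batch-style report output: the drive group passed three logical volumes (0 physical disks) as data devices, and "All data devices are unavailable" typically means they are already consumed by existing OSDs, so there is nothing new to create. A hedged way to reproduce the report by hand, assuming a cephadm binary on PATH, using the fsid and an LV path taken from this log:]

    cephadm ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- \
        lvm batch --report /dev/ceph_vg0/ceph_lv0
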
Dec 03 01:35:50 compute-0 systemd[1]: libpod-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope: Deactivated successfully.
Dec 03 01:35:50 compute-0 systemd[1]: libpod-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope: Consumed 1.244s CPU time.
Dec 03 01:35:50 compute-0 podman[310136]: 2025-12-03 01:35:50.444535308 +0000 UTC m=+0.070069168 container died 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88-merged.mount: Deactivated successfully.
Dec 03 01:35:50 compute-0 sudo[310109]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:50 compute-0 podman[310136]: 2025-12-03 01:35:50.546051522 +0000 UTC m=+0.171585352 container remove 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:35:50 compute-0 podman[310143]: 2025-12-03 01:35:50.55163989 +0000 UTC m=+0.137001706 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, distribution-scope=public)
Dec 03 01:35:50 compute-0 systemd[1]: libpod-conmon-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope: Deactivated successfully.
Dec 03 01:35:50 compute-0 podman[310137]: 2025-12-03 01:35:50.565773899 +0000 UTC m=+0.158475882 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:35:50 compute-0 podman[310145]: 2025-12-03 01:35:50.571318745 +0000 UTC m=+0.150133727 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:35:50 compute-0 podman[310147]: 2025-12-03 01:35:50.583792597 +0000 UTC m=+0.171153140 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
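[The health_status=healthy events above are podman's scheduled healthchecks for the edpm-managed containers; each runs the 'test' command from config_data inside the container. The same check can be triggered on demand (exit status 0 means healthy), e.g. for the OVN controller container named above:]

    podman healthcheck run ovn_controller && echo healthy
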
Dec 03 01:35:50 compute-0 sudo[309805]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:50 compute-0 sudo[310251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:50 compute-0 sudo[310251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:50 compute-0 sudo[310251]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:50 compute-0 sudo[310282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:35:50 compute-0 sudo[310282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:50 compute-0 sudo[310282]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:50 compute-0 sudo[310307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:50 compute-0 sudo[310307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:50 compute-0 sudo[310307]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:51 compute-0 sudo[310355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:35:51 compute-0 sudo[310355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.636839073 +0000 UTC m=+0.092955603 container create c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.599357146 +0000 UTC m=+0.055473716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:35:51 compute-0 systemd[1]: Started libpod-conmon-c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7.scope.
Dec 03 01:35:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.792392342 +0000 UTC m=+0.248508922 container init c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.811058138 +0000 UTC m=+0.267174658 container start c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.817642524 +0000 UTC m=+0.273759054 container attach c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:35:51 compute-0 quizzical_carver[310431]: 167 167
Dec 03 01:35:51 compute-0 systemd[1]: libpod-c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7.scope: Deactivated successfully.
Dec 03 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.830075915 +0000 UTC m=+0.286192495 container died c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-45af201ba4a26a7775b20fa7ba24b61eeab903e5c0a268e5fbaab36c1c77762f-merged.mount: Deactivated successfully.
Dec 03 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.909640059 +0000 UTC m=+0.365756569 container remove c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:35:51 compute-0 systemd[1]: libpod-conmon-c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7.scope: Deactivated successfully.
Dec 03 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.186936652 +0000 UTC m=+0.096285437 container create cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.151647897 +0000 UTC m=+0.060996722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:35:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:52 compute-0 systemd[1]: Started libpod-conmon-cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0.scope.
Dec 03 01:35:52 compute-0 ceph-mon[192821]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.382693995 +0000 UTC m=+0.292042810 container init cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:35:52 compute-0 podman[310498]: 2025-12-03 01:35:52.394670292 +0000 UTC m=+0.133885468 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.401122594 +0000 UTC m=+0.310471350 container start cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.405986772 +0000 UTC m=+0.315335627 container attach cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:35:52 compute-0 sudo[310598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlpbvbrmunmpvhoyjdbgcyucyyzxjib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725750.9588242-909-72555500561578/AnsiballZ_copy.py'
Dec 03 01:35:52 compute-0 sudo[310598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:52 compute-0 python3.9[310600]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:52 compute-0 sudo[310598]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:53 compute-0 funny_dewdney[310509]: {
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:     "0": [
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:         {
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "devices": [
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "/dev/loop3"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             ],
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_name": "ceph_lv0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_size": "21470642176",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "name": "ceph_lv0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "tags": {
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cluster_name": "ceph",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.crush_device_class": "",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.encrypted": "0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osd_id": "0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.type": "block",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.vdo": "0"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             },
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "type": "block",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "vg_name": "ceph_vg0"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:         }
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:     ],
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:     "1": [
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:         {
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "devices": [
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "/dev/loop4"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             ],
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_name": "ceph_lv1",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_size": "21470642176",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "name": "ceph_lv1",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "tags": {
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cluster_name": "ceph",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.crush_device_class": "",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.encrypted": "0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osd_id": "1",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.type": "block",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.vdo": "0"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             },
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "type": "block",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "vg_name": "ceph_vg1"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:         }
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:     ],
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:     "2": [
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:         {
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "devices": [
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "/dev/loop5"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             ],
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_name": "ceph_lv2",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_size": "21470642176",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "name": "ceph_lv2",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "tags": {
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.cluster_name": "ceph",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.crush_device_class": "",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.encrypted": "0",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osd_id": "2",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.type": "block",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:                 "ceph.vdo": "0"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             },
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "type": "block",
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:             "vg_name": "ceph_vg2"
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:         }
Dec 03 01:35:53 compute-0 funny_dewdney[310509]:     ]
Dec 03 01:35:53 compute-0 funny_dewdney[310509]: }
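[The JSON block above is the output of the `ceph-volume ... lvm list --format json` call issued via cephadm at 01:35:51: top-level keys are OSD ids, each holding its block LV, backing device, and ceph.* tags. A minimal sketch to flatten it to one line per OSD, assuming jq is installed and the JSON has been saved to a hypothetical lvm_list.json:]

    jq -r 'to_entries[] | .key as $id | .value[]
           | "osd.\($id) \(.lv_path) \(.devices|join(",")) \(.tags["ceph.osd_fsid"])"' lvm_list.json
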
Dec 03 01:35:53 compute-0 podman[310485]: 2025-12-03 01:35:53.188763685 +0000 UTC m=+1.098112440 container died cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:35:53 compute-0 systemd[1]: libpod-cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0.scope: Deactivated successfully.
Dec 03 01:35:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02-merged.mount: Deactivated successfully.
Dec 03 01:35:53 compute-0 podman[310485]: 2025-12-03 01:35:53.293398166 +0000 UTC m=+1.202746921 container remove cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:35:53 compute-0 systemd[1]: libpod-conmon-cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0.scope: Deactivated successfully.
Dec 03 01:35:53 compute-0 sudo[310355]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:53 compute-0 sudo[310714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:53 compute-0 sudo[310714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:53 compute-0 sudo[310714]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:53 compute-0 sudo[310763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:35:53 compute-0 sudo[310763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:53 compute-0 sudo[310763]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:53 compute-0 sudo[310814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmqlxhagdnfromnvzafyhbeqmdirmyte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725753.114382-909-14407137722835/AnsiballZ_copy.py'
Dec 03 01:35:53 compute-0 sudo[310814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:53 compute-0 sudo[310815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:53 compute-0 sudo[310815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:53 compute-0 sudo[310815]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:53 compute-0 python3.9[310823]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:53 compute-0 sudo[310842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:35:53 compute-0 sudo[310814]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:53 compute-0 sudo[310842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:54 compute-0 ceph-mon[192821]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.456850089 +0000 UTC m=+0.079619397 container create 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.420414941 +0000 UTC m=+0.043184319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:35:54 compute-0 systemd[1]: Started libpod-conmon-683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667.scope.
Dec 03 01:35:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.583206803 +0000 UTC m=+0.205976181 container init 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.595623804 +0000 UTC m=+0.218393112 container start 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:35:54 compute-0 upbeat_ritchie[311044]: 167 167
Dec 03 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.603167395 +0000 UTC m=+0.225936693 container attach 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:35:54 compute-0 systemd[1]: libpod-683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667.scope: Deactivated successfully.
Dec 03 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.605191843 +0000 UTC m=+0.227961121 container died 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b23174b90589d50260ee6a93a5d91e74d0bd61d1df3a576569f8775ff9bcd4c6-merged.mount: Deactivated successfully.
Dec 03 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.670763652 +0000 UTC m=+0.293532940 container remove 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:35:54 compute-0 systemd[1]: libpod-conmon-683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667.scope: Deactivated successfully.
Dec 03 01:35:54 compute-0 sudo[311088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cteuegqpdnvwmngkzefvevboiphlnupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725754.1753254-909-11441247579870/AnsiballZ_copy.py'
Dec 03 01:35:54 compute-0 sudo[311088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:54 compute-0 python3.9[311092]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:54 compute-0 podman[311098]: 2025-12-03 01:35:54.93618491 +0000 UTC m=+0.081341916 container create 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:35:54 compute-0 sudo[311088]: pam_unix(sudo:session): session closed for user root
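[The three ansible.legacy.copy tasks at 01:35:52-54 install the libvirt TLS material with the logged owners and modes; outside Ansible the equivalent would be:]

    # Same sources, destinations, owners and modes as in the copy tasks above.
    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/libvirt/servercert.pem
    install -o root -g root -m 0600 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/libvirt/private/serverkey.pem
    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/libvirt/clientcert.pem
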
Dec 03 01:35:54 compute-0 podman[311098]: 2025-12-03 01:35:54.898891808 +0000 UTC m=+0.044048864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:35:55 compute-0 systemd[1]: Started libpod-conmon-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope.
Dec 03 01:35:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:35:55 compute-0 podman[311098]: 2025-12-03 01:35:55.105207759 +0000 UTC m=+0.250364795 container init 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:35:55 compute-0 podman[311098]: 2025-12-03 01:35:55.124670738 +0000 UTC m=+0.269827744 container start 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:35:55 compute-0 podman[311098]: 2025-12-03 01:35:55.130864582 +0000 UTC m=+0.276021548 container attach 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:35:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:55 compute-0 sudo[311268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqnxpdjwltbzraifyjrpfrrkmxmenelv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725755.2225997-909-280532168672102/AnsiballZ_copy.py'
Dec 03 01:35:55 compute-0 sudo[311268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:56 compute-0 python3.9[311271]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:56 compute-0 sudo[311268]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]: {
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "osd_id": 2,
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "type": "bluestore"
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:     },
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "osd_id": 1,
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "type": "bluestore"
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:     },
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "osd_id": 0,
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:         "type": "bluestore"
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]:     }
Dec 03 01:35:56 compute-0 jolly_kowalevski[311121]: }
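[annotation] The JSON block above is the host's OSD inventory as printed inside the one-shot ceph container; the key/field shape (entries keyed by osd_uuid with ceph_fsid/device/osd_id/type) matches `ceph-volume raw list`. A minimal sketch of reproducing it by hand, assuming the same reef image digest from the log; the exact mount set is an assumption, not taken from the log:

    sudo podman run --rm --privileged \
        -v /dev:/dev -v /run/udev:/run/udev:ro \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume raw list   # prints one JSON entry per bluestore OSD, keyed by osd_uuid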
Dec 03 01:35:56 compute-0 ceph-mon[192821]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:56 compute-0 systemd[1]: libpod-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope: Deactivated successfully.
Dec 03 01:35:56 compute-0 systemd[1]: libpod-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope: Consumed 1.193s CPU time.
Dec 03 01:35:56 compute-0 podman[311098]: 2025-12-03 01:35:56.326196424 +0000 UTC m=+1.471353420 container died 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241-merged.mount: Deactivated successfully.
Dec 03 01:35:56 compute-0 podman[311098]: 2025-12-03 01:35:56.417690785 +0000 UTC m=+1.562847761 container remove 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:35:56 compute-0 systemd[1]: libpod-conmon-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope: Deactivated successfully.
Dec 03 01:35:56 compute-0 sudo[310842]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:35:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:35:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:35:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:35:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 641e3905-4509-4481-acbf-e5af627b8796 does not exist
Dec 03 01:35:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 17132ad1-31b3-47f1-8fe8-6106f6994611 does not exist
Dec 03 01:35:56 compute-0 sudo[311396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:35:56 compute-0 sudo[311396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:56 compute-0 sudo[311396]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:56 compute-0 sudo[311444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:35:56 compute-0 sudo[311444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:35:56 compute-0 sudo[311444]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:56 compute-0 sudo[311511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oacdhrhmvueduzugvynpitoajddwitig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725756.297394-909-108434446848082/AnsiballZ_copy.py'
Dec 03 01:35:56 compute-0 sudo[311511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:57 compute-0 python3.9[311513]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:57 compute-0 sudo[311511]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:35:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:35:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:35:57 compute-0 ceph-mon[192821]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:58 compute-0 sudo[311663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkslnivhaldtmxfgofzdbqqfdvodygwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725757.4296653-945-236511997244135/AnsiballZ_copy.py'
Dec 03 01:35:58 compute-0 sudo[311663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:58 compute-0 python3.9[311665]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:58 compute-0 sudo[311663]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:35:59 compute-0 sudo[311815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eprchayehytlixazurwcukudkksasadg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725758.6157408-945-128164193859181/AnsiballZ_copy.py'
Dec 03 01:35:59 compute-0 sudo[311815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:35:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:35:59 compute-0 python3.9[311817]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:35:59 compute-0 sudo[311815]: pam_unix(sudo:session): session closed for user root
Dec 03 01:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:35:59.592 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:35:59.593 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:35:59.593 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:35:59 compute-0 podman[311818]: 2025-12-03 01:35:59.626155738 +0000 UTC m=+0.139889947 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:35:59 compute-0 podman[158098]: time="2025-12-03T01:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:35:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:35:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7280 "" "Go-http-client/1.1"
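[annotation] The two GETs above are libpod REST API calls arriving over the podman system service socket (this is how prometheus-podman-exporter polls container state). The same call can be issued by hand, assuming the default root socket path /run/podman/podman.sock, which is not shown on these lines but appears in the podman_exporter config later in the log:

    sudo curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true' | python3 -m json.tool
    # the "d" host segment is a placeholder; the unix-socket transport ignores it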
Dec 03 01:36:00 compute-0 ceph-mon[192821]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:00 compute-0 sudo[311988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eesxeirctdnlhkirttwzgiowraafkgbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725759.8153238-945-41797592465117/AnsiballZ_copy.py'
Dec 03 01:36:00 compute-0 sudo[311988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:00 compute-0 python3.9[311990]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:00 compute-0 sudo[311988]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:36:01 compute-0 sudo[312157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxhuqbraaikfoqhzpvhsxzzcxajenitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725760.8535824-945-79677580756431/AnsiballZ_copy.py'
Dec 03 01:36:01 compute-0 sudo[312157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:01 compute-0 podman[312115]: 2025-12-03 01:36:01.446692757 +0000 UTC m=+0.125197043 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 01:36:01 compute-0 python3.9[312159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:01 compute-0 sudo[312157]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:02 compute-0 ceph-mon[192821]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:03 compute-0 sudo[312310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrdlsxyrdvautcipxcqjkpwmmsnzrtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725762.1023116-945-279916027438667/AnsiballZ_copy.py'
Dec 03 01:36:03 compute-0 sudo[312310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:03 compute-0 sshd-session[312038]: Invalid user admin from 185.156.73.233 port 53500
Dec 03 01:36:03 compute-0 python3.9[312312]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:03 compute-0 sudo[312310]: pam_unix(sudo:session): session closed for user root
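[annotation] Between 01:35:54 and 01:36:03 the ansible-ansible.legacy.copy tasks above install the libvirt/QEMU TLS material from /var/lib/openstack/certs/libvirt/default. Replayed by hand, with the owners, groups, and modes taken directly from the logged task arguments, the sequence is equivalent to:

    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/libvirt/clientcert.pem
    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/libvirt/private/clientkey.pem
    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/ca.crt  /etc/pki/CA/cacert.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/qemu/server-cert.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/qemu/server-key.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/qemu/client-cert.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/qemu/client-key.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/ca.crt  /etc/pki/qemu/ca-cert.pem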
Dec 03 01:36:03 compute-0 sshd-session[312038]: Connection closed by invalid user admin 185.156.73.233 port 53500 [preauth]
Dec 03 01:36:04 compute-0 ceph-mon[192821]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:04 compute-0 sudo[312462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dualwiimrkzyhurkcurhruooqqitxgkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725763.9925878-982-250272257064296/AnsiballZ_file.py'
Dec 03 01:36:04 compute-0 sudo[312462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:04 compute-0 podman[312464]: 2025-12-03 01:36:04.717152478 +0000 UTC m=+0.119688487 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:36:04 compute-0 python3.9[312465]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:04 compute-0 sudo[312462]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:06 compute-0 ceph-mon[192821]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:06 compute-0 sudo[312637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cujtngycqjuhwxefjabxryituhevmmck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725765.7907364-990-263694201488872/AnsiballZ_find.py'
Dec 03 01:36:06 compute-0 sudo[312637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:06 compute-0 python3.9[312639]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:36:06 compute-0 sudo[312637]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:07 compute-0 ceph-mon[192821]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:07 compute-0 sudo[312789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjclwbddmcflhtqfdvlktqaywgndrty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725766.8876925-998-196539670655814/AnsiballZ_command.py'
Dec 03 01:36:07 compute-0 sudo[312789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:07 compute-0 python3.9[312791]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:36:07 compute-0 sudo[312789]: pam_unix(sudo:session): session closed for user root
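[annotation] The shell task above derives the cluster name and fsid from the deployed ceph.conf. Run standalone, the logged pipeline is:

    set -o pipefail
    echo ceph    # cluster name, hardcoded by the task
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    # xargs trims the surrounding whitespace; expected output on this host is
    # 3765feb2-36f8-5b86-b74c-64e9221f9c4c, the fsid used by the virsh steps below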
Dec 03 01:36:08 compute-0 sshd-session[312820]: Received disconnect from 173.249.50.59 port 55898:11: Bye Bye [preauth]
Dec 03 01:36:08 compute-0 sshd-session[312820]: Disconnected from authenticating user root 173.249.50.59 port 55898 [preauth]
Dec 03 01:36:08 compute-0 python3.9[312947]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:36:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:10 compute-0 python3.9[313097]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:10 compute-0 ceph-mon[192821]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:11 compute-0 python3.9[313218]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725769.3572528-1017-159886683660546/.source.xml follow=False _original_basename=secret.xml.j2 checksum=af5c10e13a0d75758c0266fc0df27b554a39904d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:11 compute-0 sudo[313368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceidlrjqitpdyovwhbbmnbpvotfrxlit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725771.4003274-1032-114839761772109/AnsiballZ_command.py'
Dec 03 01:36:11 compute-0 sudo[313368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:12 compute-0 python3.9[313370]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 3765feb2-36f8-5b86-b74c-64e9221f9c4c
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:36:12 compute-0 polkitd[43396]: Registered Authentication Agent for unix-process:313372:461181 (system bus name :1.3951 [pkttyagent --process 313372 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 03 01:36:12 compute-0 polkitd[43396]: Unregistered Authentication Agent for unix-process:313372:461181 (system bus name :1.3951, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 03 01:36:12 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 01:36:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:12 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 01:36:12 compute-0 ceph-mon[192821]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:12 compute-0 polkitd[43396]: Registered Authentication Agent for unix-process:313371:461181 (system bus name :1.3953 [pkttyagent --process 313371 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 03 01:36:12 compute-0 polkitd[43396]: Unregistered Authentication Agent for unix-process:313371:461181 (system bus name :1.3953, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 03 01:36:12 compute-0 sudo[313368]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:13 compute-0 python3.9[313551]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:14 compute-0 sudo[313701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbuepoqlohuvigaefktgdbdbvyicizlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725773.625157-1048-113651940931607/AnsiballZ_command.py'
Dec 03 01:36:14 compute-0 sudo[313701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:14 compute-0 ceph-mon[192821]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:14 compute-0 sudo[313701]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:16 compute-0 sudo[313854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkwitfhlchfykbkjadzgmsluiyobzlg ; FSID=3765feb2-36f8-5b86-b74c-64e9221f9c4c KEY=AQCCjy9pAAAAABAAp+KNKPmL/Q89NduD/bXpeQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725775.4890347-1056-1409646136986/AnsiballZ_command.py'
Dec 03 01:36:16 compute-0 sudo[313854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:16 compute-0 ceph-mon[192821]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:16 compute-0 polkitd[43396]: Registered Authentication Agent for unix-process:313857:461597 (system bus name :1.3958 [pkttyagent --process 313857 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 03 01:36:16 compute-0 polkitd[43396]: Unregistered Authentication Agent for unix-process:313857:461597 (system bus name :1.3958, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 03 01:36:16 compute-0 sudo[313854]: pam_unix(sudo:session): session closed for user root
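[annotation] The sudo steps above (secret-undefine/secret-define at 01:36:12, then a command run with FSID and KEY in its environment at 01:36:16) are the usual libvirt cephx secret provisioning. A minimal sketch, assuming the templated secret.xml takes the standard ceph-usage form and that the final step is `virsh secret-set-value`; neither appears verbatim in the log, and the usage name "client.openstack secret" is hypothetical:

    FSID=3765feb2-36f8-5b86-b74c-64e9221f9c4c
    KEY=...   # cephx key; passed via the environment in the logged step, elided here
    cat > /tmp/secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>${FSID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-undefine "${FSID}" || true   # ignore "not found" on a fresh host
    virsh secret-define --file /tmp/secret.xml
    virsh secret-set-value --secret "${FSID}" --base64 "${KEY}"
    rm -f /tmp/secret.xml   # matches the file state=absent task at 01:36:13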
Dec 03 01:36:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:17 compute-0 ceph-mon[192821]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:18 compute-0 sudo[314012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqvrhcdaibssepqeowuukewckgliimui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725776.7330625-1064-262827338912043/AnsiballZ_copy.py'
Dec 03 01:36:18 compute-0 sudo[314012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:18 compute-0 python3.9[314014]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:18 compute-0 sudo[314012]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:19 compute-0 sudo[314165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knruunmtawurjpusxvucgotublybkktb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725778.931869-1072-271237813305078/AnsiballZ_stat.py'
Dec 03 01:36:19 compute-0 sudo[314165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:19 compute-0 python3.9[314167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:19 compute-0 sudo[314165]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:20 compute-0 sudo[314243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsiebrmtacsqcuwhtpwamwthzlmdyzsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725778.931869-1072-271237813305078/AnsiballZ_file.py'
Dec 03 01:36:20 compute-0 sudo[314243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:20 compute-0 ceph-mon[192821]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:20 compute-0 python3.9[314245]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:20 compute-0 sudo[314243]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:20 compute-0 podman[314270]: 2025-12-03 01:36:20.879030963 +0000 UTC m=+0.126679685 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:36:20 compute-0 podman[314271]: 2025-12-03 01:36:20.880487504 +0000 UTC m=+0.122072085 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 03 01:36:20 compute-0 podman[314272]: 2025-12-03 01:36:20.892497633 +0000 UTC m=+0.129118104 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec 03 01:36:20 compute-0 podman[314273]: 2025-12-03 01:36:20.9267735 +0000 UTC m=+0.155908559 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
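[annotation] The four health_status=healthy lines above are podman's timer-driven healthchecks firing for the edpm-managed containers; each runs the 'test' command from the config_data shown. The same check can be invoked on demand, for example:

    sudo podman healthcheck run ovn_controller && echo healthy
    # exit status 0 means healthy; the periodic runs are what produce the
    # "container health_status" journal entries seen here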
Dec 03 01:36:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:21 compute-0 sudo[314477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pinlniiritvedrmgdgaxuwyuxzxxtzdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725780.8921716-1085-68390223601434/AnsiballZ_file.py'
Dec 03 01:36:21 compute-0 sudo[314477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:21 compute-0 python3.9[314479]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:21 compute-0 sudo[314477]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:22 compute-0 ceph-mon[192821]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:22 compute-0 sudo[314642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzjlwanwgulawwwzylmguhmhkfyxpwxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725782.0535688-1093-14028859772430/AnsiballZ_stat.py'
Dec 03 01:36:22 compute-0 sudo[314642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:22 compute-0 podman[314603]: 2025-12-03 01:36:22.620462391 +0000 UTC m=+0.142788440 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:36:22 compute-0 python3.9[314648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:22 compute-0 sudo[314642]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:23 compute-0 sudo[314724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awlucybrqnznagwxmkjvmmkmloscinux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725782.0535688-1093-14028859772430/AnsiballZ_file.py'
Dec 03 01:36:23 compute-0 sudo[314724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:23 compute-0 python3.9[314726]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:23 compute-0 sudo[314724]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:24 compute-0 ceph-mon[192821]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:24 compute-0 sudo[314876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oezdvmxcydjaofzejhyspbjhbbsrusah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725783.8059764-1105-105946173697276/AnsiballZ_stat.py'
Dec 03 01:36:24 compute-0 sudo[314876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:24 compute-0 python3.9[314878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:24 compute-0 sudo[314876]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:25 compute-0 sudo[314954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riapjhohkgwlglnmbhtqzcjebjbocenj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725783.8059764-1105-105946173697276/AnsiballZ_file.py'
Dec 03 01:36:25 compute-0 sudo[314954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:25 compute-0 python3.9[314956]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._nqhxvf9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:25 compute-0 sudo[314954]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:26 compute-0 ceph-mon[192821]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:26 compute-0 sudo[315106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlohrmdthuhgxdrmsvlmncnbjubwsaot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725785.7062078-1117-140373341277737/AnsiballZ_stat.py'
Dec 03 01:36:26 compute-0 sudo[315106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:26 compute-0 python3.9[315108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:26 compute-0 sudo[315106]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:27 compute-0 ceph-mon[192821]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:28 compute-0 sudo[315184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknyxamjwzsnkdwfiehuzlxseplhkakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725785.7062078-1117-140373341277737/AnsiballZ_file.py'
Dec 03 01:36:28 compute-0 sudo[315184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:36:28
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:36:28 compute-0 python3.9[315186]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:28 compute-0 sudo[315184]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:36:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:29 compute-0 sudo[315336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthyiarqjzzlfsoasqhgtvbzszzazpkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725788.8508644-1130-253956938854994/AnsiballZ_command.py'
Dec 03 01:36:29 compute-0 sudo[315336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:29 compute-0 python3.9[315338]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:36:29 compute-0 sudo[315336]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:29 compute-0 podman[158098]: time="2025-12-03T01:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7281 "" "Go-http-client/1.1"
Dec 03 01:36:29 compute-0 podman[315340]: 2025-12-03 01:36:29.881058147 +0000 UTC m=+0.128750403 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=)
Dec 03 01:36:30 compute-0 ceph-mon[192821]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:31 compute-0 sudo[315509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viaclpoeukhgbnqdoapkrnatkypqlbrt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764725790.4667046-1138-279145574439813/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:36:31 compute-0 sudo[315509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:31 compute-0 python3[315511]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:36:31 compute-0 sudo[315509]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:31 compute-0 podman[315557]: 2025-12-03 01:36:31.881717687 +0000 UTC m=+0.132971842 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:36:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:32 compute-0 ceph-mon[192821]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:32 compute-0 sudo[315681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrhqohsqvjjwwkwwwghzovvrqsgwaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725791.7413468-1146-51424740496264/AnsiballZ_stat.py'
Dec 03 01:36:32 compute-0 sudo[315681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:32 compute-0 python3.9[315683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:32 compute-0 sudo[315681]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:33 compute-0 sudo[315759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgandecnwnmmwzvigiimbblstauhobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725791.7413468-1146-51424740496264/AnsiballZ_file.py'
Dec 03 01:36:33 compute-0 sudo[315759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:33 compute-0 python3.9[315761]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:33 compute-0 sudo[315759]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:34 compute-0 sudo[315911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqowglycwmavgwbokgircyzcafinmjsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725793.5989294-1158-81900825004494/AnsiballZ_stat.py'
Dec 03 01:36:34 compute-0 sudo[315911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:34 compute-0 ceph-mon[192821]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:34 compute-0 python3.9[315913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:34 compute-0 sudo[315911]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:34 compute-0 podman[315963]: 2025-12-03 01:36:34.957058265 +0000 UTC m=+0.112319360 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:36:34 compute-0 sudo[316006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvidvmmrieyzencfdimaflxynkrpaszw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725793.5989294-1158-81900825004494/AnsiballZ_file.py'
Dec 03 01:36:34 compute-0 sudo[316006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:35 compute-0 python3.9[316015]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:35 compute-0 sudo[316006]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:36 compute-0 sudo[316165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mztrhpnhnumudwtxzfwbdzfymbeewkji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725795.629007-1170-2844995135803/AnsiballZ_stat.py'
Dec 03 01:36:36 compute-0 sudo[316165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:36 compute-0 ceph-mon[192821]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:36 compute-0 python3.9[316167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:36 compute-0 sudo[316165]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:36 compute-0 sudo[316243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxvrckxgqqaeyumxayuoxbauewmrruxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725795.629007-1170-2844995135803/AnsiballZ_file.py'
Dec 03 01:36:36 compute-0 sudo[316243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:37 compute-0 python3.9[316245]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:37 compute-0 sudo[316243]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:37 compute-0 ceph-mon[192821]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:36:38 compute-0 sudo[316395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icnyulwvtnwggqfbnftbmoobfqevunyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725797.4967935-1182-1547831620604/AnsiballZ_stat.py'
Dec 03 01:36:38 compute-0 sudo[316395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:38 compute-0 python3.9[316397]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:38 compute-0 sudo[316395]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:38 compute-0 sudo[316473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btslhswdfysddzszbyolpipmkwyqlosm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725797.4967935-1182-1547831620604/AnsiballZ_file.py'
Dec 03 01:36:38 compute-0 sudo[316473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:39 compute-0 python3.9[316475]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:39 compute-0 sudo[316473]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:40 compute-0 sudo[316625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbqjapaokuurudxnkuqijtmlyeutvrei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725799.3673434-1194-114407010370934/AnsiballZ_stat.py'
Dec 03 01:36:40 compute-0 sudo[316625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:40 compute-0 ceph-mon[192821]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:40 compute-0 python3.9[316627]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:40 compute-0 sudo[316625]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:40 compute-0 sudo[316703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuszuuclrghgsjrvwevjamnvhyajjjnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725799.3673434-1194-114407010370934/AnsiballZ_file.py'
Dec 03 01:36:40 compute-0 sudo[316703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:41 compute-0 python3.9[316705]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:41 compute-0 sudo[316703]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:42 compute-0 ceph-mon[192821]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:42 compute-0 sudo[316855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuqrjohhviunmkqqhfgnmuglcudymxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725802.038744-1207-14339395490311/AnsiballZ_command.py'
Dec 03 01:36:42 compute-0 sudo[316855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:42 compute-0 python3.9[316857]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:36:42 compute-0 sudo[316855]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:44 compute-0 ceph-mon[192821]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:44 compute-0 sudo[317012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkwggoprkktgevsbnohbihzsunzaytnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725803.236758-1215-15757052214569/AnsiballZ_blockinfile.py'
Dec 03 01:36:44 compute-0 sudo[317012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:44 compute-0 sshd-session[316920]: Received disconnect from 103.146.202.174 port 41152:11: Bye Bye [preauth]
Dec 03 01:36:44 compute-0 sshd-session[316920]: Disconnected from authenticating user root 103.146.202.174 port 41152 [preauth]
Dec 03 01:36:44 compute-0 python3.9[317014]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:44 compute-0 sudo[317012]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:45 compute-0 sshd-session[317083]: Invalid user bounce from 34.66.72.251 port 53958
Dec 03 01:36:45 compute-0 sshd-session[317083]: Received disconnect from 34.66.72.251 port 53958:11: Bye Bye [preauth]
Dec 03 01:36:45 compute-0 sshd-session[317083]: Disconnected from invalid user bounce 34.66.72.251 port 53958 [preauth]
Dec 03 01:36:45 compute-0 sudo[317166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdydcsventkixtxjkjrkbxeavzybigmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725805.388885-1224-156940291405287/AnsiballZ_command.py'
Dec 03 01:36:45 compute-0 sudo[317166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:46 compute-0 python3.9[317168]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:36:46 compute-0 sudo[317166]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:46 compute-0 ceph-mon[192821]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:47 compute-0 sudo[317319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhbbvjrinvghkuntnfdvodhjwubfzlel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725806.476694-1232-178907118740967/AnsiballZ_stat.py'
Dec 03 01:36:47 compute-0 sudo[317319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:47 compute-0 python3.9[317321]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:36:47 compute-0 sudo[317319]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:47 compute-0 ceph-mon[192821]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:48 compute-0 sudo[317471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awwtlkqjhfdtbktkqbhalswldhpndctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725807.6253006-1241-64171145038465/AnsiballZ_file.py'
Dec 03 01:36:48 compute-0 sudo[317471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:48 compute-0 python3.9[317473]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:48 compute-0 sudo[317471]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:49 compute-0 sudo[317623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpdywsdtpgilwvbidxhwkiynfnqunere ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725808.84028-1249-36126887861948/AnsiballZ_stat.py'
Dec 03 01:36:49 compute-0 sudo[317623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:49 compute-0 python3.9[317625]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:49 compute-0 sudo[317623]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:50 compute-0 sudo[317702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twcrmpcbptvvftagbzxtdbupnqkroqjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725808.84028-1249-36126887861948/AnsiballZ_file.py'
Dec 03 01:36:50 compute-0 sudo[317702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:50 compute-0 ceph-mon[192821]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:50 compute-0 python3.9[317704]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:50 compute-0 sudo[317702]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:51 compute-0 podman[317828]: 2025-12-03 01:36:51.295326959 +0000 UTC m=+0.122282950 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:36:51 compute-0 sudo[317910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jedhjrcvepvdfrcjnybpiacaawkwidgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725810.675011-1261-236645049564161/AnsiballZ_stat.py'
Dec 03 01:36:51 compute-0 sudo[317910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:51 compute-0 podman[317830]: 2025-12-03 01:36:51.325392587 +0000 UTC m=+0.139618949 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 03 01:36:51 compute-0 podman[317829]: 2025-12-03 01:36:51.32830686 +0000 UTC m=+0.149554320 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 03 01:36:51 compute-0 podman[317831]: 2025-12-03 01:36:51.34141807 +0000 UTC m=+0.147363219 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:36:51 compute-0 python3.9[317931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:51 compute-0 sudo[317910]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:51 compute-0 sudo[318013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upkdgrkmozuvucnvlrsovwurnojsljuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725810.675011-1261-236645049564161/AnsiballZ_file.py'
Dec 03 01:36:52 compute-0 sudo[318013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:52 compute-0 python3.9[318015]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:52 compute-0 sudo[318013]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:52 compute-0 ceph-mon[192821]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:52 compute-0 podman[318100]: 2025-12-03 01:36:52.843595067 +0000 UTC m=+0.097682117 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Dec 03 01:36:53 compute-0 sudo[318184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fscukjnxpmoguujnueridokvkbfnkqly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725812.5316615-1273-138457421267821/AnsiballZ_stat.py'
Dec 03 01:36:53 compute-0 sudo[318184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:53 compute-0 python3.9[318186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:36:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:53 compute-0 sudo[318184]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:54 compute-0 ceph-mon[192821]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:54 compute-0 sudo[318264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxygjeinxfxjqexibyghzckzsejawzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725812.5316615-1273-138457421267821/AnsiballZ_file.py'
Dec 03 01:36:54 compute-0 sudo[318264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:36:55 compute-0 python3.9[318266]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:36:55 compute-0 sudo[318264]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:55 compute-0 sshd-session[318212]: Invalid user temp from 80.253.31.232 port 46130
Dec 03 01:36:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:55 compute-0 sshd-session[318212]: Received disconnect from 80.253.31.232 port 46130:11: Bye Bye [preauth]
Dec 03 01:36:55 compute-0 sshd-session[318212]: Disconnected from invalid user temp 80.253.31.232 port 46130 [preauth]
Dec 03 01:36:55 compute-0 ceph-mon[192821]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:55 compute-0 sshd-session[288671]: Connection closed by 192.168.122.30 port 60532
Dec 03 01:36:55 compute-0 sshd-session[288668]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:36:55 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec 03 01:36:55 compute-0 systemd[1]: session-54.scope: Consumed 3min 3.960s CPU time.
Dec 03 01:36:55 compute-0 systemd-logind[800]: Session 54 logged out. Waiting for processes to exit.
Dec 03 01:36:55 compute-0 systemd-logind[800]: Removed session 54.
Dec 03 01:36:56 compute-0 sudo[318293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:36:56 compute-0 sudo[318293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:36:56 compute-0 sudo[318293]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:57 compute-0 sudo[318318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:36:57 compute-0 sudo[318318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:36:57 compute-0 sudo[318318]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:57 compute-0 sudo[318343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:36:57 compute-0 sudo[318343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:36:57 compute-0 sudo[318343]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:36:57 compute-0 sudo[318368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:36:57 compute-0 sudo[318368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:36:58 compute-0 sudo[318368]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 978a47cc-5561-4534-951a-b8ec7465d8c2 does not exist
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b653a9cb-3155-4882-b58b-c6273e12270e does not exist
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a9e5e0c0-dd85-45b3-bdcb-4a5289f4a237 does not exist
Dec 03 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
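Between the two config generate-minimal-conf calls the mgr asks the mon for the osd tree filtered to destroyed OSDs and for the client.bootstrap-osd keyring: this looks like the step where cephadm checks whether any destroyed OSD ids can be reclaimed before ceph-volume creates new ones, and gathers the credentials the creation container will need. The tree query can be reproduced from the CLI; a sketch, assuming a reachable cluster and an admin keyring:

    import json
    import subprocess

    # CLI equivalent of the audited mon_command:
    #   {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    out = subprocess.run(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        check=True, capture_output=True, text=True)
    tree = json.loads(out.stdout)
    destroyed = [n["id"] for n in tree.get("nodes", [])
                 if n.get("type") == "osd" and n.get("status") == "destroyed"]
    print("reclaimable OSD ids:", destroyed)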
Dec 03 01:36:58 compute-0 sudo[318425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:36:58 compute-0 sudo[318425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:36:58 compute-0 sudo[318425]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:58 compute-0 ceph-mon[192821]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:36:58 compute-0 sudo[318450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:36:58 compute-0 sudo[318450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:36:58 compute-0 sudo[318450]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:58 compute-0 sudo[318475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:36:58 compute-0 sudo[318475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:36:58 compute-0 sudo[318475]: pam_unix(sudo:session): session closed for user root
Dec 03 01:36:58 compute-0 sshd-session[318291]: Received disconnect from 45.78.219.140 port 44226:11: Bye Bye [preauth]
Dec 03 01:36:58 compute-0 sshd-session[318291]: Disconnected from authenticating user root 45.78.219.140 port 44226 [preauth]
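The preauth disconnect above is unrelated to the deployment traffic: an external host (45.78.219.140) tried to authenticate as root against the exposed sshd and dropped the connection before completing authentication, routine background noise on any Internet-reachable SSH daemon.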
Dec 03 01:36:58 compute-0 sudo[318500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:36:58 compute-0 sudo[318500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
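This is the OSD-creation call itself. The --env flag stamps the new OSDs with the service-spec name (default_drive_group, which reappears in the lv_tags below), "--config-json -" makes cephadm read a JSON blob with the minimal ceph.conf and bootstrap keyring from stdin (produced by the generate-minimal-conf and auth get calls above), and "lvm batch --no-auto ... --yes --no-systemd" prepares the three pre-created LVs without generating systemd units, since cephadm deploys those itself. A sketch of the stdin plumbing, with placeholder config and keyring contents:

    import json
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Placeholder contents; the real values come from the mon_commands above.
    cfg = {"config": f"[global]\nfsid = {FSID}\n",
           "keyring": "[client.bootstrap-osd]\n\tkey = <elided>\n"}

    cmd = ["sudo", "python3", CEPHADM,
           "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
           "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
           "lvm", "batch", "--no-auto",
           "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"]
    # "--config-json -" tells cephadm to take the config JSON on stdin.
    subprocess.run(cmd, input=json.dumps(cfg), text=True, check=True)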
Dec 03 01:36:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.278007454 +0000 UTC m=+0.075072129 container create 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.240763113 +0000 UTC m=+0.037827838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:36:59 compute-0 systemd[1]: Started libpod-conmon-3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da.scope.
Dec 03 01:36:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.467795388 +0000 UTC m=+0.264860103 container init 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.485210439 +0000 UTC m=+0.282275114 container start 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.492235197 +0000 UTC m=+0.289299912 container attach 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:36:59 compute-0 wonderful_franklin[318580]: 167 167
Dec 03 01:36:59 compute-0 systemd[1]: libpod-3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da.scope: Deactivated successfully.
Dec 03 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.497705072 +0000 UTC m=+0.294769777 container died 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-df1e0620b8c754b7ecf1b03d3e643c9b0fe33a2622cdf470e3891e0a4a7c044e-merged.mount: Deactivated successfully.
Dec 03 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.582635748 +0000 UTC m=+0.379700393 container remove 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:36:59.593 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:36:59.596 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:36:59.596 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:36:59 compute-0 systemd[1]: libpod-conmon-3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da.scope: Deactivated successfully.
Dec 03 01:36:59 compute-0 podman[158098]: time="2025-12-03T01:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7275 "" "Go-http-client/1.1"
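The two GET lines are the podman system service answering libpod REST calls on its unix socket; the "@" where a client address would normally appear reflects a local-socket peer, and the socket path (/run/podman/podman.sock) shows up further down in the podman_exporter config as CONTAINER_HOST. The same endpoint can be queried with nothing but the standard library; a sketch, assuming read access to the socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; libpod listens on a socket, not TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))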
Dec 03 01:36:59 compute-0 podman[318603]: 2025-12-03 01:36:59.896110991 +0000 UTC m=+0.096152974 container create f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:36:59 compute-0 podman[318603]: 2025-12-03 01:36:59.861107704 +0000 UTC m=+0.061149737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:36:59 compute-0 systemd[1]: Started libpod-conmon-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope.
Dec 03 01:37:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
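These xfs lines are per-mount kernel notices, emitted here for each bind mount entering the ceph-volume container: the filesystem was created without the bigtime feature, so its inode timestamps top out at 2038-01-19 (0x7fffffff). They are informational, not errors. Whether a mounted xfs filesystem has bigtime can be checked with xfs_info; a sketch, assuming an xfsprogs recent enough to report the flag:

    import subprocess

    # xfs_info prints the feature flags of a mounted xfs filesystem;
    # bigtime=1 means inode timestamps extend past 2038.
    out = subprocess.run(["xfs_info", "/"], check=True,
                         capture_output=True, text=True)
    print("ok past 2038" if "bigtime=1" in out.stdout else "limited to 2038")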
Dec 03 01:37:00 compute-0 podman[318603]: 2025-12-03 01:37:00.09351052 +0000 UTC m=+0.293552463 container init f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:37:00 compute-0 podman[318617]: 2025-12-03 01:37:00.121827469 +0000 UTC m=+0.159180202 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, release-0.7.12=, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container)
Dec 03 01:37:00 compute-0 podman[318603]: 2025-12-03 01:37:00.12435553 +0000 UTC m=+0.324397513 container start f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:37:00 compute-0 podman[318603]: 2025-12-03 01:37:00.136101561 +0000 UTC m=+0.336143524 container attach f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:37:00 compute-0 ceph-mon[192821]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:00 compute-0 sshd-session[318643]: Accepted publickey for zuul from 192.168.122.30 port 54200 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:37:00 compute-0 systemd-logind[800]: New session 55 of user zuul.
Dec 03 01:37:00 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 03 01:37:00 compute-0 sshd-session[318643]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:37:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:01 compute-0 cranky_buck[318625]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:37:01 compute-0 cranky_buck[318625]: --> relative data size: 1.0
Dec 03 01:37:01 compute-0 cranky_buck[318625]: --> All data devices are unavailable
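"All data devices are unavailable" is ceph-volume reporting that it has nothing to create: all three LVs handed to lvm batch ("0 physical, 3 LVM") are filtered out as unavailable, evidently because they already carry BlueStore OSDs, as the lvm list output below confirms (osd ids 0-2 on exactly these volumes). The container exits cleanly a moment later, and cephadm falls back to re-inventorying the host with lvm list and raw list instead of failing.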
Dec 03 01:37:01 compute-0 systemd[1]: libpod-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope: Deactivated successfully.
Dec 03 01:37:01 compute-0 systemd[1]: libpod-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope: Consumed 1.221s CPU time.
Dec 03 01:37:01 compute-0 podman[318603]: 2025-12-03 01:37:01.39514131 +0000 UTC m=+1.595183283 container died f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952-merged.mount: Deactivated successfully.
Dec 03 01:37:01 compute-0 podman[318603]: 2025-12-03 01:37:01.477126963 +0000 UTC m=+1.677168906 container remove f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:37:01 compute-0 systemd[1]: libpod-conmon-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope: Deactivated successfully.
Dec 03 01:37:01 compute-0 sudo[318500]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:01 compute-0 sudo[318782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:37:01 compute-0 sudo[318782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:01 compute-0 sudo[318782]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:01 compute-0 sudo[318830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:37:01 compute-0 sudo[318830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:01 compute-0 sudo[318830]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:01 compute-0 sudo[318882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:37:01 compute-0 sudo[318882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:01 compute-0 sudo[318882]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:02 compute-0 sudo[318909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:37:02 compute-0 sudo[318909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:02 compute-0 podman[318907]: 2025-12-03 01:37:02.065692547 +0000 UTC m=+0.116301182 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:37:02 compute-0 python3.9[318883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:37:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:02 compute-0 ceph-mon[192821]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.651026919 +0000 UTC m=+0.084802093 container create 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.622892845 +0000 UTC m=+0.056668029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:37:02 compute-0 systemd[1]: Started libpod-conmon-308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef.scope.
Dec 03 01:37:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.808323686 +0000 UTC m=+0.242098930 container init 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.818260357 +0000 UTC m=+0.252035531 container start 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.825148511 +0000 UTC m=+0.258923695 container attach 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:37:02 compute-0 agitated_zhukovsky[319032]: 167 167
Dec 03 01:37:02 compute-0 systemd[1]: libpod-308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef.scope: Deactivated successfully.
Dec 03 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.834410772 +0000 UTC m=+0.268185946 container died 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-61be21ec0b5ba0d82688018c030f0d333d702f825d298d65c72ef6d0485fabad-merged.mount: Deactivated successfully.
Dec 03 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.923596398 +0000 UTC m=+0.357371542 container remove 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:37:02 compute-0 systemd[1]: libpod-conmon-308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef.scope: Deactivated successfully.
Dec 03 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.184968922 +0000 UTC m=+0.076324594 container create 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.152042903 +0000 UTC m=+0.043398625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:37:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:03 compute-0 systemd[1]: Started libpod-conmon-103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc.scope.
Dec 03 01:37:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.367354867 +0000 UTC m=+0.258710539 container init 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.401226553 +0000 UTC m=+0.292582195 container start 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec 03 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.408068876 +0000 UTC m=+0.299424548 container attach 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:37:03 compute-0 ceph-mon[192821]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:03 compute-0 python3.9[319200]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:37:03 compute-0 network[319217]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:37:03 compute-0 network[319218]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:37:03 compute-0 network[319219]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:37:04 compute-0 sleepy_keller[319121]: {
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:     "0": [
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:         {
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "devices": [
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "/dev/loop3"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             ],
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_name": "ceph_lv0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_size": "21470642176",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "name": "ceph_lv0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "tags": {
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cluster_name": "ceph",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.crush_device_class": "",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.encrypted": "0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osd_id": "0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.type": "block",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.vdo": "0"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             },
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "type": "block",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "vg_name": "ceph_vg0"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:         }
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:     ],
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:     "1": [
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:         {
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "devices": [
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "/dev/loop4"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             ],
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_name": "ceph_lv1",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_size": "21470642176",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "name": "ceph_lv1",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "tags": {
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cluster_name": "ceph",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.crush_device_class": "",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.encrypted": "0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osd_id": "1",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.type": "block",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.vdo": "0"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             },
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "type": "block",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "vg_name": "ceph_vg1"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:         }
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:     ],
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:     "2": [
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:         {
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "devices": [
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "/dev/loop5"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             ],
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_name": "ceph_lv2",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_size": "21470642176",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "name": "ceph_lv2",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "tags": {
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.cluster_name": "ceph",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.crush_device_class": "",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.encrypted": "0",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osd_id": "2",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.type": "block",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:                 "ceph.vdo": "0"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             },
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "type": "block",
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:             "vg_name": "ceph_vg2"
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:         }
Dec 03 01:37:04 compute-0 sleepy_keller[319121]:     ]
Dec 03 01:37:04 compute-0 sleepy_keller[319121]: }
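The lvm list --format json document above is keyed by OSD id and carries everything cephadm needs in the lv_tags: cluster fsid, osd_fsid, spec affinity and backing devices (loop devices here, consistent with a CI/test deployment). A sketch of pulling out the useful fields, assuming the JSON was captured to a local file (the filename is illustrative):

    import json

    # e.g. the stdout of: cephadm ceph-volume --fsid <fsid> -- lvm list --format json
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"spec={tags['ceph.osdspec_affinity']} "
                  f"devices={','.join(lv['devices'])}")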
Dec 03 01:37:04 compute-0 podman[319106]: 2025-12-03 01:37:04.264858677 +0000 UTC m=+1.156214369 container died 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:37:04 compute-0 systemd[1]: libpod-103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc.scope: Deactivated successfully.
Dec 03 01:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215-merged.mount: Deactivated successfully.
Dec 03 01:37:04 compute-0 podman[319106]: 2025-12-03 01:37:04.99269898 +0000 UTC m=+1.884054622 container remove 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:37:05 compute-0 systemd[1]: libpod-conmon-103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc.scope: Deactivated successfully.
Dec 03 01:37:05 compute-0 sudo[318909]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:05 compute-0 podman[319246]: 2025-12-03 01:37:05.112422447 +0000 UTC m=+0.097242244 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:37:05 compute-0 sudo[319259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:37:05 compute-0 sudo[319259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:05 compute-0 sudo[319259]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:05 compute-0 sudo[319300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:37:05 compute-0 sudo[319300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:05 compute-0 sudo[319300]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:05 compute-0 sudo[319328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:37:05 compute-0 sudo[319328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:05 compute-0 sudo[319328]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:05 compute-0 sudo[319357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:37:05 compute-0 sudo[319357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.018354154 +0000 UTC m=+0.094602039 container create d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:05.978691196 +0000 UTC m=+0.054939151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:37:06 compute-0 systemd[1]: Started libpod-conmon-d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec.scope.
Dec 03 01:37:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.164804716 +0000 UTC m=+0.241052661 container init d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.178834212 +0000 UTC m=+0.255082107 container start d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.185394786 +0000 UTC m=+0.261642731 container attach d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:37:06 compute-0 awesome_beaver[319454]: 167 167
Dec 03 01:37:06 compute-0 systemd[1]: libpod-d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec.scope: Deactivated successfully.
Dec 03 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.189633175 +0000 UTC m=+0.265881070 container died d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8341c4b144c07df8c3a3dd334fd784e17b4b89c4a60493e46a55c04e060f19c-merged.mount: Deactivated successfully.
Dec 03 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.260723321 +0000 UTC m=+0.336971216 container remove d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:37:06 compute-0 systemd[1]: libpod-conmon-d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec.scope: Deactivated successfully.
Dec 03 01:37:06 compute-0 ceph-mon[192821]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.536929123 +0000 UTC m=+0.092721917 container create ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.506811693 +0000 UTC m=+0.062604537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:37:06 compute-0 systemd[1]: Started libpod-conmon-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope.
Dec 03 01:37:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.725989667 +0000 UTC m=+0.281782471 container init ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.756838657 +0000 UTC m=+0.312631441 container start ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.762925498 +0000 UTC m=+0.318718262 container attach ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:37:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:07 compute-0 relaxed_wright[319510]: {
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "osd_id": 2,
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "type": "bluestore"
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:     },
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "osd_id": 1,
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "type": "bluestore"
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:     },
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "osd_id": 0,
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:         "type": "bluestore"
Dec 03 01:37:07 compute-0 relaxed_wright[319510]:     }
Dec 03 01:37:07 compute-0 relaxed_wright[319510]: }
Dec 03 01:37:07 compute-0 systemd[1]: libpod-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope: Deactivated successfully.
Dec 03 01:37:07 compute-0 podman[319487]: 2025-12-03 01:37:07.997206639 +0000 UTC m=+1.552999403 container died ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:37:07 compute-0 systemd[1]: libpod-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope: Consumed 1.230s CPU time.
Dec 03 01:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197-merged.mount: Deactivated successfully.
Dec 03 01:37:08 compute-0 podman[319487]: 2025-12-03 01:37:08.10641313 +0000 UTC m=+1.662205894 container remove ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:37:08 compute-0 systemd[1]: libpod-conmon-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope: Deactivated successfully.
Dec 03 01:37:08 compute-0 sudo[319357]: pam_unix(sudo:session): session closed for user root
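The JSON inventory printed by relaxed_wright above (three bluestore OSDs on the ceph_vg*/ceph_lv* LVs) is the output of the ceph-volume invocation logged under sudo[319357]; the same command, reformatted for readability, is:

    sudo /bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        --timeout 895 \
        ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json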
Dec 03 01:37:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:37:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:37:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:37:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:37:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f30f5015-d2de-438e-9e16-be90424f0b83 does not exist
Dec 03 01:37:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d38593e6-9a09-47ce-82d0-045c817c70ea does not exist
Dec 03 01:37:08 compute-0 sudo[319598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:37:08 compute-0 sudo[319598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:08 compute-0 sudo[319598]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:08 compute-0 ceph-mon[192821]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:37:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:37:08 compute-0 sudo[319627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:37:08 compute-0 sudo[319627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:37:08 compute-0 sudo[319627]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:09 compute-0 sudo[319815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqtgwwiaemtmwdvppbjwrxfvybfnswsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725829.3660996-47-200667510617555/AnsiballZ_setup.py'
Dec 03 01:37:09 compute-0 sudo[319815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:10 compute-0 python3.9[319817]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:37:10 compute-0 ceph-mon[192821]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:10 compute-0 sudo[319815]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:11 compute-0 sudo[319899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkrheraskvdqshxnyxcwccfqdlwozbiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725829.3660996-47-200667510617555/AnsiballZ_dnf.py'
Dec 03 01:37:11 compute-0 sudo[319899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:11 compute-0 python3.9[319901]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 03 01:37:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:12 compute-0 ceph-mon[192821]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:12 compute-0 sudo[319899]: pam_unix(sudo:session): session closed for user root
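The ansible.legacy.dnf task above (state=present, install_weak_deps=True, all other knobs at defaults) is equivalent to a plain package install; a one-line sketch of the same operation run by hand:

    sudo dnf install -y iscsi-initiator-utils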
Dec 03 01:37:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:13 compute-0 ceph-mon[192821]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:13 compute-0 sudo[320052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqeiofhvnykydbkevsqmwmugbmflobob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725832.9213967-59-141149166868783/AnsiballZ_stat.py'
Dec 03 01:37:13 compute-0 sudo[320052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:13 compute-0 python3.9[320054]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:37:13 compute-0 sudo[320052]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:15 compute-0 sudo[320204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihrqlaslooofmrmgkzxjilixjlmwrgfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725834.9678023-69-269143675838310/AnsiballZ_command.py'
Dec 03 01:37:15 compute-0 sudo[320204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:15 compute-0 python3.9[320206]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:37:16 compute-0 sudo[320204]: pam_unix(sudo:session): session closed for user root
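The restorecon run above is a dry-run relabel check: -n reports mismatched SELinux contexts without changing them, -v lists each affected path, -r recurses. Dropping -n would apply the fix:

    sudo restorecon -nvr /etc/iscsi /var/lib/iscsi   # report-only pass, as logged above
    sudo restorecon -vr  /etc/iscsi /var/lib/iscsi   # same scan, but actually relabel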
Dec 03 01:37:16 compute-0 ceph-mon[192821]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:16 compute-0 sudo[320357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jypecagqdmcuthxjwaczcfvizeteyxpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725836.4047163-79-185212938300491/AnsiballZ_stat.py'
Dec 03 01:37:16 compute-0 sudo[320357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:17 compute-0 python3.9[320359]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:37:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:17 compute-0 sudo[320357]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.280799) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837280869, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1248, "num_deletes": 507, "total_data_size": 1436496, "memory_usage": 1470752, "flush_reason": "Manual Compaction"}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837292389, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1412035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13540, "largest_seqno": 14787, "table_properties": {"data_size": 1406542, "index_size": 2441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14176, "raw_average_key_size": 17, "raw_value_size": 1393506, "raw_average_value_size": 1761, "num_data_blocks": 112, "num_entries": 791, "num_filter_entries": 791, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725742, "oldest_key_time": 1764725742, "file_creation_time": 1764725837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11652 microseconds, and 5032 cpu microseconds.
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.292451) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1412035 bytes OK
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.292482) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.294518) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.294552) EVENT_LOG_v1 {"time_micros": 1764725837294548, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.294574) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1429749, prev total WAL file size 1429749, number of live WAL files 2.
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.295644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1378KB)], [32(7389KB)]
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837295739, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8978582, "oldest_snapshot_seqno": -1}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3765 keys, 7052455 bytes, temperature: kUnknown
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837352741, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7052455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7025567, "index_size": 16347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92372, "raw_average_key_size": 24, "raw_value_size": 6955693, "raw_average_value_size": 1847, "num_data_blocks": 694, "num_entries": 3765, "num_filter_entries": 3765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.353156) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7052455 bytes
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.358271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.2 rd, 123.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 4792, records dropped: 1027 output_compression: NoCompression
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.358305) EVENT_LOG_v1 {"time_micros": 1764725837358290, "job": 14, "event": "compaction_finished", "compaction_time_micros": 57115, "compaction_time_cpu_micros": 34176, "output_level": 6, "num_output_files": 1, "total_output_size": 7052455, "num_input_records": 4792, "num_output_records": 3765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837359027, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837361627, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.295353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
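The rocksdb burst above is the monitor compacting its own store: JOB 13 flushes the memtable to L0 (table #34), JOB 14 merges it with the existing L6 file (#32) into #35, and the obsolete .log/.sst files are then deleted. The same compaction can be requested explicitly (a sketch, assuming an admin keyring is reachable via cephadm shell):

    sudo cephadm shell -- ceph tell mon.compute-0 compact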
Dec 03 01:37:18 compute-0 sudo[320509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vabwqnwnheokzjakcymeelfurhteejjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725837.5561075-87-175452689919338/AnsiballZ_command.py'
Dec 03 01:37:18 compute-0 sudo[320509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:18 compute-0 python3.9[320511]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:37:18 compute-0 sudo[320509]: pam_unix(sudo:session): session closed for user root
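iscsi-iname, invoked above, simply prints a freshly generated initiator IQN; on RHEL-family hosts the output has the shape below (the suffix shown is illustrative only, and differs on every run):

    $ /usr/sbin/iscsi-iname
    iqn.1994-05.com.redhat:8a1b2c3d4e5f   # hypothetical example; real value is random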
Dec 03 01:37:18 compute-0 ceph-mon[192821]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:19 compute-0 sudo[320664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpqdyrofqxbckqtecsuphbhfryoiylor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725838.6386387-95-52272453686439/AnsiballZ_stat.py'
Dec 03 01:37:19 compute-0 sudo[320664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:19 compute-0 sshd-session[320573]: Invalid user bounce from 173.249.50.59 port 54158
Dec 03 01:37:19 compute-0 python3.9[320666]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:37:19 compute-0 sudo[320664]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:19 compute-0 sshd-session[320573]: Received disconnect from 173.249.50.59 port 54158:11: Bye Bye [preauth]
Dec 03 01:37:19 compute-0 sshd-session[320573]: Disconnected from invalid user bounce 173.249.50.59 port 54158 [preauth]
Dec 03 01:37:20 compute-0 sudo[320788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umhijlhlbetaqpkwizuiazhgxsxqhiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725838.6386387-95-52272453686439/AnsiballZ_copy.py'
Dec 03 01:37:20 compute-0 ceph-mon[192821]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:20 compute-0 sudo[320788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:20 compute-0 python3.9[320790]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725838.6386387-95-52272453686439/.source.iscsi _original_basename=.c0gcynkp follow=False checksum=45ae4747473aca1feb0876e067fc4836f7675e84 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:20 compute-0 sudo[320788]: pam_unix(sudo:session): session closed for user root
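The copy task above installs the generated name into /etc/iscsi/initiatorname.iscsi (mode 0644); the file's single expected line has the form below (the IQN itself is illustrative, since the real one is not logged):

    InitiatorName=iqn.1994-05.com.redhat:8a1b2c3d4e5f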
Dec 03 01:37:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:21 compute-0 ceph-mon[192821]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:21 compute-0 sudo[320996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diemzshgzflzkwbthbdptfrorpsunrta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725840.8864129-110-165362601454343/AnsiballZ_file.py'
Dec 03 01:37:21 compute-0 podman[320914]: 2025-12-03 01:37:21.666312711 +0000 UTC m=+0.105933579 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:37:21 compute-0 sudo[320996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:21 compute-0 podman[320916]: 2025-12-03 01:37:21.689384122 +0000 UTC m=+0.120287644 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible)
Dec 03 01:37:21 compute-0 podman[320915]: 2025-12-03 01:37:21.705416735 +0000 UTC m=+0.142948344 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=edpm, version=9.6, architecture=x86_64, vcs-type=git)
Dec 03 01:37:21 compute-0 podman[320917]: 2025-12-03 01:37:21.721864949 +0000 UTC m=+0.141726170 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 01:37:21 compute-0 python3.9[321019]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:21 compute-0 sudo[320996]: pam_unix(sudo:session): session closed for user root
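The health_status=healthy events above come from podman's built-in healthchecks (the 'test' command under each container's config_data, bind-mounted from /var/lib/openstack/healthchecks); the same check can be fired manually per container, using the container names from the log:

    sudo podman healthcheck run node_exporter && echo healthy   # exits 0 when the check passes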
Dec 03 01:37:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:22 compute-0 sudo[321177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oikddtcksebbazqkfhfrhvovzbuwveei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725842.1833065-118-210826415171202/AnsiballZ_lineinfile.py'
Dec 03 01:37:22 compute-0 sudo[321177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:23 compute-0 podman[321179]: 2025-12-03 01:37:23.081179306 +0000 UTC m=+0.137616373 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2)
Dec 03 01:37:23 compute-0 python3.9[321180]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:23 compute-0 sudo[321177]: pam_unix(sudo:session): session closed for user root
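The lineinfile task above pins the CHAP digest preference in /etc/iscsi/iscsid.conf, inserting (or replacing, per the regexp) exactly this line after the commented default:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5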
Dec 03 01:37:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:24 compute-0 ceph-mon[192821]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:24 compute-0 sudo[321348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrohowentilgtybsugarvhzzgayekasc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725843.4594772-127-78307345779484/AnsiballZ_systemd_service.py'
Dec 03 01:37:24 compute-0 sudo[321348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:24 compute-0 python3.9[321350]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:37:24 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 03 01:37:25 compute-0 sudo[321348]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:25 compute-0 sshd-session[321380]: Invalid user bounce from 146.190.144.138 port 35998
Dec 03 01:37:25 compute-0 sshd-session[321380]: Received disconnect from 146.190.144.138 port 35998:11: Bye Bye [preauth]
Dec 03 01:37:25 compute-0 sshd-session[321380]: Disconnected from invalid user bounce 146.190.144.138 port 35998 [preauth]
Dec 03 01:37:26 compute-0 ceph-mon[192821]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:26 compute-0 sudo[321506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rboaxktxnjujyaegrfhynznuvmrsxtlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725845.3288853-135-13183753890466/AnsiballZ_systemd_service.py'
Dec 03 01:37:26 compute-0 sudo[321506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:27 compute-0 python3.9[321508]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:37:27 compute-0 systemd[1]: Reloading.
Dec 03 01:37:27 compute-0 systemd-rc-local-generator[321533]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:37:27 compute-0 systemd-sysv-generator[321537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:37:27 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 03 01:37:27 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 03 01:37:27 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 03 01:37:27 compute-0 systemd[1]: Started Open-iSCSI.
Dec 03 01:37:28 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 03 01:37:28 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 03 01:37:28 compute-0 sudo[321506]: pam_unix(sudo:session): session closed for user root
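The two systemd_service tasks above (iscsid.socket at 01:37:24, iscsid at 01:37:27) amount to enabling and starting the units, which is what triggered the transport-class load and the Open-iSCSI start/shutdown-logout units; a one-line sketch of the same done by hand:

    sudo systemctl enable --now iscsid.socket iscsid.service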
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:37:28
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms']
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:37:28 compute-0 ceph-mon[192821]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
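Annotation: the rbd_support block reloads mirror-snapshot and trash-purge schedules for each RBD pool (vms, volumes, backups, images); the empty start_after= apparently denotes a full reload rather than an incremental one. The schedules themselves can be listed with the rbd CLI; a sketch using the pool names from the log:

    # Sketch: list the schedules the rbd_support module just reloaded.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive"],
            check=False,  # pools without schedules simply print nothing
        )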
Dec 03 01:37:29 compute-0 sudo[321707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phaqsfvkimevceceiliaosotwelclvuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725848.590295-146-233704674094768/AnsiballZ_service_facts.py'
Dec 03 01:37:29 compute-0 sudo[321707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:29 compute-0 python3.9[321709]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:37:29 compute-0 network[321726]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:37:29 compute-0 network[321727]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:37:29 compute-0 network[321728]: It is advised to switch to 'NetworkManager' instead for network management.
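Annotation: ansible.builtin.service_facts enumerates every service on the host, which is what probes the SysV-compatibility 'network' service and produces the three deprecation warnings above. A comparable enumeration via systemctl, as an illustration of the kind of output the module collects (not its actual code):

    # Sketch: enumerate service units roughly the way service_facts reports them.
    import subprocess

    out = subprocess.run(
        ["systemctl", "list-units", "--type=service", "--all",
         "--no-legend", "--plain"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if not line.strip():
            continue
        unit, load, active, sub, *_ = line.split()
        print(f"{unit}: state={active}/{sub}")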
Dec 03 01:37:29 compute-0 podman[158098]: time="2025-12-03T01:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7294 "" "Go-http-client/1.1"
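Annotation: the two GETs above hit podman's libpod REST API on its unix socket; the preceding info line ("`last` parameter - overwriting `limit`") is the list-containers handler reacting to the last=0 query parameter. The same containers/json request can be replayed with only the standard library; a sketch, assuming the rootful socket path /run/podman/podman.sock (the path the podman_exporter config later in this log also uses):

    # Sketch: replay the logged GET against podman's unix socket (stdlib only).
    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])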
Dec 03 01:37:30 compute-0 ceph-mon[192821]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:30 compute-0 podman[321735]: 2025-12-03 01:37:30.654991037 +0000 UTC m=+0.178035081 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
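Annotation: the health_status=healthy event above is podman running the container's configured healthcheck (test '/openstack/healthcheck kepler', bind-mounted into the container) and recording a zero failing streak. The same probe can be forced by hand with podman healthcheck run, where exit status 0 means healthy:

    # Sketch: trigger the same container healthcheck podman just logged.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "kepler"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")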
Dec 03 01:37:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
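Annotation: all four exporter errors share one root cause: openstack_network_exporter makes appctl-style calls over the *.ctl control sockets of ovs-vswitchd and ovn-northd, and no such sockets exist on this node (ovn-northd typically runs on controllers, not computes), so the calls fail before reaching a daemon. A quick existence check, assuming the conventional /var/run/openvswitch socket directory (the path is an assumption):

    # Sketch: check for the control sockets the exporter failed to find.
    # /var/run/openvswitch is the conventional location; adjust if relocated.
    from pathlib import Path

    ctl = sorted(Path("/var/run/openvswitch").glob("*.ctl"))
    print("control sockets:", [p.name for p in ctl] or "none found")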
Dec 03 01:37:31 compute-0 ceph-mon[192821]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:32 compute-0 podman[321799]: 2025-12-03 01:37:32.273840776 +0000 UTC m=+0.143531983 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:37:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:34 compute-0 ceph-mon[192821]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:34 compute-0 sudo[321707]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:35 compute-0 sudo[322046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpxmtfdsxrozfsxrsptkimuhmclnxkrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725855.0945735-156-91230263921791/AnsiballZ_file.py'
Dec 03 01:37:35 compute-0 sudo[322046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:35 compute-0 podman[322008]: 2025-12-03 01:37:35.687141835 +0000 UTC m=+0.155273202 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:37:35 compute-0 python3.9[322059]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 03 01:37:35 compute-0 sudo[322046]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:36 compute-0 ceph-mon[192821]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:36 compute-0 sudo[322209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhdlxcailokoqcetvtibyxfhmjkzprbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725856.1894042-164-273401736169095/AnsiballZ_modprobe.py'
Dec 03 01:37:36 compute-0 sudo[322209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:37 compute-0 python3.9[322211]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 03 01:37:37 compute-0 sudo[322209]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
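Annotation: each _maybe_adjust pair above can be reproduced arithmetically: pg target = capacity ratio x bias x a cluster-wide PG budget, and here that budget works out to exactly 300 on every line (consistent with, say, 3 OSDs at the default target of 100 PGs per OSD; the OSD count is an inference from the 60 GiB cluster, not stated in the log). The target is then quantized to a power-of-two pg_num, and pools are only shrunk when far enough below the current value, which is why targets of ~0.002 coexist with current pg_num 32. A worked check against two of the logged lines:

    # Sketch: reproduce the pg_autoscaler targets logged above.
    # Assumption: factor 300 = 3 OSDs * 100 target PGs/OSD (fits every line exactly).
    FACTOR = 300

    for pool, usage, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(f"{pool}: pg target {usage * bias * FACTOR:.16g}")
    # .mgr               -> 0.0021557249951162337 (log: quantized to 1)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (log: quantized to 16)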
Dec 03 01:37:38 compute-0 ceph-mon[192821]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:38 compute-0 sudo[322365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkzvgwmlmfbiahjnrwpwhrujziyvtcuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725857.5608733-172-76672569751921/AnsiballZ_stat.py'
Dec 03 01:37:38 compute-0 sudo[322365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:39 compute-0 python3.9[322367]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:37:39 compute-0 sudo[322365]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:39 compute-0 sudo[322488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxuocactggznwnsdvjggmiisdadvxul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725857.5608733-172-76672569751921/AnsiballZ_copy.py'
Dec 03 01:37:39 compute-0 sudo[322488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:40 compute-0 python3.9[322490]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725857.5608733-172-76672569751921/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:40 compute-0 sudo[322488]: pam_unix(sudo:session): session closed for user root
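Annotation: taken together, the modprobe task earlier and the file/stat/copy tasks above load dm-multipath immediately and persist it across reboots by rendering module-load.conf.j2 into /etc/modules-load.d/dm-multipath.conf with mode 0644. A sketch of the end state those tasks converge on, assuming the rendered file body is just the module name (the format systemd-modules-load(8) reads; the actual template output is not shown in the log):

    # Sketch: what the modprobe + modules-load.d tasks converge on. Needs root.
    import pathlib, subprocess

    subprocess.run(["modprobe", "dm-multipath"], check=True)   # load now
    conf = pathlib.Path("/etc/modules-load.d/dm-multipath.conf")
    conf.write_text("dm-multipath\n")                          # assumed file body
    conf.chmod(0o644)                                          # mode=0644 as in the task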
Dec 03 01:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.0 total, 600.0 interval
                                            Cumulative writes: 3309 writes, 14K keys, 3309 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 3309 writes, 3309 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1272 writes, 5778 keys, 1272 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s
                                            Interval WAL: 1272 writes, 1272 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.9      0.16              0.07         7    0.022       0      0       0.0       0.0
                                              L6      1/0    6.73 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    124.6    102.9      0.39              0.20         6    0.065     24K   3201       0.0       0.0
                                             Sum      1/0    6.73 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7     89.0    101.2      0.55              0.28        13    0.042     24K   3201       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    108.9    109.8      0.31              0.16         8    0.039     17K   2472       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    124.6    102.9      0.39              0.20         6    0.065     24K   3201       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.4      0.15              0.07         6    0.026       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.015, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds
                                            Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 1.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(107,1.29 MB,0.417283%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,144.80 KB,0.0459101%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
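Annotation: the rocksdb stats dump above is internally consistent and a couple of its interval figures can be sanity-checked directly: 1272 WAL writes against 1272 syncs gives the logged 1.00 writes per sync, and 8.46 MB ingested over the 600 s interval rounds to the logged 0.01 MB/s.

    # Sketch: sanity-check two interval figures from the stats dump above.
    interval_writes, interval_syncs = 1272, 1272
    ingest_mb, interval_s = 8.46, 600.0

    print(interval_writes / interval_syncs)   # 1.0  -> "1.00 writes per sync"
    print(round(ingest_mb / interval_s, 2))   # 0.01 -> "0.01 MB/s"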
Dec 03 01:37:40 compute-0 ceph-mon[192821]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.974 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.975 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
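[annotation] The alternating "Executing discovery" / "Skip pollster" pairs above are the agent's per-pollster cycle: each compute pollster runs the local_instances discovery for VMs on this hypervisor and, with no instances found this cycle, is skipped without emitting samples. A minimal sketch of that control flow, with hypothetical names (the real logic lives in ceilometer/polling/manager.py, _internal_pollster_run):

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(message)s")
    LOG = logging.getLogger("sketch")

    def run_pollster(name, discover):
        # discover() stands in for the "local_instances" discovery method
        # named in the journal entries above.
        resources = discover()
        if not resources:
            # Matches the "Skip pollster <name>, no resources found this
            # cycle" DEBUG lines in this log.
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            return []
        return [(name, r) for r in resources]  # one sample per resource

    # With no instances on the hypervisor, every pollster short-circuits:
    run_pollster("memory.usage", lambda: [])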
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:37:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:41 compute-0 ceph-mon[192821]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:41 compute-0 sudo[322641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqzgarzfbylxaldezrxzilivzpkecul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725860.4925618-188-166265687710168/AnsiballZ_lineinfile.py'
Dec 03 01:37:41 compute-0 sudo[322641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:42 compute-0 python3.9[322643]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:42 compute-0 sudo[322641]: pam_unix(sudo:session): session closed for user root
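[annotation] The ansible.builtin.lineinfile call above (state=present, create=True) idempotently ensures /etc/modules contains the line dm-multipath; the systemd-modules-load restart that follows picks the module up. A rough Python equivalent of that edit, assuming the same path and line (ensure_line is our helper, not an Ansible API):

    from pathlib import Path

    def ensure_line(path: str, line: str) -> bool:
        """Idempotent append, loosely mirroring lineinfile state=present."""
        p = Path(path)
        text = p.read_text() if p.exists() else ""
        if line in text.splitlines():
            return False  # already present; the task would report "ok"
        with p.open("a") as f:
            if text and not text.endswith("\n"):
                f.write("\n")
            f.write(line + "\n")
        return True  # file changed; the task would report "changed"

    # As in the journal entry: ensure_line("/etc/modules", "dm-multipath")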
Dec 03 01:37:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:43 compute-0 sudo[322793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoaefhgzggonuymvtskyspydjptulysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725862.5317848-196-70267221866208/AnsiballZ_systemd.py'
Dec 03 01:37:43 compute-0 sudo[322793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:43 compute-0 python3.9[322795]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:37:43 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 03 01:37:43 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 03 01:37:43 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 03 01:37:43 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 03 01:37:43 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 03 01:37:43 compute-0 sudo[322793]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:44 compute-0 ceph-mon[192821]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:44 compute-0 sudo[322949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzguysavcnissgokqahnywwepsygftlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725864.222865-204-179228591366987/AnsiballZ_file.py'
Dec 03 01:37:44 compute-0 sudo[322949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:45 compute-0 python3.9[322951]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:37:45 compute-0 sudo[322949]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:45 compute-0 sudo[323101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvhqrkqycbwiolasxbmqrpfwdssizly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725865.4420063-213-149889733916503/AnsiballZ_stat.py'
Dec 03 01:37:45 compute-0 sudo[323101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:46 compute-0 python3.9[323103]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:37:46 compute-0 sudo[323101]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:46 compute-0 ceph-mon[192821]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:47 compute-0 sudo[323253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myqvbhwrstrgtsnbjepukjkoiqivzmxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725866.5941408-222-79788580986914/AnsiballZ_stat.py'
Dec 03 01:37:47 compute-0 sudo[323253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:47 compute-0 python3.9[323255]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:37:47 compute-0 sudo[323253]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:47 compute-0 sshd-session[323280]: Invalid user guest from 34.66.72.251 port 59852
Dec 03 01:37:47 compute-0 sshd-session[323280]: Received disconnect from 34.66.72.251 port 59852:11: Bye Bye [preauth]
Dec 03 01:37:47 compute-0 sshd-session[323280]: Disconnected from invalid user guest 34.66.72.251 port 59852 [preauth]
Dec 03 01:37:48 compute-0 sudo[323407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwrsxupbggdvzimrfcnycxyqwiizoup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725867.685499-230-1744332121386/AnsiballZ_stat.py'
Dec 03 01:37:48 compute-0 sudo[323407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:48 compute-0 ceph-mon[192821]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:48 compute-0 python3.9[323409]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:37:48 compute-0 sudo[323407]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:49 compute-0 sudo[323530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ropujjamregzscmsrwwjrfhhjdurcmxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725867.685499-230-1744332121386/AnsiballZ_copy.py'
Dec 03 01:37:49 compute-0 sudo[323530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:49 compute-0 python3.9[323532]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725867.685499-230-1744332121386/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:49 compute-0 sudo[323530]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:50 compute-0 sudo[323683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xopbtrzunftbuehhphwjvawpxrwilfjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725869.740518-245-210829000871616/AnsiballZ_command.py'
Dec 03 01:37:50 compute-0 sudo[323683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:50 compute-0 ceph-mon[192821]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:50 compute-0 python3.9[323685]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:37:50 compute-0 sudo[323683]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:51 compute-0 ceph-mon[192821]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:51 compute-0 sudo[323836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtvlobgbznoiyaghjukdphdqmmonvzso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725870.828244-253-42740579422441/AnsiballZ_lineinfile.py'
Dec 03 01:37:51 compute-0 sudo[323836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:51 compute-0 python3.9[323838]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:51 compute-0 sudo[323836]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:51 compute-0 podman[323844]: 2025-12-03 01:37:51.845605409 +0000 UTC m=+0.100047221 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:37:51 compute-0 podman[323848]: 2025-12-03 01:37:51.879018878 +0000 UTC m=+0.118354376 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 01:37:51 compute-0 podman[323854]: 2025-12-03 01:37:51.887839625 +0000 UTC m=+0.122893743 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 03 01:37:51 compute-0 podman[323867]: 2025-12-03 01:37:51.910192223 +0000 UTC m=+0.131080043 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 03 01:37:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:53 compute-0 sudo[324084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtthzovnkadhiwhokzmwepipcvzxtojy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725871.9336061-261-258941137449164/AnsiballZ_replace.py'
Dec 03 01:37:53 compute-0 sudo[324084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:53 compute-0 podman[324042]: 2025-12-03 01:37:53.816391843 +0000 UTC m=+0.169161923 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:37:54 compute-0 python3.9[324089]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:54 compute-0 sudo[324084]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:54 compute-0 ceph-mon[192821]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:55 compute-0 sudo[324240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emmdfzfvgfqhozejnithzggvkqvqztzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725874.406841-269-150896695933275/AnsiballZ_replace.py'
Dec 03 01:37:55 compute-0 sudo[324240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:55 compute-0 python3.9[324242]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:55 compute-0 sudo[324240]: pam_unix(sudo:session): session closed for user root
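[annotation] Taken together, the tasks since the grep are building an empty blacklist section in /etc/multipath.conf: grep -q '^blacklist\s*{' checks whether one exists, lineinfile appends a bare "blacklist {" line, the first replace closes the brace, and the second strips a devnode ".*" catch-all if one was present. The brace-closing step is a single multiline regex substitution; the same rewrite in plain Python:

    import re

    conf = 'defaults {\n}\nblacklist {\n'
    # Equivalent of the ansible.builtin.replace task above: put a closing
    # brace on the line after the bare "blacklist {" opener.
    fixed = re.sub(r'^(blacklist {)', r'\1\n}', conf, flags=re.MULTILINE)
    print(fixed)  # -> defaults {\n}\nblacklist {\n}\n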
Dec 03 01:37:56 compute-0 ceph-mon[192821]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:56 compute-0 sudo[324392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwjnqpkrifjgcvptfneoikaryoqewgta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725876.2694967-278-195842919509231/AnsiballZ_lineinfile.py'
Dec 03 01:37:56 compute-0 sudo[324392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:57 compute-0 python3.9[324394]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:57 compute-0 sudo[324392]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:37:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:57 compute-0 sudo[324544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxjzbsxpcyxvnapiupuchdblwacddysd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725877.3299055-278-11108269694595/AnsiballZ_lineinfile.py'
Dec 03 01:37:57 compute-0 sudo[324544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:58 compute-0 python3.9[324546]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:58 compute-0 sudo[324544]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:37:58 compute-0 ceph-mon[192821]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:37:59 compute-0 sudo[324696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiilzbedccwoirpuxolhxneevvyihzfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725878.402512-278-226998776280267/AnsiballZ_lineinfile.py'
Dec 03 01:37:59 compute-0 sudo[324696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:37:59 compute-0 python3.9[324698]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:37:59 compute-0 sudo[324696]: pam_unix(sudo:session): session closed for user root
Dec 03 01:37:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:37:59.595 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:37:59.597 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:37:59.597 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:37:59 compute-0 podman[158098]: time="2025-12-03T01:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7272 "" "Go-http-client/1.1"
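[annotation] These GET lines are the podman_exporter scraping podman's libpod REST API over the local socket. The same query can be reproduced by hand with nothing but the standard library; a small sketch, assuming the socket path mounted for the exporter elsewhere in this log (/run/podman/podman.sock):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket, enough for podman's API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the journal entry above:
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])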
Dec 03 01:38:00 compute-0 sudo[324848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eevoiqsfuerlishandbnrmswoelhlghx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725879.5464995-278-86380183461165/AnsiballZ_lineinfile.py'
Dec 03 01:38:00 compute-0 sudo[324848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:00 compute-0 ceph-mon[192821]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:00 compute-0 python3.9[324850]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:00 compute-0 sudo[324848]: pam_unix(sudo:session): session closed for user root
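[annotation] With the four lineinfile tasks above (find_multipaths, recheck_wwid, skip_kpartx, user_friendly_names, each anchored by insertafter=^defaults with firstmatch=True), the defaults section of /etc/multipath.conf should end up roughly like the excerpt below. Because each line is inserted directly after the defaults opener, later tasks land above earlier ones; the exact result depends on what the file already contained:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }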
Dec 03 01:38:00 compute-0 podman[324876]: 2025-12-03 01:38:00.882039426 +0000 UTC m=+0.135816466 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, config_id=edpm, release-0.7.12=, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 01:38:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:01 compute-0 sudo[325021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtwoqtjpdqajumwcrovsnqtmoiowwfgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725880.8073654-307-173854701107352/AnsiballZ_stat.py'
Dec 03 01:38:01 compute-0 sudo[325021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:01 compute-0 ceph-mon[192821]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:38:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:38:01 compute-0 openstack_network_exporter[160250]: 
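[annotation] The openstack_network_exporter errors above are expected on a node in this state: the exporter drives ovs-appctl/ovn-appctl style calls through control sockets, and with no ovsdb-server or ovn-northd socket to be found (and no userspace datapath for the dpif-netdev queries), every probe fails. A trivial way to look for the sockets it is probing; the run directories follow the container volume mounts earlier in this log, and the *.ctl glob is our assumption:

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control sockets found")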
Dec 03 01:38:01 compute-0 python3.9[325023]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:38:01 compute-0 sudo[325021]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:02 compute-0 podman[325149]: 2025-12-03 01:38:02.437270338 +0000 UTC m=+0.100350319 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:38:02 compute-0 sudo[325191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvlweejwvkjxyexcuigmkweudrfwbzlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725881.8850543-315-201968218866695/AnsiballZ_file.py'
Dec 03 01:38:02 compute-0 sudo[325191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:02 compute-0 python3.9[325195]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:02 compute-0 sudo[325191]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:03 compute-0 sudo[325346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbbxofnkqrekwjfrkcguztsleseuysbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725883.048237-324-108610491856387/AnsiballZ_file.py'
Dec 03 01:38:03 compute-0 sudo[325346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:03 compute-0 python3.9[325348]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:38:03 compute-0 sudo[325346]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:04 compute-0 ceph-mon[192821]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:04 compute-0 sudo[325498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmurbskiijvdqbyxucdefromadpscxeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725884.1000984-332-267157068681582/AnsiballZ_stat.py'
Dec 03 01:38:04 compute-0 sudo[325498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:04 compute-0 python3.9[325500]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:04 compute-0 sudo[325498]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:05 compute-0 sudo[325582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwmuzijqxejclzcunexoivxptpygklak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725884.1000984-332-267157068681582/AnsiballZ_file.py'
Dec 03 01:38:05 compute-0 sudo[325582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:05 compute-0 podman[325561]: 2025-12-03 01:38:05.864685634 +0000 UTC m=+0.115295149 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:38:06 compute-0 python3.9[325589]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:38:06 compute-0 sudo[325582]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:06 compute-0 ceph-mon[192821]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:06 compute-0 sudo[325754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwkwtzjdyneasyhifjvdkrnytggrcbbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725886.33021-332-116109222576204/AnsiballZ_stat.py'
Dec 03 01:38:06 compute-0 sudo[325754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:07 compute-0 python3.9[325756]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:07 compute-0 sudo[325754]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:07 compute-0 sshd-session[325627]: Received disconnect from 103.146.202.174 port 40830:11: Bye Bye [preauth]
Dec 03 01:38:07 compute-0 sshd-session[325627]: Disconnected from authenticating user root 103.146.202.174 port 40830 [preauth]
Dec 03 01:38:08 compute-0 sudo[325832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsywfljyevbpzabmjxnqsbjdamoaqlry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725886.33021-332-116109222576204/AnsiballZ_file.py'
Dec 03 01:38:08 compute-0 sudo[325832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:08 compute-0 ceph-mon[192821]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:08 compute-0 sudo[325835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:08 compute-0 sudo[325835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:08 compute-0 sudo[325835]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:08 compute-0 python3.9[325834]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:38:08 compute-0 sudo[325832]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:08 compute-0 sudo[325860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:38:08 compute-0 sudo[325860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:08 compute-0 sudo[325860]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:08 compute-0 sudo[325909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:08 compute-0 sudo[325909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:08 compute-0 sudo[325909]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:08 compute-0 sudo[325954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:38:08 compute-0 sudo[325954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:09 compute-0 sudo[326100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcgpgsnzmctrwuogzejmjawtzlcsnvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725888.8982334-355-279582295358356/AnsiballZ_file.py'
Dec 03 01:38:09 compute-0 sudo[326100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:09 compute-0 python3.9[326102]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:09 compute-0 sudo[325954]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:09 compute-0 sudo[326100]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:38:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d313723e-c6d8-434a-b9b0-c9bd0e15e807 does not exist
Dec 03 01:38:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cd777488-7c3d-4792-9ac2-af2e5ae300e2 does not exist
Dec 03 01:38:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 29fbddfb-77ba-4108-b0b3-7dfe2d6b498b does not exist
Dec 03 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:38:09 compute-0 sudo[326140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:09 compute-0 sudo[326140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:09 compute-0 sudo[326140]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:09 compute-0 sudo[326184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:38:10 compute-0 sudo[326184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:10 compute-0 sudo[326184]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:10 compute-0 sudo[326237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:10 compute-0 sudo[326237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:10 compute-0 sudo[326237]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:10 compute-0 sudo[326285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:38:10 compute-0 sudo[326285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:10 compute-0 ceph-mon[192821]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:38:10 compute-0 sudo[326373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcurwuidwuiphkbxajwgfmlgcvbsshkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725889.925024-363-26947908450500/AnsiballZ_stat.py'
Dec 03 01:38:10 compute-0 sudo[326373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:10 compute-0 python3.9[326379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:10 compute-0 sudo[326373]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.768993081 +0000 UTC m=+0.094288229 container create 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.735647944 +0000 UTC m=+0.060943152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:38:10 compute-0 systemd[1]: Started libpod-conmon-3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76.scope.
Dec 03 01:38:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.924778577 +0000 UTC m=+0.250073765 container init 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.943097641 +0000 UTC m=+0.268392779 container start 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.950361145 +0000 UTC m=+0.275656333 container attach 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:38:10 compute-0 eloquent_cray[326445]: 167 167
Dec 03 01:38:10 compute-0 systemd[1]: libpod-3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76.scope: Deactivated successfully.
Dec 03 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.956290112 +0000 UTC m=+0.281585250 container died 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-54421240329fc95f60bb31752c5b0098ed59f7c627b282ebc0cf89c541440b3d-merged.mount: Deactivated successfully.
Dec 03 01:38:11 compute-0 podman[326407]: 2025-12-03 01:38:11.033666215 +0000 UTC m=+0.358961343 container remove 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:38:11 compute-0 systemd[1]: libpod-conmon-3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76.scope: Deactivated successfully.
Dec 03 01:38:11 compute-0 sudo[326516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxstlpucphttenfnmoeakztoiisgzuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725889.925024-363-26947908450500/AnsiballZ_file.py'
Dec 03 01:38:11 compute-0 sudo[326516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.354951169 +0000 UTC m=+0.106167973 container create af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.314377409 +0000 UTC m=+0.065594243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:38:11 compute-0 python3.9[326521]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:11 compute-0 ceph-mon[192821]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:11 compute-0 systemd[1]: Started libpod-conmon-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope.
Dec 03 01:38:11 compute-0 sudo[326516]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.494411356 +0000 UTC m=+0.245628150 container init af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.512118483 +0000 UTC m=+0.263335287 container start af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.517660629 +0000 UTC m=+0.268877443 container attach af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:38:12 compute-0 sudo[326696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fseqwzlznaczyxiqabqsachrqtkkdscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725891.68858-375-79205024289130/AnsiballZ_stat.py'
Dec 03 01:38:12 compute-0 sudo[326696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:12 compute-0 python3.9[326700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:12 compute-0 sudo[326696]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:12 compute-0 sshd-session[326610]: Received disconnect from 80.253.31.232 port 50934:11: Bye Bye [preauth]
Dec 03 01:38:12 compute-0 sshd-session[326610]: Disconnected from authenticating user root 80.253.31.232 port 50934 [preauth]
Dec 03 01:38:12 compute-0 great_nash[326540]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:38:12 compute-0 great_nash[326540]: --> relative data size: 1.0
Dec 03 01:38:12 compute-0 great_nash[326540]: --> All data devices are unavailable
Dec 03 01:38:12 compute-0 systemd[1]: libpod-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope: Deactivated successfully.
Dec 03 01:38:12 compute-0 systemd[1]: libpod-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope: Consumed 1.184s CPU time.
Dec 03 01:38:12 compute-0 conmon[326540]: conmon af09ba3258194515247d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope/container/memory.events
Dec 03 01:38:12 compute-0 podman[326524]: 2025-12-03 01:38:12.793014801 +0000 UTC m=+1.544231635 container died af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15-merged.mount: Deactivated successfully.
Dec 03 01:38:12 compute-0 podman[326524]: 2025-12-03 01:38:12.9058732 +0000 UTC m=+1.657090004 container remove af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:38:12 compute-0 systemd[1]: libpod-conmon-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope: Deactivated successfully.
Dec 03 01:38:12 compute-0 sudo[326809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urnfvsfdgtqunbfenkhzykyilvcvrigj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725891.68858-375-79205024289130/AnsiballZ_file.py'
Dec 03 01:38:12 compute-0 sudo[326809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:12 compute-0 sudo[326285]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:13 compute-0 sudo[326812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:13 compute-0 sudo[326812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:13 compute-0 sudo[326812]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:13 compute-0 python3.9[326811]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:13 compute-0 sudo[326809]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:13 compute-0 sudo[326837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:38:13 compute-0 sudo[326837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:13 compute-0 sudo[326837]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:13 compute-0 sudo[326870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:13 compute-0 sudo[326870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:13 compute-0 sudo[326870]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:13 compute-0 sudo[326912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:38:13 compute-0 sudo[326912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.051641892 +0000 UTC m=+0.092005095 container create e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:38:14 compute-0 sudo[327109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzalsxuubickujjfnknpdbxlpadsorbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725893.4678078-387-253631933197596/AnsiballZ_systemd.py'
Dec 03 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.019822908 +0000 UTC m=+0.060186192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:38:14 compute-0 sudo[327109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:14 compute-0 systemd[1]: Started libpod-conmon-e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa.scope.
Dec 03 01:38:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.189874564 +0000 UTC m=+0.230237817 container init e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.207272002 +0000 UTC m=+0.247635185 container start e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.211517852 +0000 UTC m=+0.251881135 container attach e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:38:14 compute-0 trusting_saha[327116]: 167 167
Dec 03 01:38:14 compute-0 systemd[1]: libpod-e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa.scope: Deactivated successfully.
Dec 03 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.221132402 +0000 UTC m=+0.261495635 container died e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4075aa79130f8ae54aa22882916854fa73d4e08a1fd8f82915826327246fc852-merged.mount: Deactivated successfully.
Dec 03 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.30332409 +0000 UTC m=+0.343687313 container remove e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:38:14 compute-0 systemd[1]: libpod-conmon-e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa.scope: Deactivated successfully.
Dec 03 01:38:14 compute-0 ceph-mon[192821]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:14 compute-0 python3.9[327115]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:38:14 compute-0 systemd[1]: Reloading.
Dec 03 01:38:14 compute-0 podman[327140]: 2025-12-03 01:38:14.579005974 +0000 UTC m=+0.073002302 container create fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:38:14 compute-0 systemd-rc-local-generator[327175]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:38:14 compute-0 podman[327140]: 2025-12-03 01:38:14.547137858 +0000 UTC m=+0.041134186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:38:14 compute-0 systemd-sysv-generator[327179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:38:14 compute-0 systemd[1]: Started libpod-conmon-fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698.scope.
Dec 03 01:38:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:15 compute-0 sudo[327109]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:15 compute-0 podman[327140]: 2025-12-03 01:38:15.059476499 +0000 UTC m=+0.553472827 container init fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:38:15 compute-0 podman[327140]: 2025-12-03 01:38:15.0976381 +0000 UTC m=+0.591634418 container start fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:38:15 compute-0 podman[327140]: 2025-12-03 01:38:15.103596568 +0000 UTC m=+0.597592876 container attach fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:38:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]: {
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:     "0": [
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:         {
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "devices": [
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "/dev/loop3"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             ],
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_name": "ceph_lv0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_size": "21470642176",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "name": "ceph_lv0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "tags": {
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cluster_name": "ceph",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.crush_device_class": "",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.encrypted": "0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osd_id": "0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.type": "block",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.vdo": "0"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             },
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "type": "block",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "vg_name": "ceph_vg0"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:         }
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:     ],
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:     "1": [
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:         {
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "devices": [
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "/dev/loop4"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             ],
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_name": "ceph_lv1",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_size": "21470642176",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "name": "ceph_lv1",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "tags": {
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cluster_name": "ceph",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.crush_device_class": "",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.encrypted": "0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osd_id": "1",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.type": "block",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.vdo": "0"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             },
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "type": "block",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "vg_name": "ceph_vg1"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:         }
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:     ],
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:     "2": [
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:         {
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "devices": [
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "/dev/loop5"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             ],
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_name": "ceph_lv2",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_size": "21470642176",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "name": "ceph_lv2",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "tags": {
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.cluster_name": "ceph",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.crush_device_class": "",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.encrypted": "0",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osd_id": "2",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.type": "block",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:                 "ceph.vdo": "0"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             },
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "type": "block",
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:             "vg_name": "ceph_vg2"
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:         }
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]:     ]
Dec 03 01:38:15 compute-0 strange_brahmagupta[327191]: }
Dec 03 01:38:15 compute-0 systemd[1]: libpod-fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698.scope: Deactivated successfully.
Dec 03 01:38:15 compute-0 sudo[327349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-husrpubssxokgahunajqluxjoderbfhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725895.330007-395-13374202271948/AnsiballZ_stat.py'
Dec 03 01:38:15 compute-0 sudo[327349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:15 compute-0 podman[327350]: 2025-12-03 01:38:15.982110083 +0000 UTC m=+0.050511160 container died fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf-merged.mount: Deactivated successfully.
Dec 03 01:38:16 compute-0 podman[327350]: 2025-12-03 01:38:16.086152705 +0000 UTC m=+0.154553712 container remove fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:38:16 compute-0 systemd[1]: libpod-conmon-fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698.scope: Deactivated successfully.
Dec 03 01:38:16 compute-0 sudo[326912]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:16 compute-0 python3.9[327357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:16 compute-0 sudo[327366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:16 compute-0 sudo[327366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:16 compute-0 sudo[327366]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:16 compute-0 sudo[327349]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:16 compute-0 sudo[327393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:38:16 compute-0 sudo[327393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:16 compute-0 sudo[327393]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:16 compute-0 ceph-mon[192821]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:16 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 01:38:16 compute-0 sudo[327439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:16 compute-0 sudo[327439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:16 compute-0 sudo[327439]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:16 compute-0 sudo[327485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:38:16 compute-0 sudo[327485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:16 compute-0 sudo[327542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oazxpsomjcjcapoyqgpmstrrpldzlllg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725895.330007-395-13374202271948/AnsiballZ_file.py'
Dec 03 01:38:16 compute-0 sudo[327542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:16 compute-0 python3.9[327544]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:16 compute-0 sudo[327542]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.200184156 +0000 UTC m=+0.084067423 container create a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.16187812 +0000 UTC m=+0.045761437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:38:17 compute-0 systemd[1]: Started libpod-conmon-a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3.scope.
Dec 03 01:38:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.337109831 +0000 UTC m=+0.220993108 container init a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.349307944 +0000 UTC m=+0.233191211 container start a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.35591877 +0000 UTC m=+0.239802037 container attach a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:38:17 compute-0 vibrant_kalam[327649]: 167 167
Dec 03 01:38:17 compute-0 systemd[1]: libpod-a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3.scope: Deactivated successfully.
Dec 03 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.358876013 +0000 UTC m=+0.242759290 container died a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 01:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d92bdcdb2c213f1d4be0403496cb30611f666fc6b3835b5110900ccb0cd9d025-merged.mount: Deactivated successfully.
Dec 03 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.42429775 +0000 UTC m=+0.308181007 container remove a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:38:17 compute-0 systemd[1]: libpod-conmon-a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3.scope: Deactivated successfully.
Dec 03 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.61690222 +0000 UTC m=+0.066807357 container create 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.583990856 +0000 UTC m=+0.033896033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:38:17 compute-0 systemd[1]: Started libpod-conmon-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope.
Dec 03 01:38:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
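
Annotation: the four kernel lines above are the standard warning that, without the XFS bigtime feature, inode timestamps use a 32-bit seconds counter and stop at 0x7fffffff seconds past the epoch. A quick check of what that hex limit means in wall-clock terms (plain Python, nothing assumed beyond the value in the log):

    # 0x7fffffff is the 32-bit signed time_t limit the kernel warns about.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
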
Dec 03 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.777826989 +0000 UTC m=+0.227732106 container init 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec 03 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.807435431 +0000 UTC m=+0.257340568 container start 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.816877726 +0000 UTC m=+0.266782863 container attach 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:38:17 compute-0 sudo[327795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxysmahwhsiunhcvunipgsdxnwzmnedg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725897.278366-407-196019997641018/AnsiballZ_stat.py'
Dec 03 01:38:17 compute-0 sudo[327795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:18 compute-0 python3.9[327797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:18 compute-0 sudo[327795]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:18 compute-0 sshd-session[327800]: Connection closed by 14.103.201.7 port 44520 [preauth]
Dec 03 01:38:18 compute-0 ceph-mon[192821]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:19 compute-0 pedantic_cray[327764]: {
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "osd_id": 2,
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "type": "bluestore"
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:     },
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "osd_id": 1,
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "type": "bluestore"
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:     },
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "osd_id": 0,
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:         "type": "bluestore"
Dec 03 01:38:19 compute-0 pedantic_cray[327764]:     }
Dec 03 01:38:19 compute-0 pedantic_cray[327764]: }
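
Annotation: pedantic_cray is the `ceph-volume ... raw list --format json` run launched by the cephadm sudo command at 01:38:16; its stdout, captured above, is a JSON object keyed by OSD UUID. A minimal parsing sketch (the embedded literal is one entry copied verbatim from that output; the loop and field names follow the structure shown above):

    import json

    # One OSD entry copied from the pedantic_cray output above.
    ceph_volume_stdout = """
    {
        "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
            "type": "bluestore"
        }
    }
    """

    for osd_uuid, info in json.loads(ceph_volume_stdout).items():
        # Each value describes one raw bluestore device on this host.
        print(f"osd.{info['osd_id']} -> {info['device']} ({info['type']})")
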
Dec 03 01:38:19 compute-0 systemd[1]: libpod-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope: Deactivated successfully.
Dec 03 01:38:19 compute-0 podman[327723]: 2025-12-03 01:38:19.063350126 +0000 UTC m=+1.513255233 container died 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:38:19 compute-0 systemd[1]: libpod-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope: Consumed 1.252s CPU time.
Dec 03 01:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7-merged.mount: Deactivated successfully.
Dec 03 01:38:19 compute-0 podman[327723]: 2025-12-03 01:38:19.164024074 +0000 UTC m=+1.613929201 container remove 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:38:19 compute-0 systemd[1]: libpod-conmon-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope: Deactivated successfully.
Dec 03 01:38:19 compute-0 sudo[327485]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:38:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:38:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:38:19 compute-0 sudo[327916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjtadunfpweayemzfubjwmzplklmhlzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725897.278366-407-196019997641018/AnsiballZ_file.py'
Dec 03 01:38:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:38:19 compute-0 sudo[327916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 40928c2b-2f82-4909-81e1-3e83bda56c4c does not exist
Dec 03 01:38:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7908a63-6ae5-49a0-9cca-5b2cbb468db6 does not exist
Dec 03 01:38:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:19 compute-0 sudo[327919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:38:19 compute-0 sudo[327919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:19 compute-0 sudo[327919]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:19 compute-0 python3.9[327918]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:19 compute-0 sudo[327916]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:19 compute-0 sudo[327944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:38:19 compute-0 sudo[327944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:38:19 compute-0 sudo[327944]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:38:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:38:20 compute-0 ceph-mon[192821]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:21 compute-0 sudo[328119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crpmysngasyelguugptylzinsjglprsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725899.855314-419-86154897233497/AnsiballZ_systemd.py'
Dec 03 01:38:21 compute-0 sudo[328119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:21 compute-0 python3.9[328121]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:38:21 compute-0 systemd[1]: Reloading.
Dec 03 01:38:21 compute-0 systemd-sysv-generator[328148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:38:21 compute-0 systemd-rc-local-generator[328145]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:38:22 compute-0 systemd[1]: Starting Create netns directory...
Dec 03 01:38:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 03 01:38:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 03 01:38:22 compute-0 systemd[1]: Finished Create netns directory.
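
Annotation: the Ansible systemd task logged at 01:38:21 (daemon_reload=True, enabled=True, state=started) is roughly the following systemctl sequence. A sketch via subprocess, shown only for orientation, not how ansible-core actually implements the module:

    import subprocess

    # Reload unit files, then enable and start the placeholder unit,
    # mirroring daemon_reload=True enabled=True state=started above.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "netns-placeholder"],
                ["systemctl", "start", "netns-placeholder"]):
        subprocess.run(cmd, check=True)
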
Dec 03 01:38:22 compute-0 podman[328159]: 2025-12-03 01:38:22.338331231 +0000 UTC m=+0.100275327 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 01:38:22 compute-0 sudo[328119]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:22 compute-0 podman[328158]: 2025-12-03 01:38:22.352596602 +0000 UTC m=+0.118896110 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec 03 01:38:22 compute-0 podman[328157]: 2025-12-03 01:38:22.357416217 +0000 UTC m=+0.123336705 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:38:22 compute-0 ceph-mon[192821]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:22 compute-0 podman[328160]: 2025-12-03 01:38:22.377583494 +0000 UTC m=+0.141696341 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
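
Annotation: each podman health_status event above is emitted when the container's configured healthcheck (the healthcheck.test entry in config_data) runs. One way to poll the same state out-of-band, sketched with the podman CLI from Python; the helper name is mine, and it assumes podman 4.x, where inspect exposes .State.Health:

    import subprocess

    def health_status(name: str) -> str:
        # Equivalent of: podman inspect --format '{{.State.Health.Status}}' NAME
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    print(health_status("ovn_controller"))  # e.g. "healthy"
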
Dec 03 01:38:23 compute-0 sudo[328396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckzzmbbxbgcskqqjvoiwxodvukvequv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725902.6993558-429-2209472903398/AnsiballZ_file.py'
Dec 03 01:38:23 compute-0 sudo[328396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:23 compute-0 python3.9[328398]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:38:23 compute-0 sudo[328396]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:24 compute-0 ceph-mon[192821]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:24 compute-0 sudo[328562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgvimifozcbqeyijaqtwfzzmbbwjggfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725903.8757424-437-196488983808366/AnsiballZ_stat.py'
Dec 03 01:38:24 compute-0 sudo[328562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:24 compute-0 podman[328522]: 2025-12-03 01:38:24.550430714 +0000 UTC m=+0.173762272 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm)
Dec 03 01:38:24 compute-0 python3.9[328570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:24 compute-0 sudo[328562]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:25 compute-0 sshd-session[328541]: Invalid user cc from 173.249.50.59 port 52408
Dec 03 01:38:25 compute-0 sshd-session[328541]: Received disconnect from 173.249.50.59 port 52408:11: Bye Bye [preauth]
Dec 03 01:38:25 compute-0 sshd-session[328541]: Disconnected from invalid user cc 173.249.50.59 port 52408 [preauth]
Dec 03 01:38:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:25 compute-0 sudo[328692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijymgqkrtdhzsttgigutdhfjjpiracvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725903.8757424-437-196488983808366/AnsiballZ_copy.py'
Dec 03 01:38:25 compute-0 sudo[328692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:25 compute-0 python3.9[328694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725903.8757424-437-196488983808366/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:38:25 compute-0 sudo[328692]: pam_unix(sudo:session): session closed for user root
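
Annotation: the stat/copy pair above verifies the healthcheck file by SHA-1 (checksum_algorithm=sha1 in the stat call, and the copy task logs checksum=af9d0c1c...). Reproducing that fingerprint is a single hashlib call; a sketch, with the helper name mine and the path taken from the copy destination above:

    import hashlib

    def ansible_style_checksum(path: str) -> str:
        # Ansible's stat/copy "checksum" is the SHA-1 of the file contents.
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    print(ansible_style_checksum(
        "/var/lib/openstack/healthchecks/multipathd/healthcheck"))
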
Dec 03 01:38:26 compute-0 ceph-mon[192821]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:26 compute-0 sudo[328844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzbysulraxizjqggotkvslfezqrcdalp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725906.2798944-454-102998566832012/AnsiballZ_file.py'
Dec 03 01:38:26 compute-0 sudo[328844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:27 compute-0 python3.9[328846]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:38:27 compute-0 sudo[328844]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:27 compute-0 sudo[328996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbunrmkvspxafoonccrczwxpjaxlpaoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725907.432187-462-209297975865546/AnsiballZ_stat.py'
Dec 03 01:38:27 compute-0 sudo[328996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:28 compute-0 python3.9[328998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:28 compute-0 sudo[328996]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:38:28
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups']
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:38:28 compute-0 ceph-mon[192821]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:38:28 compute-0 sudo[329119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-curcpnjuadjtlussuqybaixypysbpzrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725907.432187-462-209297975865546/AnsiballZ_copy.py'
Dec 03 01:38:28 compute-0 sudo[329119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:29 compute-0 python3.9[329121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725907.432187-462-209297975865546/.source.json _original_basename=.mprbrqzm follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:29 compute-0 sudo[329119]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:29 compute-0 podman[158098]: time="2025-12-03T01:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec 03 01:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7282 "" "Go-http-client/1.1"
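
Annotation: the two podman[158098] access-log lines show a client polling the libpod REST API over the podman socket (the podman_exporter config further below mounts /run/podman/podman.sock and sets CONTAINER_HOST for exactly this). A minimal stdlib sketch issuing the same containers/json request; the rootful socket path is an assumption, the /v4.9.3 prefix is taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a Unix domain socket."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Names"], c["State"])
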
Dec 03 01:38:29 compute-0 sudo[329271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqrlgckyqwllagobocszdgtpfxikruqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725909.421286-477-27102936373919/AnsiballZ_file.py'
Dec 03 01:38:29 compute-0 sudo[329271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:30 compute-0 python3.9[329273]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:30 compute-0 sudo[329271]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:30 compute-0 ceph-mon[192821]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:31 compute-0 sudo[329441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-empxnnddwnqvykrputeemsknrlrtttor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725910.5483742-485-134085309566155/AnsiballZ_stat.py'
Dec 03 01:38:31 compute-0 sudo[329441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:31 compute-0 podman[329397]: 2025-12-03 01:38:31.144930735 +0000 UTC m=+0.144033617 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Dec 03 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:38:31 compute-0 sudo[329441]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:32 compute-0 sudo[329565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxbfexxggwrfzagcwozcttxbhfsqkkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725910.5483742-485-134085309566155/AnsiballZ_copy.py'
Dec 03 01:38:32 compute-0 sudo[329565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:32 compute-0 sudo[329565]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:32 compute-0 ceph-mon[192821]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:32 compute-0 podman[329568]: 2025-12-03 01:38:32.859122831 +0000 UTC m=+0.107307395 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 03 01:38:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:33 compute-0 sudo[329735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kimbpvnjmytlycpjkyvacltjpmjugkyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725913.2142594-502-162399094248382/AnsiballZ_container_config_data.py'
Dec 03 01:38:33 compute-0 sudo[329735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:34 compute-0 python3.9[329737]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 03 01:38:34 compute-0 sudo[329735]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:34 compute-0 ceph-mon[192821]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:35 compute-0 sudo[329887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dczjlkqjnkjtwtpuvgdbhooriaofrrsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725914.6188586-511-245573353429101/AnsiballZ_container_config_hash.py'
Dec 03 01:38:35 compute-0 sudo[329887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:35 compute-0 python3.9[329889]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:38:36 compute-0 sudo[329887]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:36 compute-0 ceph-mon[192821]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:36 compute-0 podman[329966]: 2025-12-03 01:38:36.865235522 +0000 UTC m=+0.117103140 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:38:37 compute-0 sudo[330063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbagfszimhnurksuwvxomilnqzjsggrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725916.4063966-520-255172268206025/AnsiballZ_podman_container_info.py'
Dec 03 01:38:37 compute-0 sudo[330063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:37 compute-0 python3.9[330065]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 03 01:38:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:37 compute-0 sudo[330063]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
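The pg_autoscaler lines above all follow one piece of arithmetic: a pool's `pg target` is its capacity ratio times its bias times the cluster's PG budget. Plugging the logged numbers back in, the budget here works out to 300, which would match a target of 100 PGs per OSD across 3 OSDs (an assumption about this cluster; only the multiplier itself is derived from the log). A minimal sketch that reproduces the logged targets:

```python
# Minimal sketch of the pg_autoscaler arithmetic visible in the log above.
# PG_BUDGET = 300 is inferred from the logged numbers; attributing it to
# 100 target PGs per OSD across 3 OSDs is an assumption about this cluster.
# Only the ratio * bias * budget step is reproduced, not Ceph's
# quantization and change-threshold logic.
PG_BUDGET = 300

pools = [
    # (pool name, capacity ratio from the log, bias from the log)
    (".mgr",               7.185749983720779e-06,  1.0),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0),
    (".rgw.root",          2.5436283128215145e-07, 1.0),
]

for name, ratio, bias in pools:
    print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
    # .mgr prints 0.0021557249951162337, matching the logged value
```

The "quantized to" step that follows each target applies Ceph's rounding and minimum-size rules, which is why targets far below 1 still land on 1, 16, or 32 rather than shrinking the pools.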
Dec 03 01:38:38 compute-0 ceph-mon[192821]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:39 compute-0 sudo[330240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzrxzpkwnzvnxdhszxcnbhnkkzzchft ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764725918.75339-533-47351656156630/AnsiballZ_edpm_container_manage.py'
Dec 03 01:38:39 compute-0 sudo[330240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:39 compute-0 python3[330242]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:38:40 compute-0 ceph-mon[192821]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:41 compute-0 podman[330253]: 2025-12-03 01:38:41.594750381 +0000 UTC m=+1.669979056 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 03 01:38:41 compute-0 podman[330307]: 2025-12-03 01:38:41.842933002 +0000 UTC m=+0.089066773 container create df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:38:41 compute-0 podman[330307]: 2025-12-03 01:38:41.797400363 +0000 UTC m=+0.043534184 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 03 01:38:41 compute-0 python3[330242]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
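This PODMAN-CONTAINER-DEBUG line shows the `config_data` dict and the `podman create` command it expands to side by side. A rough sketch of that mapping, covering only the keys present in this log (the real edpm_ansible module also emits the `--conmon-pidfile`, `--label`, and `--log-driver`/`--log-level` options seen above, among others):

```python
# Rough sketch of how a config_data dict like the one logged above expands
# into a `podman create` argv. Only the keys visible in this log are
# handled; flags such as --conmon-pidfile, --label and --log-driver that
# the real module adds are omitted.
def podman_create_argv(name: str, cfg: dict) -> list[str]:
    argv = ["podman", "create", "--name", name]
    for key, val in cfg.get("environment", {}).items():
        argv += ["--env", f"{key}={val}"]
    if hc := cfg.get("healthcheck"):
        argv += ["--healthcheck-command", hc["test"]]
    if cfg.get("net") == "host":
        argv += ["--network", "host"]
    if cfg.get("privileged"):
        argv += ["--privileged=True"]  # literal form used in the debug line
    for vol in cfg.get("volumes", []):
        argv += ["--volume", vol]
    argv.append(cfg["image"])
    return argv
```

Fed the multipathd `config_data` above, this reproduces the `--env`, `--network host`, `--privileged=True` and `--volume` portions of the logged command.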
Dec 03 01:38:42 compute-0 sudo[330240]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:42 compute-0 ceph-mon[192821]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:43 compute-0 sudo[330492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcfgcgxbzctzsdljbsldwmbczmxxkijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725922.4493432-541-85278095322334/AnsiballZ_stat.py'
Dec 03 01:38:43 compute-0 sudo[330492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:43 compute-0 python3.9[330494]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:38:43 compute-0 sudo[330492]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:44 compute-0 sudo[330646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcktjjqjhntuhrhhenfvvuwuagsvecll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725923.7577386-550-10295372460280/AnsiballZ_file.py'
Dec 03 01:38:44 compute-0 sudo[330646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:44 compute-0 python3.9[330648]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:44 compute-0 sudo[330646]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:44 compute-0 ceph-mon[192821]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:45 compute-0 sudo[330722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfmutbpkujfjwoahssdgjutenykreyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725923.7577386-550-10295372460280/AnsiballZ_stat.py'
Dec 03 01:38:45 compute-0 sudo[330722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:45 compute-0 python3.9[330724]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:38:45 compute-0 sudo[330722]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:46 compute-0 sudo[330873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipwnzsixeonjdndaizaosetngvvzuwml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725925.4086745-550-194272009794186/AnsiballZ_copy.py'
Dec 03 01:38:46 compute-0 sudo[330873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:46 compute-0 python3.9[330875]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764725925.4086745-550-194272009794186/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:46 compute-0 sudo[330873]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:46 compute-0 ceph-mon[192821]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:46 compute-0 sudo[330949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abkpfzzbbwnsrpzxgnwezviyibqzhnhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725925.4086745-550-194272009794186/AnsiballZ_systemd.py'
Dec 03 01:38:46 compute-0 sudo[330949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:47 compute-0 python3.9[330951]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:38:47 compute-0 systemd[1]: Reloading.
Dec 03 01:38:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:47 compute-0 systemd-sysv-generator[330980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:38:47 compute-0 systemd-rc-local-generator[330975]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:38:47 compute-0 sudo[330949]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:48 compute-0 sudo[331062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmtmxqiasqmibhenzsgrcmnjxqgyeyaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725925.4086745-550-194272009794186/AnsiballZ_systemd.py'
Dec 03 01:38:48 compute-0 sudo[331062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:48 compute-0 python3.9[331064]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:38:48 compute-0 ceph-mon[192821]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:48 compute-0 sshd-session[331065]: Invalid user openbravo from 34.66.72.251 port 32908
Dec 03 01:38:48 compute-0 systemd[1]: Reloading.
Dec 03 01:38:48 compute-0 sshd-session[331065]: Received disconnect from 34.66.72.251 port 32908:11: Bye Bye [preauth]
Dec 03 01:38:48 compute-0 sshd-session[331065]: Disconnected from invalid user openbravo 34.66.72.251 port 32908 [preauth]
Dec 03 01:38:48 compute-0 systemd-sysv-generator[331097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:38:48 compute-0 systemd-rc-local-generator[331093]: /etc/rc.d/rc.local is not marked executable, skipping.
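The two `Reloading.` blocks bracket the unit install: the play copies `/etc/systemd/system/edpm_multipathd.service`, runs a daemon reload, then enables and restarts the unit; the sysv-generator and rc-local-generator messages are routine output of systemd re-running its generators on each reload. The sequence, sketched as plain `systemctl` calls (an approximation of what the ansible-systemd tasks above do):

```python
# Sketch of the unit activation sequence visible in the log: reload systemd
# after installing /etc/systemd/system/edpm_multipathd.service, then enable
# and restart the unit. Approximates the ansible-systemd tasks above.
import subprocess

UNIT = "edpm_multipathd.service"

subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", UNIT], check=True)
subprocess.run(["systemctl", "restart", UNIT], check=True)
```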
Dec 03 01:38:49 compute-0 systemd[1]: Starting multipathd container...
Dec 03 01:38:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:49 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.
Dec 03 01:38:49 compute-0 podman[331105]: 2025-12-03 01:38:49.506323755 +0000 UTC m=+0.260316873 container init df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 01:38:49 compute-0 multipathd[331119]: + sudo -E kolla_set_configs
Dec 03 01:38:49 compute-0 podman[331105]: 2025-12-03 01:38:49.552906143 +0000 UTC m=+0.306899271 container start df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 01:38:49 compute-0 podman[331105]: multipathd
Dec 03 01:38:49 compute-0 sudo[331126]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:38:49 compute-0 sudo[331126]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:38:49 compute-0 sudo[331126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 03 01:38:49 compute-0 systemd[1]: Started multipathd container.
Dec 03 01:38:49 compute-0 sudo[331062]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Validating config file
Dec 03 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Writing out command to execute
Dec 03 01:38:49 compute-0 sudo[331126]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:49 compute-0 multipathd[331119]: ++ cat /run_command
Dec 03 01:38:49 compute-0 multipathd[331119]: + CMD='/usr/sbin/multipathd -d'
Dec 03 01:38:49 compute-0 multipathd[331119]: + ARGS=
Dec 03 01:38:49 compute-0 multipathd[331119]: + sudo kolla_copy_cacerts
Dec 03 01:38:49 compute-0 podman[331127]: 2025-12-03 01:38:49.661881644 +0000 UTC m=+0.085245426 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:38:49 compute-0 sudo[331149]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:38:49 compute-0 sudo[331149]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:38:49 compute-0 sudo[331149]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 03 01:38:49 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-d9d42129a6f5ed5.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:38:49 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-d9d42129a6f5ed5.service: Failed with result 'exit-code'.
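The failing `df055b...-d9d42129a6f5ed5.service` unit is the transient service systemd spawns for the podman healthcheck timer: the `/openstack/healthcheck` probe exited non-zero while multipathd was still initializing, matching the `health_status=starting, health_failing_streak=1` event a moment earlier. The same probe can be driven by hand; a sketch, assuming `podman healthcheck run` exits 0 when the check passes and non-zero otherwise:

```python
# Sketch: poll a container's health the way the systemd timer does.
# Assumption: `podman healthcheck run` exits 0 on a passing probe and
# non-zero otherwise.
import subprocess
import time

def wait_healthy(name: str, attempts: int = 10, delay: float = 3.0) -> bool:
    for _ in range(attempts):
        if subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0:
            return True
        time.sleep(delay)
    return False

print("multipathd healthy:", wait_healthy("multipathd"))
```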
Dec 03 01:38:49 compute-0 sudo[331149]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:49 compute-0 multipathd[331119]: + [[ ! -n '' ]]
Dec 03 01:38:49 compute-0 multipathd[331119]: + . kolla_extend_start
Dec 03 01:38:49 compute-0 multipathd[331119]: Running command: '/usr/sbin/multipathd -d'
Dec 03 01:38:49 compute-0 multipathd[331119]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 03 01:38:49 compute-0 multipathd[331119]: + umask 0022
Dec 03 01:38:49 compute-0 multipathd[331119]: + exec /usr/sbin/multipathd -d
Dec 03 01:38:49 compute-0 multipathd[331119]: 4769.378834 | --------start up--------
Dec 03 01:38:49 compute-0 multipathd[331119]: 4769.378870 | read /etc/multipath.conf
Dec 03 01:38:49 compute-0 multipathd[331119]: 4769.391541 | path checkers start up
Dec 03 01:38:50 compute-0 ceph-mon[192821]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:51 compute-0 python3.9[331308]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:38:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:52 compute-0 sudo[331503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqspeqvzwvnbtjrmukfhpmdbyggpxaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725931.9244945-586-158652629476284/AnsiballZ_command.py'
Dec 03 01:38:52 compute-0 sudo[331503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:52 compute-0 podman[331434]: 2025-12-03 01:38:52.514241118 +0000 UTC m=+0.096977175 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:38:52 compute-0 podman[331435]: 2025-12-03 01:38:52.52960155 +0000 UTC m=+0.101808691 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Dec 03 01:38:52 compute-0 podman[331436]: 2025-12-03 01:38:52.54742295 +0000 UTC m=+0.118292203 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:38:52 compute-0 podman[331438]: 2025-12-03 01:38:52.561121075 +0000 UTC m=+0.118442118 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:38:52 compute-0 python3.9[331535]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
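This `podman ps --filter volume=/etc/multipath.conf --format {{.Names}}` call ties the preceding steps together: after checking for the `.multipath_restart_required` marker, the play asks podman which containers bind-mount the rendered config so it knows what to restart. The same query from Python, as a sketch:

```python
# Sketch: list containers that bind-mount a given host path, mirroring the
# `podman ps --filter volume=... --format {{.Names}}` call in the log.
import subprocess

def containers_mounting(path: str) -> list[str]:
    result = subprocess.run(
        ["podman", "ps", "--filter", f"volume={path}",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.split()

print(containers_mounting("/etc/multipath.conf"))  # e.g. ['multipathd']
```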
Dec 03 01:38:52 compute-0 ceph-mon[192821]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:52 compute-0 sudo[331503]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:53 compute-0 sudo[331707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeolhowrlczousgiwouxmwgodjxnxsgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725933.124113-594-65819234905881/AnsiballZ_systemd.py'
Dec 03 01:38:53 compute-0 sudo[331707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:54 compute-0 python3.9[331709]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:38:54 compute-0 systemd[1]: Stopping multipathd container...
Dec 03 01:38:54 compute-0 multipathd[331119]: 4773.882690 | exit (signal)
Dec 03 01:38:54 compute-0 multipathd[331119]: 4773.882906 | --------shut down-------
Dec 03 01:38:54 compute-0 systemd[1]: libpod-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec 03 01:38:54 compute-0 podman[331713]: 2025-12-03 01:38:54.256069671 +0000 UTC m=+0.127784730 container died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:38:54 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-d9d42129a6f5ed5.timer: Deactivated successfully.
Dec 03 01:38:54 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.
Dec 03 01:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-userdata-shm.mount: Deactivated successfully.
Dec 03 01:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab-merged.mount: Deactivated successfully.
Dec 03 01:38:54 compute-0 podman[331713]: 2025-12-03 01:38:54.36178919 +0000 UTC m=+0.233504199 container cleanup df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 01:38:54 compute-0 podman[331713]: multipathd
Dec 03 01:38:54 compute-0 podman[331742]: multipathd
Dec 03 01:38:54 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 03 01:38:54 compute-0 systemd[1]: Stopped multipathd container.
Dec 03 01:38:54 compute-0 systemd[1]: Starting multipathd container...
Dec 03 01:38:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 03 01:38:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.
Dec 03 01:38:54 compute-0 podman[331752]: 2025-12-03 01:38:54.653093392 +0000 UTC m=+0.173490874 container init df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 03 01:38:54 compute-0 multipathd[331766]: + sudo -E kolla_set_configs
Dec 03 01:38:54 compute-0 podman[331752]: 2025-12-03 01:38:54.700418831 +0000 UTC m=+0.220816293 container start df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 03 01:38:54 compute-0 podman[331752]: multipathd
Dec 03 01:38:54 compute-0 sudo[331782]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:38:54 compute-0 systemd[1]: Started multipathd container.
Dec 03 01:38:54 compute-0 sudo[331782]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:38:54 compute-0 sudo[331782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 03 01:38:54 compute-0 ceph-mon[192821]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:54 compute-0 sudo[331707]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Validating config file
Dec 03 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Writing out command to execute
Dec 03 01:38:54 compute-0 sudo[331782]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:54 compute-0 podman[331769]: 2025-12-03 01:38:54.777778954 +0000 UTC m=+0.146011962 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 03 01:38:54 compute-0 multipathd[331766]: ++ cat /run_command
Dec 03 01:38:54 compute-0 multipathd[331766]: + CMD='/usr/sbin/multipathd -d'
Dec 03 01:38:54 compute-0 multipathd[331766]: + ARGS=
Dec 03 01:38:54 compute-0 multipathd[331766]: + sudo kolla_copy_cacerts
Dec 03 01:38:54 compute-0 sudo[331810]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:38:54 compute-0 sudo[331810]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:38:54 compute-0 sudo[331810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 03 01:38:54 compute-0 sudo[331810]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:54 compute-0 multipathd[331766]: Running command: '/usr/sbin/multipathd -d'
Dec 03 01:38:54 compute-0 multipathd[331766]: + [[ ! -n '' ]]
Dec 03 01:38:54 compute-0 multipathd[331766]: + . kolla_extend_start
Dec 03 01:38:54 compute-0 multipathd[331766]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 03 01:38:54 compute-0 multipathd[331766]: + umask 0022
Dec 03 01:38:54 compute-0 multipathd[331766]: + exec /usr/sbin/multipathd -d
Dec 03 01:38:54 compute-0 multipathd[331766]: 4774.516844 | --------start up--------
Dec 03 01:38:54 compute-0 multipathd[331766]: 4774.516878 | read /etc/multipath.conf
Dec 03 01:38:54 compute-0 podman[331783]: 2025-12-03 01:38:54.859643893 +0000 UTC m=+0.127504032 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 03 01:38:54 compute-0 multipathd[331766]: 4774.532488 | path checkers start up
Dec 03 01:38:54 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-3a5a14b6f5856a05.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:38:54 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-3a5a14b6f5856a05.service: Failed with result 'exit-code'.
Dec 03 01:38:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:55 compute-0 sudo[331973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isnonyiobukmdiiyxnmlvdcgccxotgff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725935.0970635-602-155731124067512/AnsiballZ_file.py'
Dec 03 01:38:55 compute-0 sudo[331973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:55 compute-0 python3.9[331975]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:38:55 compute-0 sudo[331973]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:56 compute-0 ceph-mon[192821]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:57 compute-0 sudo[332125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzqdnkebjowahjvxefmhmvttccnmpzsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725936.458209-614-115837299306428/AnsiballZ_file.py'
Dec 03 01:38:57 compute-0 sudo[332125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:57 compute-0 python3.9[332127]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 03 01:38:57 compute-0 sudo[332125]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:38:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:58 compute-0 sudo[332277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqwxjysinwvkqtuazbocanyctylqfvql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725937.5591881-622-168178510907416/AnsiballZ_modprobe.py'
Dec 03 01:38:58 compute-0 sudo[332277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:58 compute-0 python3.9[332279]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 03 01:38:58 compute-0 kernel: Key type psk registered
Dec 03 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:38:58 compute-0 sudo[332277]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:58 compute-0 ceph-mon[192821]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:59 compute-0 sudo[332440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkezkojwgiuzzbgpestnuglpttqfcdvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725938.749532-630-18200752187020/AnsiballZ_stat.py'
Dec 03 01:38:59 compute-0 sudo[332440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:38:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:38:59 compute-0 python3.9[332442]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:38:59 compute-0 sudo[332440]: pam_unix(sudo:session): session closed for user root
Dec 03 01:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:38:59.596 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:38:59.598 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:38:59.598 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:38:59 compute-0 podman[158098]: time="2025-12-03T01:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Dec 03 01:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7691 "" "Go-http-client/1.1"
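The two GET requests above are polls of podman's libpod REST API over its unix socket; the prometheus-podman-exporter configured later in this log points at unix:///run/podman/podman.sock, so a manual reproduction would plausibly be:

    # 'd' is a placeholder hostname required by curl; the unix socket carries the request
    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true'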
Dec 03 01:39:00 compute-0 sudo[332563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnvfumokcradcqgvwjrammhdpwrjqexr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725938.749532-630-18200752187020/AnsiballZ_copy.py'
Dec 03 01:39:00 compute-0 sudo[332563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:00 compute-0 python3.9[332565]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725938.749532-630-18200752187020/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:00 compute-0 sudo[332563]: pam_unix(sudo:session): session closed for user root
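The copy task installs /etc/modules-load.d/nvme-fabrics.conf rendered from module-load.conf.j2; only its sha1 is logged, so the content below is a presumed rendering (systemd-modules-load expects one module name per line):

    # /etc/modules-load.d/nvme-fabrics.conf -- presumed content, not shown in the log
    nvme-fabrics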
Dec 03 01:39:00 compute-0 ceph-mon[192821]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:39:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:01 compute-0 sudo[332728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdnexqwbslpwpajilxcwbfqqeqfvknmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725940.9404962-646-158860445042666/AnsiballZ_lineinfile.py'
Dec 03 01:39:01 compute-0 sudo[332728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:01 compute-0 podman[332689]: 2025-12-03 01:39:01.592823009 +0000 UTC m=+0.142396400 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
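health_status events like the one above are emitted each time podman runs a container's configured healthcheck (here '/openstack/healthcheck kepler'). The same check can be triggered by hand; exit status 0 means healthy:

    podman healthcheck run kepler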
Dec 03 01:39:01 compute-0 python3.9[332734]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:01 compute-0 sudo[332728]: pam_unix(sudo:session): session closed for user root
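The lineinfile task guarantees an exact 'nvme-fabrics' line in /etc/modules, creating the file with mode 0644 if needed. An idempotent shell sketch of the same effect:

    # append only when an exact-match line is not already present
    grep -qxF 'nvme-fabrics' /etc/modules 2>/dev/null || echo 'nvme-fabrics' >> /etc/modules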
Dec 03 01:39:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:02 compute-0 ceph-mon[192821]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:03 compute-0 sudo[332899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqpvekkpfouuokspteesorrboopedyzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725942.1230717-654-116470183573961/AnsiballZ_systemd.py'
Dec 03 01:39:03 compute-0 sudo[332899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:03 compute-0 podman[332859]: 2025-12-03 01:39:03.690940821 +0000 UTC m=+0.116861304 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 03 01:39:04 compute-0 python3.9[332903]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:39:04 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 03 01:39:04 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 03 01:39:04 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 03 01:39:04 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 03 01:39:04 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 03 01:39:04 compute-0 sudo[332899]: pam_unix(sudo:session): session closed for user root
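The stop/start pair above is the ansible.builtin.systemd task restarting the static module loader so the new drop-in takes effect immediately rather than at next boot; the direct equivalent is simply:

    systemctl restart systemd-modules-load.service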
Dec 03 01:39:04 compute-0 ceph-mon[192821]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:05 compute-0 sudo[333057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwmkukzmzemskghlfrhtkxhbvztmikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725944.6523526-662-228512429788689/AnsiballZ_dnf.py'
Dec 03 01:39:05 compute-0 sudo[333057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:05 compute-0 python3.9[333059]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
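This dnf module call installs nvme-cli with stock options (install_weak_deps=True, no version pinning). A manual equivalent, with the post-install check being an added sanity step rather than anything the play runs:

    dnf -y install nvme-cli
    # sanity check from the freshly installed package
    nvme version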
Dec 03 01:39:06 compute-0 ceph-mon[192821]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:07 compute-0 podman[333064]: 2025-12-03 01:39:07.864991687 +0000 UTC m=+0.119307662 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:39:08 compute-0 systemd[1]: Reloading.
Dec 03 01:39:08 compute-0 systemd-sysv-generator[333119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:39:08 compute-0 systemd-rc-local-generator[333112]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:39:08 compute-0 systemd[1]: Reloading.
Dec 03 01:39:08 compute-0 systemd-rc-local-generator[333146]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:39:08 compute-0 systemd-sysv-generator[333150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:39:08 compute-0 ceph-mon[192821]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:09 compute-0 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 03 01:39:09 compute-0 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 03 01:39:09 compute-0 lvm[333197]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 03 01:39:09 compute-0 lvm[333197]: VG ceph_vg1 finished
Dec 03 01:39:09 compute-0 lvm[333198]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 03 01:39:09 compute-0 lvm[333198]: VG ceph_vg2 finished
Dec 03 01:39:09 compute-0 lvm[333202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 03 01:39:09 compute-0 lvm[333202]: VG ceph_vg0 finished
Dec 03 01:39:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 03 01:39:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 03 01:39:09 compute-0 systemd[1]: Reloading.
Dec 03 01:39:09 compute-0 systemd-sysv-generator[333255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:39:09 compute-0 systemd-rc-local-generator[333249]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:39:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 03 01:39:10 compute-0 sudo[333057]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:10 compute-0 ceph-mon[192821]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:11 compute-0 sudo[334344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dujfjyhgurbanncpdwfokcjirbhqdrin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725950.6181111-670-246687364623008/AnsiballZ_systemd_service.py'
Dec 03 01:39:11 compute-0 sudo[334344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:11 compute-0 python3.9[334361]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:39:11 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 03 01:39:11 compute-0 iscsid[321548]: iscsid shutting down.
Dec 03 01:39:11 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 03 01:39:11 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 03 01:39:11 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 03 01:39:11 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 03 01:39:11 compute-0 systemd[1]: Started Open-iSCSI.
Dec 03 01:39:11 compute-0 sudo[334344]: pam_unix(sudo:session): session closed for user root
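The restart of Open-iSCSI above also shows why the one-shot iscsi.service was skipped: its ConditionPathExists=!/etc/iscsi/initiatorname.iscsi only fires when no initiator name exists yet, and this host already has one. By hand:

    systemctl restart iscsid.service
    # the presence of this file is what caused the one-time unit to be skipped
    cat /etc/iscsi/initiatorname.iscsi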
Dec 03 01:39:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 03 01:39:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 03 01:39:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.607s CPU time.
Dec 03 01:39:11 compute-0 systemd[1]: run-r278b756dfbb642aea5910ce2f28e2a42.service: Deactivated successfully.
Dec 03 01:39:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:12 compute-0 python3.9[334696]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:39:12 compute-0 ceph-mon[192821]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:13 compute-0 sudo[334850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodihzrgthjpxswgoicrwsorgtjtjlba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725953.385594-688-4566476009015/AnsiballZ_file.py'
Dec 03 01:39:13 compute-0 sudo[334850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:14 compute-0 python3.9[334852]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:14 compute-0 sudo[334850]: pam_unix(sudo:session): session closed for user root
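The file task with state=touch simply ensures /etc/ssh/ssh_known_hosts exists with mode 0644 (a later step presumably populates it). Shell equivalent:

    touch /etc/ssh/ssh_known_hosts
    chmod 0644 /etc/ssh/ssh_known_hosts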
Dec 03 01:39:14 compute-0 ceph-mon[192821]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:15 compute-0 sudo[335004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bffsfvxvotdliqgqyaxbayvxszcilswb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725954.789677-699-221252953962671/AnsiballZ_systemd_service.py'
Dec 03 01:39:15 compute-0 sudo[335004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:15 compute-0 python3.9[335006]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:39:15 compute-0 systemd[1]: Reloading.
Dec 03 01:39:15 compute-0 systemd-sysv-generator[335035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:39:15 compute-0 systemd-rc-local-generator[335029]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:39:16 compute-0 sudo[335004]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:16 compute-0 ceph-mon[192821]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:17 compute-0 python3.9[335191]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:39:17 compute-0 network[335208]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:39:17 compute-0 network[335209]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:39:17 compute-0 network[335210]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:39:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:18 compute-0 ceph-mon[192821]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:19 compute-0 sudo[335248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:19 compute-0 sudo[335248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:19 compute-0 sudo[335248]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:19 compute-0 sudo[335277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:39:19 compute-0 sudo[335277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:19 compute-0 sudo[335277]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:19 compute-0 sudo[335306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:19 compute-0 sudo[335306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:19 compute-0 sudo[335306]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:20 compute-0 sudo[335335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:39:20 compute-0 sudo[335335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:20 compute-0 sudo[335335]: pam_unix(sudo:session): session closed for user root
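cephadm's gather-facts subcommand collects host inventory (kernel, memory, NICs, disks) and prints it as JSON; the orchestrator invokes a fsid-scoped, hash-suffixed copy of the binary, but running the packaged tool directly as root is the presumed equivalent:

    cephadm gather-facts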
Dec 03 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:39:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a3f4c0b8-2f62-4018-b5c4-5475fbd349e0 does not exist
Dec 03 01:39:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a10b34f-030f-4c18-aedf-89e2d8a80df9 does not exist
Dec 03 01:39:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5a034a2d-b877-4b8e-a505-de0cf2ad74e2 does not exist
Dec 03 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
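The audited mon_command calls above map onto plain ceph CLI invocations; assuming an admin keyring on the host, the same sequence by hand would be roughly:

    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph auth get client.bootstrap-osd
    # the states argument filters the tree to destroyed OSDs, as in the audited command
    ceph osd tree destroyed -f json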
Dec 03 01:39:21 compute-0 sudo[335416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:21 compute-0 sudo[335416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:21 compute-0 sudo[335416]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:21 compute-0 sudo[335444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:39:21 compute-0 sudo[335444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:21 compute-0 sudo[335444]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:21 compute-0 sudo[335473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:21 compute-0 sudo[335473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:21 compute-0 sudo[335473]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:21 compute-0 sudo[335503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:39:21 compute-0 sudo[335503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
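Stripped of the cephadm container wrapper, the operative command from the line above is ceph-volume's batch mode against three pre-created LVs; --no-systemd is passed because cephadm manages the OSD units itself rather than letting ceph-volume enable them:

    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --yes --no-systemd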
Dec 03 01:39:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:21 compute-0 podman[335583]: 2025-12-03 01:39:21.934956362 +0000 UTC m=+0.107580823 container create 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 01:39:21 compute-0 podman[335583]: 2025-12-03 01:39:21.885654987 +0000 UTC m=+0.058279518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:39:22 compute-0 systemd[1]: Started libpod-conmon-1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100.scope.
Dec 03 01:39:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:39:22 compute-0 podman[335583]: 2025-12-03 01:39:22.056971599 +0000 UTC m=+0.229596070 container init 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:39:22 compute-0 podman[335583]: 2025-12-03 01:39:22.074510041 +0000 UTC m=+0.247134492 container start 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 01:39:22 compute-0 podman[335583]: 2025-12-03 01:39:22.079207403 +0000 UTC m=+0.251831964 container attach 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:39:22 compute-0 determined_snyder[335604]: 167 167
Dec 03 01:39:22 compute-0 systemd[1]: libpod-1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100.scope: Deactivated successfully.
Dec 03 01:39:22 compute-0 podman[335612]: 2025-12-03 01:39:22.164691694 +0000 UTC m=+0.052346661 container died 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-aac5f71a4939bf973946eb3e56aa2320191d3b7ecc5574d5760ceee14cf976ae-merged.mount: Deactivated successfully.
Dec 03 01:39:22 compute-0 podman[335612]: 2025-12-03 01:39:22.22580972 +0000 UTC m=+0.113464687 container remove 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:39:22 compute-0 systemd[1]: libpod-conmon-1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100.scope: Deactivated successfully.
Dec 03 01:39:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.538322138 +0000 UTC m=+0.099845556 container create e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.497286925 +0000 UTC m=+0.058810423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:39:22 compute-0 systemd[1]: Started libpod-conmon-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope.
Dec 03 01:39:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.719336272 +0000 UTC m=+0.280859670 container init e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:39:22 compute-0 podman[335680]: 2025-12-03 01:39:22.734689053 +0000 UTC m=+0.119298702 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.742488932 +0000 UTC m=+0.304012310 container start e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.748707217 +0000 UTC m=+0.310230615 container attach e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:39:22 compute-0 podman[335681]: 2025-12-03 01:39:22.756882126 +0000 UTC m=+0.143216453 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Dec 03 01:39:22 compute-0 podman[335684]: 2025-12-03 01:39:22.75913628 +0000 UTC m=+0.138853331 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 01:39:22 compute-0 podman[335685]: 2025-12-03 01:39:22.817404266 +0000 UTC m=+0.177601639 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 01:39:22 compute-0 ceph-mon[192821]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:23 compute-0 sudo[335900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnubelvqvttlzphglknjiytykbpyboyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725962.9067974-718-115640153269464/AnsiballZ_systemd_service.py'
Dec 03 01:39:23 compute-0 sudo[335900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:23 compute-0 python3.9[335905]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:23 compute-0 sudo[335900]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:23 compute-0 dreamy_poincare[335704]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:39:23 compute-0 dreamy_poincare[335704]: --> relative data size: 1.0
Dec 03 01:39:23 compute-0 dreamy_poincare[335704]: --> All data devices are unavailable
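"All data devices are unavailable" from ceph-volume batch typically means the LVs already carry prepared OSDs (i.e. this is a re-run), not that the devices are missing; the lvm list call the orchestrator issues moments later is the standard way to confirm:

    # LVs that already contain OSD data show up here rather than as batch-eligible devices
    ceph-volume lvm list --format json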
Dec 03 01:39:24 compute-0 systemd[1]: libpod-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope: Deactivated successfully.
Dec 03 01:39:24 compute-0 podman[335643]: 2025-12-03 01:39:24.002208775 +0000 UTC m=+1.563732173 container died e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:39:24 compute-0 systemd[1]: libpod-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope: Consumed 1.186s CPU time.
Dec 03 01:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203-merged.mount: Deactivated successfully.
Dec 03 01:39:24 compute-0 podman[335643]: 2025-12-03 01:39:24.113590863 +0000 UTC m=+1.675114251 container remove e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:39:24 compute-0 systemd[1]: libpod-conmon-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope: Deactivated successfully.
Dec 03 01:39:24 compute-0 sudo[335503]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:24 compute-0 sudo[335998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:24 compute-0 sudo[335998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:24 compute-0 sudo[335998]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:24 compute-0 sudo[336048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:39:24 compute-0 sudo[336048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:24 compute-0 sudo[336048]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:24 compute-0 sudo[336091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:24 compute-0 sudo[336091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:24 compute-0 sudo[336091]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:24 compute-0 sudo[336179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euzlikzcvrcispgmbonayiziiptemjmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725964.1194928-718-226137630820092/AnsiballZ_systemd_service.py'
Dec 03 01:39:24 compute-0 sudo[336179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:24 compute-0 sudo[336140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:39:24 compute-0 sudo[336140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:25 compute-0 ceph-mon[192821]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:25 compute-0 python3.9[336183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:25 compute-0 sudo[336179]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:25 compute-0 podman[336218]: 2025-12-03 01:39:25.218575119 +0000 UTC m=+0.095172924 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.265335053 +0000 UTC m=+0.075518773 container create f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:39:25 compute-0 podman[336222]: 2025-12-03 01:39:25.270864238 +0000 UTC m=+0.141971709 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.23568913 +0000 UTC m=+0.045872930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:39:25 compute-0 systemd[1]: Started libpod-conmon-f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad.scope.
Dec 03 01:39:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.405281093 +0000 UTC m=+0.215464903 container init f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.421043346 +0000 UTC m=+0.231227096 container start f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.42794496 +0000 UTC m=+0.238128720 container attach f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:39:25 compute-0 upbeat_grothendieck[336303]: 167 167
Dec 03 01:39:25 compute-0 systemd[1]: libpod-f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad.scope: Deactivated successfully.
Dec 03 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.432694033 +0000 UTC m=+0.242877783 container died f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:39:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3defef656fe750f4b3d2162334d665f6e25f05ffa591d5df87d88d7ea960eb55-merged.mount: Deactivated successfully.
Dec 03 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.509102899 +0000 UTC m=+0.319286639 container remove f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:39:25 compute-0 systemd[1]: libpod-conmon-f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad.scope: Deactivated successfully.
Dec 03 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.76296063 +0000 UTC m=+0.068134785 container create 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:39:25 compute-0 systemd[1]: Started libpod-conmon-1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50.scope.
Dec 03 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.736229579 +0000 UTC m=+0.041403824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:39:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.912409826 +0000 UTC m=+0.217583981 container init 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.924806604 +0000 UTC m=+0.229980769 container start 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.929192298 +0000 UTC m=+0.234366463 container attach 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:39:25 compute-0 sudo[336466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fclgmtpefmklyybhjbxqosxbvynwqhws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725965.3933716-718-278855780665078/AnsiballZ_systemd_service.py'
Dec 03 01:39:25 compute-0 sudo[336466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:26 compute-0 python3.9[336468]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:26 compute-0 sudo[336466]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]: {
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:     "0": [
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:         {
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "devices": [
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "/dev/loop3"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             ],
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_name": "ceph_lv0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_size": "21470642176",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "name": "ceph_lv0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "tags": {
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cluster_name": "ceph",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.crush_device_class": "",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.encrypted": "0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osd_id": "0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.type": "block",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.vdo": "0"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             },
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "type": "block",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "vg_name": "ceph_vg0"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:         }
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:     ],
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:     "1": [
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:         {
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "devices": [
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "/dev/loop4"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             ],
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_name": "ceph_lv1",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_size": "21470642176",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "name": "ceph_lv1",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "tags": {
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cluster_name": "ceph",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.crush_device_class": "",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.encrypted": "0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osd_id": "1",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.type": "block",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.vdo": "0"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             },
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "type": "block",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "vg_name": "ceph_vg1"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:         }
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:     ],
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:     "2": [
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:         {
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "devices": [
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "/dev/loop5"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             ],
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_name": "ceph_lv2",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_size": "21470642176",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "name": "ceph_lv2",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "tags": {
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.cluster_name": "ceph",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.crush_device_class": "",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.encrypted": "0",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osd_id": "2",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.type": "block",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:                 "ceph.vdo": "0"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             },
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "type": "block",
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:             "vg_name": "ceph_vg2"
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:         }
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]:     ]
Dec 03 01:39:26 compute-0 pedantic_swanson[336433]: }
Dec 03 01:39:26 compute-0 systemd[1]: libpod-1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50.scope: Deactivated successfully.
Dec 03 01:39:26 compute-0 podman[336394]: 2025-12-03 01:39:26.847154721 +0000 UTC m=+1.152328916 container died 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:39:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14-merged.mount: Deactivated successfully.
Dec 03 01:39:26 compute-0 podman[336394]: 2025-12-03 01:39:26.934924716 +0000 UTC m=+1.240098881 container remove 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:39:26 compute-0 systemd[1]: libpod-conmon-1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50.scope: Deactivated successfully.
Dec 03 01:39:26 compute-0 sudo[336140]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:27 compute-0 ceph-mon[192821]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:27 compute-0 sudo[336586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:27 compute-0 sudo[336586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:27 compute-0 sudo[336586]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:27 compute-0 sudo[336683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphmcbwjjihvggidzprzkhgdflsoxgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725966.6616704-718-210194843361095/AnsiballZ_systemd_service.py'
Dec 03 01:39:27 compute-0 sudo[336683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:27 compute-0 sudo[336641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:39:27 compute-0 sudo[336641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:27 compute-0 sudo[336641]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:27 compute-0 sshd-session[336436]: Invalid user usuario2 from 103.146.202.174 port 40502
Dec 03 01:39:27 compute-0 sudo[336688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:27 compute-0 sudo[336688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:27 compute-0 sudo[336688]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:27 compute-0 sudo[336713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:39:27 compute-0 sudo[336713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:27 compute-0 python3.9[336686]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:27 compute-0 sshd-session[336436]: Received disconnect from 103.146.202.174 port 40502:11: Bye Bye [preauth]
Dec 03 01:39:27 compute-0 sshd-session[336436]: Disconnected from invalid user usuario2 103.146.202.174 port 40502 [preauth]
Dec 03 01:39:27 compute-0 sudo[336683]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.038912794 +0000 UTC m=+0.082805636 container create 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.004785316 +0000 UTC m=+0.048678188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:39:28 compute-0 systemd[1]: Started libpod-conmon-5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4.scope.
Dec 03 01:39:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.17975615 +0000 UTC m=+0.223648962 container init 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.197670333 +0000 UTC m=+0.241563125 container start 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.202267333 +0000 UTC m=+0.246160125 container attach 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:39:28 compute-0 heuristic_brattain[336872]: 167 167
Dec 03 01:39:28 compute-0 systemd[1]: libpod-5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4.scope: Deactivated successfully.
Dec 03 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.21144509 +0000 UTC m=+0.255337922 container died 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-55311c7bcc74168cc861d57dea7dcaa00155f614505dfc0cdc58a57c7884b687-merged.mount: Deactivated successfully.
Dec 03 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.282314691 +0000 UTC m=+0.326207493 container remove 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:39:28 compute-0 systemd[1]: libpod-conmon-5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4.scope: Deactivated successfully.
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:39:28
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:39:28 compute-0 sudo[336975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffyleibainzgprvksgifmqqspwxwurjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725967.956638-718-147427474037418/AnsiballZ_systemd_service.py'
Dec 03 01:39:28 compute-0 sudo[336975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.547692235 +0000 UTC m=+0.080408690 container create 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.521131219 +0000 UTC m=+0.053847654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:39:28 compute-0 systemd[1]: Started libpod-conmon-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope.
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:39:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.72910763 +0000 UTC m=+0.261824085 container init 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.750369447 +0000 UTC m=+0.283085902 container start 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.757622351 +0000 UTC m=+0.290338786 container attach 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:39:28 compute-0 python3.9[336982]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:28 compute-0 sudo[336975]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:29 compute-0 ceph-mon[192821]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:29 compute-0 sudo[337161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymbdbcffqaoxpcufjmcxxcrlciwghtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725969.1998925-718-111348298049835/AnsiballZ_systemd_service.py'
Dec 03 01:39:29 compute-0 podman[158098]: time="2025-12-03T01:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:39:29 compute-0 sudo[337161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 39893 "" "Go-http-client/1.1"
Dec 03 01:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8125 "" "Go-http-client/1.1"
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]: {
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "osd_id": 2,
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "type": "bluestore"
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:     },
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "osd_id": 1,
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "type": "bluestore"
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:     },
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "osd_id": 0,
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:         "type": "bluestore"
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]:     }
Dec 03 01:39:29 compute-0 quizzical_margulis[336985]: }
Dec 03 01:39:29 compute-0 systemd[1]: libpod-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope: Deactivated successfully.
Dec 03 01:39:29 compute-0 systemd[1]: libpod-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope: Consumed 1.176s CPU time.
Dec 03 01:39:29 compute-0 conmon[336985]: conmon 5fd8cc2c013d645e5861 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope/container/memory.events
Dec 03 01:39:30 compute-0 podman[337173]: 2025-12-03 01:39:30.003769291 +0000 UTC m=+0.056513758 container died 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:39:30 compute-0 python3.9[337164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f-merged.mount: Deactivated successfully.
Dec 03 01:39:30 compute-0 sudo[337161]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:30 compute-0 podman[337173]: 2025-12-03 01:39:30.124992106 +0000 UTC m=+0.177736523 container remove 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:39:30 compute-0 systemd[1]: libpod-conmon-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope: Deactivated successfully.
Dec 03 01:39:30 compute-0 sudo[336713]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:39:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:39:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:39:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:39:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6cab5926-c891-48a5-96ef-63ce93f509e7 does not exist
Dec 03 01:39:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ff0d941f-fbb7-4121-8710-aea2c90d207e does not exist
Dec 03 01:39:30 compute-0 sudo[337212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:39:30 compute-0 sshd-session[337146]: Invalid user guest from 80.253.31.232 port 49092
Dec 03 01:39:30 compute-0 sudo[337212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:30 compute-0 sudo[337212]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:30 compute-0 sshd-session[337146]: Received disconnect from 80.253.31.232 port 49092:11: Bye Bye [preauth]
Dec 03 01:39:30 compute-0 sshd-session[337146]: Disconnected from invalid user guest 80.253.31.232 port 49092 [preauth]
Dec 03 01:39:30 compute-0 sudo[337263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:39:30 compute-0 sudo[337263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:39:30 compute-0 sudo[337263]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:30 compute-0 sudo[337387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzuepfsmaixbrtbtsvocqkvoawgjuodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725970.3561041-718-44487948875923/AnsiballZ_systemd_service.py'
Dec 03 01:39:30 compute-0 sudo[337387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:31 compute-0 ceph-mon[192821]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:39:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:39:31 compute-0 python3.9[337389]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:31 compute-0 sudo[337387]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:39:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:39:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:39:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:31 compute-0 podman[337453]: 2025-12-03 01:39:31.892925583 +0000 UTC m=+0.142985928 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, release-0.7.12=, container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, maintainer=Red Hat, Inc.)
Dec 03 01:39:32 compute-0 sshd-session[337390]: Invalid user sonarqube from 173.249.50.59 port 50662
Dec 03 01:39:32 compute-0 sudo[337562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byylkgebumzkdtbydawuawbdgplvofey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725971.6457403-718-18167473027193/AnsiballZ_systemd_service.py'
Dec 03 01:39:32 compute-0 sudo[337562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:32 compute-0 sshd-session[337390]: Received disconnect from 173.249.50.59 port 50662:11: Bye Bye [preauth]
Dec 03 01:39:32 compute-0 sshd-session[337390]: Disconnected from invalid user sonarqube 173.249.50.59 port 50662 [preauth]
Dec 03 01:39:32 compute-0 python3.9[337564]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:39:32 compute-0 sudo[337562]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:33 compute-0 ceph-mon[192821]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:33 compute-0 podman[337607]: 2025-12-03 01:39:33.871203907 +0000 UTC m=+0.117236714 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:39:34 compute-0 sudo[337734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmpeoccnmwlkkoayeqiuvrakhhedhwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725973.7650383-777-31243651708112/AnsiballZ_file.py'
Dec 03 01:39:34 compute-0 sudo[337734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:34 compute-0 python3.9[337736]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:34 compute-0 sudo[337734]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:35 compute-0 ceph-mon[192821]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:36 compute-0 sudo[337886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srtdrbjgymlwlzomadtzqbgajyvhffru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725974.865151-777-40858258639817/AnsiballZ_file.py'
Dec 03 01:39:36 compute-0 sudo[337886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:36 compute-0 python3.9[337888]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:36 compute-0 sudo[337886]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:36 compute-0 sshd-session[334900]: Received disconnect from 45.78.219.140 port 60544:11: Bye Bye [preauth]
Dec 03 01:39:36 compute-0 sshd-session[334900]: Disconnected from 45.78.219.140 port 60544 [preauth]
Dec 03 01:39:37 compute-0 sudo[338038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyqszyiumuzctuareffxwflwomfllpwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725976.6865826-777-249513875538547/AnsiballZ_file.py'
Dec 03 01:39:37 compute-0 sudo[338038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:37 compute-0 ceph-mon[192821]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:37 compute-0 python3.9[338040]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:37 compute-0 sudo[338038]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:39:38 compute-0 sudo[338206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfpimtrpitfglmvcuiflwtbfjysgzctz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725977.773058-777-202918413894254/AnsiballZ_file.py'
Dec 03 01:39:38 compute-0 sudo[338206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:38 compute-0 podman[338164]: 2025-12-03 01:39:38.359680643 +0000 UTC m=+0.146610584 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:39:38 compute-0 python3.9[338215]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:38 compute-0 sudo[338206]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:39 compute-0 ceph-mon[192821]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:39 compute-0 sudo[338365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeduczwszqbmokumnslifjudunnbadgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725978.8158705-777-184754707022418/AnsiballZ_file.py'
Dec 03 01:39:39 compute-0 sudo[338365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:39 compute-0 python3.9[338367]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:39 compute-0 sudo[338365]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:40 compute-0 sudo[338517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcnniluuyavxunoloejpjzbdnbmgatrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725979.9033606-777-161035690163232/AnsiballZ_file.py'
Dec 03 01:39:40 compute-0 sudo[338517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:40 compute-0 python3.9[338519]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:40 compute-0 sudo[338517]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.975 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.976 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:39:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
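The burst above is one ceilometer polling cycle: every configured pollster (disk.device.*, network.*, cpu, memory.usage, power.state, ...) reports completion within roughly 3 ms (01:39:40.996 to 01:39:40.999), all from the same agent worker (PID 154605, thread 14). A minimal sketch for tallying these completions from a saved journal, assuming only the "Finished processing pollster [...]" wording logged above (the stdin usage and output format are illustrative):

    import re
    import sys
    from collections import Counter

    # Matches the manager.py:272 debug line shown above; the meter
    # name sits inside the square brackets.
    POLLSTER_RE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    counts = Counter()
    for line in sys.stdin:
        m = POLLSTER_RE.search(line)
        if m:
            counts[m.group(1)] += 1

    for meter, n in counts.most_common():
        print(f"{n:6d}  {meter}")

Fed with `journalctl --no-pager` output covering this window, it would report one completion per meter per polling interval, which makes missing or duplicated pollsters easy to spot.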
Dec 03 01:39:41 compute-0 ceph-mon[192821]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
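ceph-mon and ceph-mgr emit matching pgmap digests every couple of seconds; only the map version advances (v746, v747, ...) while the payload stays constant at 321 active+clean PGs on an essentially empty 60 GiB cluster. A sketch, assuming exactly the phrasing above, that pulls the epoch and capacity fields out of such a line:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<epoch>\d+): (?P<pgs>\d+) pgs: .*; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    if m:
        # {'epoch': '746', 'pgs': '321', 'data': '456 KiB', ...}
        print(m.groupdict())

Tracking the `used` field over successive epochs is a cheap way to confirm the cluster really is idle, as the constant 148 MiB here suggests.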
Dec 03 01:39:41 compute-0 sudo[338670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xharqfnmcexqnskyulsfrdphyfabmsxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725980.994259-777-191184136286416/AnsiballZ_file.py'
Dec 03 01:39:41 compute-0 sudo[338670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:41 compute-0 python3.9[338672]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:41 compute-0 sudo[338670]: pam_unix(sudo:session): session closed for user root
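The sudo/pam_unix triplets bracketing each python3.9 line are the standard shape of an Ansible become: zuul escalates, the AnsiballZ payload runs the module, and the session closes. The module logs its complete argument set ("Invoked with path=... state=absent ..."), so the cleanup this run performs can be reconstructed from the journal alone. A sketch under that assumption (the regex keys off the ansible-ansible.builtin.file rendering shown above):

    import re
    import sys

    # "ansible-ansible.builtin.file Invoked with path=... state=absent ..."
    INVOKED_RE = re.compile(
        r"ansible-ansible\.builtin\.file Invoked with path=(\S+) state=(\S+)")

    for line in sys.stdin:
        m = INVOKED_RE.search(line)
        if m and m.group(2) == "absent":
            print("removed:", m.group(1))

Across this section it would list the retired tripleo_nova_scheduler, tripleo_nova_vnc_proxy, tripleo_nova_compute, tripleo_nova_migration_target, tripleo_nova_api_cron and tripleo_nova_api unit files.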
Dec 03 01:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 5690 writes, 23K keys, 5690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 5690 writes, 885 syncs, 6.43 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                            Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
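The block above is RocksDB's periodic stats dump (600 s interval) from one ceph-osd, with one pair of Compaction Stats tables per column family (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P). The store is essentially idle: no flushes, no compactions, zero stalls, and only the WAL sees traffic. The "occupancy: 18446744073709551615" in the block-cache lines is 2^64 - 1, most likely an unsigned rendering of -1, i.e. occupancy is not tracked by this cache implementation. A sketch deriving the WAL write and sync rates from the wording above:

    import re

    stats = (
        "Uptime(secs): 1200.1 total, 600.0 interval\n"
        "Cumulative WAL: 5690 writes, 885 syncs, 6.43 writes per sync, "
        "written: 0.02 GB, 0.02 MB/s\n"
    )

    uptime = float(re.search(r"Uptime\(secs\): ([\d.]+) total", stats).group(1))
    writes, syncs = map(int, re.search(
        r"Cumulative WAL: (\d+) writes, (\d+) syncs", stats).group(1, 2))

    print(f"{writes / uptime:.2f} WAL writes/s, {syncs / uptime:.2f} syncs/s")
    # -> 4.74 WAL writes/s, 0.74 syncs/s for this OSD

At under five WAL writes per second the OSD is carrying bookkeeping traffic only, consistent with the 456 KiB of cluster data in the pgmap lines.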
Dec 03 01:39:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:42 compute-0 sudo[338822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhodtxbanclcvttznnedfvksafypqwfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725982.0508049-777-61762048733616/AnsiballZ_file.py'
Dec 03 01:39:42 compute-0 sudo[338822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:42 compute-0 python3.9[338824]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:42 compute-0 sudo[338822]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:43 compute-0 ceph-mon[192821]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:43 compute-0 sudo[338974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqbzupkusxqybkkezbzbcpxjizxtogsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725983.126761-834-206722221441178/AnsiballZ_file.py'
Dec 03 01:39:43 compute-0 sudo[338974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:43 compute-0 python3.9[338976]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:43 compute-0 sudo[338974]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:44 compute-0 sudo[339126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pemenewppccrozxcalplidqhohtydpxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725984.2131374-834-237169731454354/AnsiballZ_file.py'
Dec 03 01:39:44 compute-0 sudo[339126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:45 compute-0 python3.9[339128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:45 compute-0 sudo[339126]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:45 compute-0 ceph-mon[192821]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:45 compute-0 sudo[339278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvrzyhybrudvobyocstlzzzzyjzwdafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725985.2852683-834-225435609296456/AnsiballZ_file.py'
Dec 03 01:39:45 compute-0 sudo[339278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:46 compute-0 python3.9[339280]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:46 compute-0 sudo[339278]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
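The mon's _set_new_cache_sizes lines report its cache autotuning targets in raw bytes, which is hard to eyeball; converted (plain arithmetic on the values logged above):

    for name, nbytes in [
        ("cache_size", 1020054731),
        ("inc_alloc", 348127232),
        ("full_alloc", 348127232),
        ("kv_alloc", 322961408),
    ]:
        print(f"{name}: {nbytes / 2**20:.0f} MiB")
    # cache_size: 973 MiB, inc/full_alloc: 332 MiB, kv_alloc: 308 MiB

So the monitor is steering toward roughly a 973 MiB overall cache with about a 308 MiB RocksDB share, and the figures are unchanged from the identical line at 01:39:42, i.e. the tuner has settled.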
Dec 03 01:39:47 compute-0 ceph-mon[192821]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:47 compute-0 sudo[339430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfcyjmobhoxbegiyzbvblujsxdhzcyca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725987.4312677-834-55526382463684/AnsiballZ_file.py'
Dec 03 01:39:47 compute-0 sudo[339430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:48 compute-0 python3.9[339432]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:48 compute-0 sudo[339430]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 6920 writes, 28K keys, 6920 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 6920 writes, 1242 syncs, 5.57 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 03 01:39:49 compute-0 sudo[339582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qihuwjncusnxqwaaksacuclobgdonssb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725988.4636135-834-86112630390564/AnsiballZ_file.py'
Dec 03 01:39:49 compute-0 sudo[339582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:49 compute-0 python3.9[339584]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:49 compute-0 sudo[339582]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:49 compute-0 ceph-mon[192821]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:50 compute-0 sudo[339735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljzsgoutwsaogwwtkgjzkwozfhnhmzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725989.8322492-834-90342718969152/AnsiballZ_file.py'
Dec 03 01:39:50 compute-0 sudo[339735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:50 compute-0 python3.9[339737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:50 compute-0 sudo[339735]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:51 compute-0 sshd-session[339814]: Invalid user kapsch from 34.66.72.251 port 50324
Dec 03 01:39:51 compute-0 sshd-session[339814]: Received disconnect from 34.66.72.251 port 50324:11: Bye Bye [preauth]
Dec 03 01:39:51 compute-0 sshd-session[339814]: Disconnected from invalid user kapsch 34.66.72.251 port 50324 [preauth]
Dec 03 01:39:51 compute-0 ceph-mon[192821]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:51 compute-0 sudo[339889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bggpnrcfmqofhrgpyhvsapduimdshigj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725990.8589272-834-185868017291439/AnsiballZ_file.py'
Dec 03 01:39:51 compute-0 sudo[339889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:51 compute-0 python3.9[339891]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:51 compute-0 sudo[339889]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:52 compute-0 sudo[340041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibgbkxwddijxcekfdbtxdbhaionbvabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725991.9030492-834-49978030851448/AnsiballZ_file.py'
Dec 03 01:39:52 compute-0 sudo[340041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:52 compute-0 python3.9[340043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:39:52 compute-0 sudo[340041]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:53 compute-0 ceph-mon[192821]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:53 compute-0 sudo[340247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjeyxymujndythwzvbfzbwkiadhzrwbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725993.1426704-892-190519506615865/AnsiballZ_command.py'
Dec 03 01:39:53 compute-0 sudo[340247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:53 compute-0 podman[340169]: 2025-12-03 01:39:53.715776943 +0000 UTC m=+0.113055528 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:39:53 compute-0 podman[340168]: 2025-12-03 01:39:53.716203095 +0000 UTC m=+0.122797128 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:39:53 compute-0 podman[340167]: 2025-12-03 01:39:53.731510163 +0000 UTC m=+0.145204146 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:39:53 compute-0 podman[340170]: 2025-12-03 01:39:53.782063932 +0000 UTC m=+0.176632284 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:39:53 compute-0 python3.9[340269]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:39:53 compute-0 sudo[340247]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:54 compute-0 auditd[706]: Audit daemon rotating log files
Dec 03 01:39:55 compute-0 python3.9[340428]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:39:55 compute-0 ceph-mon[192821]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:55 compute-0 podman[340524]: 2025-12-03 01:39:55.883168073 +0000 UTC m=+0.129685332 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 01:39:55 compute-0 podman[340517]: 2025-12-03 01:39:55.888377572 +0000 UTC m=+0.133597056 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 03 01:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 5709 writes, 24K keys, 5709 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 5709 writes, 908 syncs, 6.29 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 178 writes, 270 keys, 178 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 178 writes, 88 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220cf30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 03 01:39:56 compute-0 sudo[340614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbpvepxopucpslyukuptunbbgguclici ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725995.5233965-910-56573609792385/AnsiballZ_systemd_service.py'
Dec 03 01:39:56 compute-0 sudo[340614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:56 compute-0 python3.9[340616]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:39:56 compute-0 systemd[1]: Reloading.
Dec 03 01:39:56 compute-0 systemd-rc-local-generator[340639]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:39:56 compute-0 systemd-sysv-generator[340644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:39:56 compute-0 sudo[340614]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 01:39:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:39:57 compute-0 ceph-mon[192821]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:57 compute-0 sudo[340801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-encomsmjpqkfvupepnsqrqqssscbnzql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725997.3208365-918-58819810231390/AnsiballZ_command.py'
Dec 03 01:39:57 compute-0 sudo[340801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:58 compute-0 python3.9[340803]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:39:58 compute-0 sudo[340801]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:39:59 compute-0 sudo[340954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvhxjjuzgufqstsufuthegeiiabehevb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725998.459067-918-27995241544157/AnsiballZ_command.py'
Dec 03 01:39:59 compute-0 sudo[340954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:39:59 compute-0 python3.9[340956]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:39:59 compute-0 sudo[340954]: pam_unix(sudo:session): session closed for user root
Dec 03 01:39:59 compute-0 ceph-mon[192821]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:39:59.598 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:39:59.599 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:39:59.600 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
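Annotation: the Acquiring/acquired/released triple above is the standard oslo.concurrency DEBUG trace. A sketch of the decorator that produces it, mirroring neutron's ProcessMonitor (the function body here is a stand-in, not neutron's actual code):

    from oslo_concurrency import lockutils

    # each call logs "Acquiring lock", "Lock ... acquired" and
    # "Lock ... released" at DEBUG, exactly the triple seen above
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # stand-in; neutron respawns dead child processes here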
Dec 03 01:39:59 compute-0 podman[158098]: time="2025-12-03T01:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38320 "" "Go-http-client/1.1"
Dec 03 01:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7697 "" "Go-http-client/1.1"
Dec 03 01:40:00 compute-0 sudo[341107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjdsogqfbmrvzvojracnumtxqinzpqwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764725999.5592847-918-214015714694977/AnsiballZ_command.py'
Dec 03 01:40:00 compute-0 sudo[341107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:00 compute-0 python3.9[341109]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:40:00 compute-0 sudo[341107]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
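Annotation: these exporter errors mean the ovn-northd and ovsdb-server control sockets are absent on this node, which is expected on a compute host that runs neither daemon. A quick probe for the sockets the exporter looks for (glob patterns are an assumption based on the exporter's volume mounts logged further down):

    import glob
    # empty results here are consistent with the errors above: this compute
    # node does not run ovn-northd or a standalone OVN ovsdb-server
    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        print(pattern, glob.glob(pattern))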
Dec 03 01:40:01 compute-0 ceph-mon[192821]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:02 compute-0 sudo[341275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrmnilfpkqcnhrlxzredgaiookpdoibk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726000.6721869-918-130453005111050/AnsiballZ_command.py'
Dec 03 01:40:02 compute-0 sudo[341275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:02 compute-0 podman[341234]: 2025-12-03 01:40:02.321607782 +0000 UTC m=+0.177235661 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 03 01:40:02 compute-0 python3.9[341281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:40:02 compute-0 ceph-mon[192821]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:02 compute-0 sudo[341275]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:03 compute-0 sudo[341433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yirsupmjanczymzkhndzbzabyszseiff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726002.7993095-918-112979009721203/AnsiballZ_command.py'
Dec 03 01:40:03 compute-0 sudo[341433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:03 compute-0 python3.9[341435]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:40:03 compute-0 sudo[341433]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:04 compute-0 ceph-mon[192821]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:04 compute-0 podman[341560]: 2025-12-03 01:40:04.864182705 +0000 UTC m=+0.123133837 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 01:40:05 compute-0 sudo[341603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riypktroraorhykvkxayuftdtnvmuczl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726003.9002237-918-235587579803598/AnsiballZ_command.py'
Dec 03 01:40:05 compute-0 sudo[341603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:05 compute-0 python3.9[341605]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:40:05 compute-0 sudo[341603]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:06 compute-0 sudo[341756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blslorqljewknxpvsyjbsucapmsyboxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726005.4968224-918-167908955524540/AnsiballZ_command.py'
Dec 03 01:40:06 compute-0 sudo[341756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:06 compute-0 python3.9[341758]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:40:06 compute-0 sudo[341756]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:06 compute-0 ceph-mon[192821]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:07 compute-0 sudo[341909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovmzqdnkspmbtkbspqublwezmzpniahv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726006.5570586-918-2601202813999/AnsiballZ_command.py'
Dec 03 01:40:07 compute-0 sudo[341909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:07 compute-0 python3.9[341911]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:40:07 compute-0 sudo[341909]: pam_unix(sudo:session): session closed for user root
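Annotation: taken together, the AnsiballZ_command invocations above clear the failed state of the legacy tripleo_nova_* units one at a time. A condensed sketch of the same loop (service list and order copied from the log):

    import subprocess

    SERVICES = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for svc in SERVICES:
        # mirrors: /usr/bin/systemctl reset-failed <unit>
        subprocess.run(["systemctl", "reset-failed", svc], check=True)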
Dec 03 01:40:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.587002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007587104, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 251, "total_data_size": 2504728, "memory_usage": 2534016, "flush_reason": "Manual Compaction"}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007614965, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2470997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14788, "largest_seqno": 16302, "table_properties": {"data_size": 2463837, "index_size": 4231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14130, "raw_average_key_size": 19, "raw_value_size": 2449715, "raw_average_value_size": 3397, "num_data_blocks": 193, "num_entries": 721, "num_filter_entries": 721, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725837, "oldest_key_time": 1764725837, "file_creation_time": 1764726007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 28078 microseconds, and 11289 cpu microseconds.
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.615060) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2470997 bytes OK
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.615113) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.617696) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.617719) EVENT_LOG_v1 {"time_micros": 1764726007617711, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.617741) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2498144, prev total WAL file size 2498144, number of live WAL files 2.
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.619874) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2413KB)], [35(6887KB)]
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007620007, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9523452, "oldest_snapshot_seqno": -1}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3972 keys, 7767520 bytes, temperature: kUnknown
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007698734, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7767520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7738562, "index_size": 17904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97075, "raw_average_key_size": 24, "raw_value_size": 7664282, "raw_average_value_size": 1929, "num_data_blocks": 759, "num_entries": 3972, "num_filter_entries": 3972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.699137) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7767520 bytes
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.702494) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.7 rd, 98.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4486, records dropped: 514 output_compression: NoCompression
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.702591) EVENT_LOG_v1 {"time_micros": 1764726007702521, "job": 16, "event": "compaction_finished", "compaction_time_micros": 78877, "compaction_time_cpu_micros": 34685, "output_level": 6, "num_output_files": 1, "total_output_size": 7767520, "num_input_records": 4486, "num_output_records": 3972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007703643, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007706407, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.619259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
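Annotation: the rocksdb EVENT_LOG_v1 payloads above are plain JSON appended after a fixed prefix, so the mon store's flush and compaction activity can be summarized straight from the journal. A minimal parser, assuming this excerpt is saved to a file named journal.txt (hypothetical):

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    # summarize flushes and compactions, e.g. jobs 15 and 16 above
    with open("journal.txt") as fh:
        for line in fh:
            m = EVENT.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") in ("flush_finished", "compaction_finished"):
                print(ev["job"], ev["event"],
                      ev.get("total_output_size"), ev.get("lsm_state"))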
Dec 03 01:40:08 compute-0 ceph-mon[192821]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:08 compute-0 podman[341993]: 2025-12-03 01:40:08.869140483 +0000 UTC m=+0.123689352 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
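Annotation: each health_status line is podman reporting the result of the container's configured healthcheck (the 'healthcheck' key inside config_data). The same status can be read back per container; a sketch assuming podman 4.x, where health state is exposed under .State.Health (field names can vary across versions):

    import json
    import subprocess

    def health(name):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    for name in ("kepler", "ovn_metadata_agent", "podman_exporter"):
        st = health(name)
        print(name, st["Status"], st["FailingStreak"])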
Dec 03 01:40:09 compute-0 sudo[342086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrfyynummridtvjhehldcgvysuzdiues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726008.5263472-997-75749508368141/AnsiballZ_file.py'
Dec 03 01:40:09 compute-0 sudo[342086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:09 compute-0 python3.9[342088]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:09 compute-0 sudo[342086]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:10 compute-0 sudo[342238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpfbxzbxgotbonllktwufopmomdnejjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726009.6183662-997-185730052413969/AnsiballZ_file.py'
Dec 03 01:40:10 compute-0 sudo[342238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:10 compute-0 python3.9[342240]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:10 compute-0 sudo[342238]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:10 compute-0 ceph-mon[192821]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:11 compute-0 sudo[342390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuyyhbjlvnjekqqsmuvvqbslknysuuip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726010.654036-997-82651782531393/AnsiballZ_file.py'
Dec 03 01:40:11 compute-0 sudo[342390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:11 compute-0 python3.9[342392]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:11 compute-0 sudo[342390]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:12 compute-0 sudo[342542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgwesxxzleghrnntynbeijumgedptdgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726011.745628-1019-75261383516149/AnsiballZ_file.py'
Dec 03 01:40:12 compute-0 sudo[342542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:12 compute-0 python3.9[342544]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:12 compute-0 sudo[342542]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:12 compute-0 ceph-mon[192821]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:13 compute-0 sudo[342694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dedvwozjnhgrmqhiiqbdmebzwxgiifga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726012.7996182-1019-152901788645205/AnsiballZ_file.py'
Dec 03 01:40:13 compute-0 sudo[342694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:13 compute-0 python3.9[342696]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:13 compute-0 sudo[342694]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:14 compute-0 sudo[342846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftttqfctfpqeflzhzkawagmvxjfcghe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726013.8541603-1019-179926501724820/AnsiballZ_file.py'
Dec 03 01:40:14 compute-0 sudo[342846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:14 compute-0 python3.9[342848]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:14 compute-0 ceph-mon[192821]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:14 compute-0 sudo[342846]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:15 compute-0 sudo[342998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgakugvlrzemjxgtywvltrskfbxuneaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726014.8836396-1019-261647970543951/AnsiballZ_file.py'
Dec 03 01:40:15 compute-0 sudo[342998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:15 compute-0 python3.9[343000]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:15 compute-0 sudo[342998]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:16 compute-0 ceph-mon[192821]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:17 compute-0 sudo[343150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-payqzsdpknmbovwmpdpejzflirnjlgtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726016.4760282-1019-163419706564184/AnsiballZ_file.py'
Dec 03 01:40:17 compute-0 sudo[343150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:17 compute-0 python3.9[343152]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:17 compute-0 sudo[343150]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:18 compute-0 sudo[343302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxacrbtippdktwyzkbmkjrazqchpalbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726017.6369426-1019-6496379138306/AnsiballZ_file.py'
Dec 03 01:40:18 compute-0 sudo[343302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:18 compute-0 python3.9[343304]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:18 compute-0 sudo[343302]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:18 compute-0 ceph-mon[192821]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:19 compute-0 sudo[343455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lntarnrmwzywneojxrcbfyhiihizgdhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726019.333109-1019-271298582351167/AnsiballZ_file.py'
Dec 03 01:40:19 compute-0 sudo[343455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:20 compute-0 python3.9[343457]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:20 compute-0 sudo[343455]: pam_unix(sudo:session): session closed for user root
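Annotation: the run of ansible.builtin.file tasks above only ensures a set of directories exists with fixed ownership, mode, and the container_file_t SELinux type. A compact equivalent of one such task (paths and attributes copied from the log; the 0755 default is an assumption for the tasks logged without an explicit mode):

    import os
    import shutil
    import subprocess

    def ensure_dir(path, owner, group, mode=0o755, setype="container_file_t"):
        os.makedirs(path, exist_ok=True)
        os.chmod(path, mode)                  # makedirs' mode is umask-masked
        shutil.chown(path, user=owner, group=group)
        subprocess.run(["chcon", "-t", setype, path], check=True)

    for d in ("/var/lib/openstack/config/nova",
              "/var/lib/nova/instances",
              "/etc/multipath",
              "/run/openvswitch"):
        ensure_dir(d, owner="zuul", group="zuul")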
Dec 03 01:40:20 compute-0 ceph-mon[192821]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:22 compute-0 ceph-mon[192821]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:24 compute-0 ceph-mon[192821]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:24 compute-0 podman[343484]: 2025-12-03 01:40:24.878462455 +0000 UTC m=+0.110446789 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 01:40:24 compute-0 podman[343482]: 2025-12-03 01:40:24.883855149 +0000 UTC m=+0.127609007 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:40:24 compute-0 podman[343483]: 2025-12-03 01:40:24.914415044 +0000 UTC m=+0.151484544 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:40:24 compute-0 podman[343485]: 2025-12-03 01:40:24.930126443 +0000 UTC m=+0.153647731 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 01:40:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:26 compute-0 sudo[343715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zifecpdlegstnxsytnnldvmdshpphwxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726025.425749-1208-43617061916204/AnsiballZ_getent.py'
Dec 03 01:40:26 compute-0 sudo[343715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:26 compute-0 podman[343666]: 2025-12-03 01:40:26.163509418 +0000 UTC m=+0.119103739 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:40:26 compute-0 podman[343665]: 2025-12-03 01:40:26.189870271 +0000 UTC m=+0.151467413 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec 03 01:40:26 compute-0 python3.9[343726]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 03 01:40:26 compute-0 sudo[343715]: pam_unix(sudo:session): session closed for user root
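Annotation: the getent task checks whether a 'nova' account already exists before the group and user tasks that follow. The stdlib equivalent of that lookup:

    import pwd

    try:
        pwd.getpwnam("nova")   # what 'getent passwd nova' resolves
    except KeyError:
        print("no nova user yet; created by the tasks below")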
Dec 03 01:40:26 compute-0 ceph-mon[192821]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:27 compute-0 sudo[343879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmnxolybqufyrnsdzuvpggpyslybitmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726026.6901448-1216-125390180534549/AnsiballZ_group.py'
Dec 03 01:40:27 compute-0 sudo[343879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:27 compute-0 python3.9[343881]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 03 01:40:27 compute-0 groupadd[343882]: group added to /etc/group: name=nova, GID=42436
Dec 03 01:40:27 compute-0 groupadd[343882]: group added to /etc/gshadow: name=nova
Dec 03 01:40:27 compute-0 groupadd[343882]: new group: name=nova, GID=42436
Dec 03 01:40:27 compute-0 sudo[343879]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:40:28
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
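Annotation: the balancer messages show the upmap balancer evaluating every pool and preparing 0 of 10 possible changes, i.e. the PG distribution is already even. The same information is available on demand; a sketch assuming a working admin keyring on this host:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status["active"], status["mode"])   # expect: True upmap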
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:40:28 compute-0 ceph-mon[192821]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:29 compute-0 sudo[344037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jylslcqehmdrbqvcgjczxxkrfttqewqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726028.0147536-1224-111173423180877/AnsiballZ_user.py'
Dec 03 01:40:29 compute-0 sudo[344037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:29 compute-0 podman[158098]: time="2025-12-03T01:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38320 "" "Go-http-client/1.1"
Dec 03 01:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7698 "" "Go-http-client/1.1"
Dec 03 01:40:29 compute-0 python3.9[344039]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 03 01:40:30 compute-0 useradd[344041]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 03 01:40:30 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:40:30 compute-0 useradd[344041]: add 'nova' to group 'libvirt'
Dec 03 01:40:30 compute-0 useradd[344041]: add 'nova' to shadow group 'libvirt'
Dec 03 01:40:30 compute-0 sudo[344037]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:30 compute-0 sudo[344073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:30 compute-0 sudo[344073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:30 compute-0 sudo[344073]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:30 compute-0 ceph-mon[192821]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:30 compute-0 sudo[344098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:40:30 compute-0 sudo[344098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:30 compute-0 sudo[344098]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:30 compute-0 sudo[344123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:30 compute-0 sudo[344123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:30 compute-0 sudo[344123]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:31 compute-0 sudo[344148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 01:40:31 compute-0 sudo[344148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:31 compute-0 sudo[344148]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:40:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:40:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:31 compute-0 sudo[344192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:31 compute-0 sudo[344192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:31 compute-0 sudo[344192]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:31 compute-0 sudo[344217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:40:31 compute-0 sudo[344217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:31 compute-0 sudo[344217]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:31 compute-0 sudo[344242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:31 compute-0 sudo[344242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:31 compute-0 sudo[344242]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:31 compute-0 sshd-session[344265]: Accepted publickey for zuul from 192.168.122.30 port 33466 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:40:31 compute-0 systemd-logind[800]: New session 56 of user zuul.
Dec 03 01:40:31 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec 03 01:40:31 compute-0 sudo[344269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:40:31 compute-0 sshd-session[344265]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:40:31 compute-0 sudo[344269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:32 compute-0 sshd-session[344294]: Received disconnect from 192.168.122.30 port 33466:11: disconnected by user
Dec 03 01:40:32 compute-0 sshd-session[344294]: Disconnected from user zuul 192.168.122.30 port 33466
Dec 03 01:40:32 compute-0 sshd-session[344265]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:40:32 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec 03 01:40:32 compute-0 systemd-logind[800]: Session 56 logged out. Waiting for processes to exit.
Dec 03 01:40:32 compute-0 systemd-logind[800]: Removed session 56.
Dec 03 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:32 compute-0 sudo[344269]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f8f87bfc-128b-42ba-bd19-78b78239c278 does not exist
Dec 03 01:40:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1e49fe43-88dc-4d6e-b368-7ce85e37d9e9 does not exist
Dec 03 01:40:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 724da50a-f2b7-4e10-819b-0604a6891e5b does not exist
Dec 03 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:40:32 compute-0 sudo[344402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:32 compute-0 sudo[344402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:32 compute-0 sudo[344402]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:32 compute-0 podman[344439]: 2025-12-03 01:40:32.858175255 +0000 UTC m=+0.112111433 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 03 01:40:32 compute-0 sudo[344460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:40:32 compute-0 sudo[344460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:32 compute-0 sudo[344460]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:33 compute-0 sudo[344523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:33 compute-0 sudo[344523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:33 compute-0 sudo[344523]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:33 compute-0 python3.9[344566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:33 compute-0 sudo[344569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:40:33 compute-0 sudo[344569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:33 compute-0 ceph-mon[192821]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:40:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.69208286 +0000 UTC m=+0.069652230 container create 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.659001247 +0000 UTC m=+0.036570707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:40:33 compute-0 systemd[1]: Started libpod-conmon-2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014.scope.
Dec 03 01:40:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.846224812 +0000 UTC m=+0.223794232 container init 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.863785931 +0000 UTC m=+0.241355311 container start 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.869064652 +0000 UTC m=+0.246634032 container attach 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:40:33 compute-0 wonderful_wilson[344765]: 167 167
Dec 03 01:40:33 compute-0 systemd[1]: libpod-2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014.scope: Deactivated successfully.
Dec 03 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.875434662 +0000 UTC m=+0.253004072 container died 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2db2a4eb2e266fb05469415630416c6b7cbc638f45fcb2aac6a4f400cc04c133-merged.mount: Deactivated successfully.
Dec 03 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.957497072 +0000 UTC m=+0.335066452 container remove 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:40:33 compute-0 systemd[1]: libpod-conmon-2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014.scope: Deactivated successfully.
Dec 03 01:40:33 compute-0 python3.9[344769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726032.3762393-1249-189122045207061/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.249082573 +0000 UTC m=+0.089843898 container create 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.199245433 +0000 UTC m=+0.040006758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:40:34 compute-0 systemd[1]: Started libpod-conmon-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope.
Dec 03 01:40:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.400142164 +0000 UTC m=+0.240903539 container init 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.435090827 +0000 UTC m=+0.275852152 container start 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.442003052 +0000 UTC m=+0.282764377 container attach 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 01:40:35 compute-0 python3.9[344962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:35 compute-0 podman[344963]: 2025-12-03 01:40:35.213935262 +0000 UTC m=+0.126971029 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:40:35 compute-0 ceph-mon[192821]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:35 compute-0 peaceful_borg[344867]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:40:35 compute-0 peaceful_borg[344867]: --> relative data size: 1.0
Dec 03 01:40:35 compute-0 peaceful_borg[344867]: --> All data devices are unavailable
Dec 03 01:40:35 compute-0 python3.9[345073]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:35 compute-0 systemd[1]: libpod-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope: Deactivated successfully.
Dec 03 01:40:35 compute-0 systemd[1]: libpod-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope: Consumed 1.185s CPU time.
Dec 03 01:40:35 compute-0 podman[344816]: 2025-12-03 01:40:35.685792544 +0000 UTC m=+1.526553839 container died 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0-merged.mount: Deactivated successfully.
Dec 03 01:40:35 compute-0 podman[344816]: 2025-12-03 01:40:35.78643009 +0000 UTC m=+1.627191395 container remove 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:40:35 compute-0 systemd[1]: libpod-conmon-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope: Deactivated successfully.
Dec 03 01:40:35 compute-0 sudo[344569]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:35 compute-0 sudo[345116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:35 compute-0 sudo[345116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:35 compute-0 sudo[345116]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:36 compute-0 sudo[345164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:40:36 compute-0 sudo[345164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:36 compute-0 sudo[345164]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:36 compute-0 sudo[345218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:36 compute-0 sudo[345218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:36 compute-0 sudo[345218]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:36 compute-0 sudo[345261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:40:36 compute-0 sudo[345261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:36 compute-0 python3.9[345343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:36 compute-0 podman[345382]: 2025-12-03 01:40:36.981733959 +0000 UTC m=+0.076685608 container create 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 01:40:37 compute-0 systemd[1]: Started libpod-conmon-3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036.scope.
Dec 03 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:36.954239195 +0000 UTC m=+0.049190874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:40:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.104645319 +0000 UTC m=+0.199596978 container init 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.115670843 +0000 UTC m=+0.210622492 container start 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.121288583 +0000 UTC m=+0.216240262 container attach 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:40:37 compute-0 focused_shannon[345442]: 167 167
Dec 03 01:40:37 compute-0 systemd[1]: libpod-3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036.scope: Deactivated successfully.
Dec 03 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.125170407 +0000 UTC m=+0.220122056 container died 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:40:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2cff0ac7720074036f5a4b8f3987e4756b57885ad47793aeadd17227f5936c0-merged.mount: Deactivated successfully.
Dec 03 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.183159654 +0000 UTC m=+0.278111313 container remove 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:40:37 compute-0 systemd[1]: libpod-conmon-3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036.scope: Deactivated successfully.
Dec 03 01:40:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.429017365 +0000 UTC m=+0.080150830 container create 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:40:37 compute-0 systemd[1]: Started libpod-conmon-0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb.scope.
Dec 03 01:40:37 compute-0 ceph-mon[192821]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.402351213 +0000 UTC m=+0.053484728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:40:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.574099366 +0000 UTC m=+0.225232901 container init 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.592691252 +0000 UTC m=+0.243824717 container start 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.597703926 +0000 UTC m=+0.248837421 container attach 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:40:37 compute-0 python3.9[345553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726035.9707463-1249-85723918476647/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]: {
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:     "0": [
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:         {
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "devices": [
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "/dev/loop3"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             ],
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_name": "ceph_lv0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_size": "21470642176",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "name": "ceph_lv0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "tags": {
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cluster_name": "ceph",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.crush_device_class": "",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.encrypted": "0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osd_id": "0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.type": "block",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.vdo": "0"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             },
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "type": "block",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "vg_name": "ceph_vg0"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:         }
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:     ],
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:     "1": [
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:         {
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "devices": [
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "/dev/loop4"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             ],
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_name": "ceph_lv1",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_size": "21470642176",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "name": "ceph_lv1",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "tags": {
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cluster_name": "ceph",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.crush_device_class": "",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.encrypted": "0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osd_id": "1",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.type": "block",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.vdo": "0"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             },
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "type": "block",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "vg_name": "ceph_vg1"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:         }
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:     ],
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:     "2": [
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:         {
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "devices": [
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "/dev/loop5"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             ],
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_name": "ceph_lv2",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_size": "21470642176",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "name": "ceph_lv2",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "tags": {
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.cluster_name": "ceph",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.crush_device_class": "",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.encrypted": "0",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osd_id": "2",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.type": "block",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:                 "ceph.vdo": "0"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             },
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "type": "block",
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:             "vg_name": "ceph_vg2"
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:         }
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]:     ]
Dec 03 01:40:38 compute-0 awesome_archimedes[345556]: }
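[annotation] The JSON object closed above is ceph-volume LVM inventory output (the layout matches `ceph-volume lvm list --format json`): keyed by OSD id, one loopback-backed LV per OSD, each with lv_size 21470642176 bytes, roughly 20 GiB, which is consistent with the "60 GiB / 60 GiB avail" pgmap lines nearby. The ceph.* metadata appears twice, once as the flat lv_tags string and once parsed into the "tags" map. A minimal Python sketch for pulling the interesting fields back out, assuming the listing was captured to a hypothetical lvm_list.json:

    import json

    # Hypothetical capture of the listing above, e.g.:
    #   cephadm ceph-volume -- lvm list --format json > lvm_list.json
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]  # already-parsed form of the lv_tags string
            print(osd_id, tags["ceph.osd_fsid"], lv["lv_path"], *lv["devices"])

Reading the parsed "tags" map avoids having to re-split the comma-separated lv_tags string by hand.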
Dec 03 01:40:38 compute-0 systemd[1]: libpod-0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb.scope: Deactivated successfully.
Dec 03 01:40:38 compute-0 podman[345513]: 2025-12-03 01:40:38.460041359 +0000 UTC m=+1.111174864 container died 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:40:38 compute-0 ceph-mon[192821]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9-merged.mount: Deactivated successfully.
Dec 03 01:40:38 compute-0 podman[345513]: 2025-12-03 01:40:38.551904041 +0000 UTC m=+1.203037496 container remove 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 01:40:38 compute-0 systemd[1]: libpod-conmon-0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb.scope: Deactivated successfully.
Dec 03 01:40:38 compute-0 sudo[345261]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:38 compute-0 sudo[345728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:38 compute-0 sudo[345728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:38 compute-0 sudo[345728]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:38 compute-0 python3.9[345725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:38 compute-0 sudo[345753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:40:38 compute-0 sudo[345753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:38 compute-0 sudo[345753]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:38 compute-0 sudo[345796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:38 compute-0 sudo[345796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:38 compute-0 sudo[345796]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:39 compute-0 podman[345846]: 2025-12-03 01:40:39.109627635 +0000 UTC m=+0.107846629 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:40:39 compute-0 sudo[345856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:40:39 compute-0 sudo[345856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:39 compute-0 python3.9[345992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726037.9508862-1249-193219835405454/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.705429285 +0000 UTC m=+0.070737039 container create 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.674033327 +0000 UTC m=+0.039341061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:40:39 compute-0 systemd[1]: Started libpod-conmon-55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850.scope.
Dec 03 01:40:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.926910935 +0000 UTC m=+0.292218739 container init 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.947071443 +0000 UTC m=+0.312379197 container start 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.954175103 +0000 UTC m=+0.319482907 container attach 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:40:39 compute-0 dazzling_perlman[346052]: 167 167
Dec 03 01:40:39 compute-0 systemd[1]: libpod-55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850.scope: Deactivated successfully.
Dec 03 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.962996568 +0000 UTC m=+0.328304372 container died 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:40:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-24af5ec410e89d059685d8da8128ece4b44e910064e5130efefd52fc2c063752-merged.mount: Deactivated successfully.
Dec 03 01:40:40 compute-0 podman[346012]: 2025-12-03 01:40:40.035818872 +0000 UTC m=+0.401126586 container remove 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:40:40 compute-0 systemd[1]: libpod-conmon-55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850.scope: Deactivated successfully.
Dec 03 01:40:40 compute-0 sshd-session[345986]: Received disconnect from 173.249.50.59 port 48912:11: Bye Bye [preauth]
Dec 03 01:40:40 compute-0 sshd-session[345986]: Disconnected from authenticating user root 173.249.50.59 port 48912 [preauth]
Dec 03 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.310365659 +0000 UTC m=+0.083082259 container create aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.272648272 +0000 UTC m=+0.045364852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:40:40 compute-0 systemd[1]: Started libpod-conmon-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope.
Dec 03 01:40:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
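[annotation] The four xfs messages mean the backing filesystem was formatted without the XFS bigtime feature, so inode timestamps are capped at 0x7fffffff seconds since the epoch, the classic 32-bit signed time_t limit. A one-liner confirms the advertised cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, as quoted by the kernel
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00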
Dec 03 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.499461835 +0000 UTC m=+0.272178485 container init aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.534194052 +0000 UTC m=+0.306910622 container start aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.5423863 +0000 UTC m=+0.315102900 container attach aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:40:40 compute-0 ceph-mon[192821]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:40 compute-0 python3.9[346220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:41 compute-0 python3.9[346347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726039.994734-1249-3237874373654/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]: {
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "osd_id": 2,
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "type": "bluestore"
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:     },
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "osd_id": 1,
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "type": "bluestore"
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:     },
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "osd_id": 0,
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:         "type": "bluestore"
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]:     }
Dec 03 01:40:41 compute-0 quirky_driscoll[346190]: }
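[annotation] This second JSON comes from the cephadm-wrapped `ceph-volume --fsid ... -- raw list --format json` run under sudo above (the short-lived dazzling_perlman container that printed "167 167" first appears to be cephadm probing the image's ceph uid/gid pair). Unlike the LVM listing, raw list is keyed by osd_uuid and reports device-mapper paths (/dev/mapper/ceph_vgN-ceph_lvN); its keys should equal the ceph.osd_fsid tags seen earlier. A sketch cross-checking the two captures, assuming hypothetical lvm_list.json and raw_list.json files:

    import json

    with open("lvm_list.json") as f:   # ceph-volume lvm list --format json
        lvm = json.load(f)
    with open("raw_list.json") as f:   # ceph-volume raw list --format json
        raw = json.load(f)

    # Every osd_fsid tagged on an LV should appear as a key in the raw listing.
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"] for lvs in lvm.values() for lv in lvs}
    assert lvm_fsids == set(raw), (lvm_fsids, set(raw))
    for fsid, meta in sorted(raw.items()):
        print(meta["osd_id"], fsid, meta["device"], meta["type"])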
Dec 03 01:40:41 compute-0 systemd[1]: libpod-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope: Deactivated successfully.
Dec 03 01:40:41 compute-0 podman[346145]: 2025-12-03 01:40:41.753984393 +0000 UTC m=+1.526700963 container died aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:40:41 compute-0 systemd[1]: libpod-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope: Consumed 1.217s CPU time.
Dec 03 01:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa-merged.mount: Deactivated successfully.
Dec 03 01:40:41 compute-0 podman[346145]: 2025-12-03 01:40:41.860088915 +0000 UTC m=+1.632805495 container remove aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:40:41 compute-0 systemd[1]: libpod-conmon-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope: Deactivated successfully.
Dec 03 01:40:41 compute-0 sudo[345856]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:40:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:40:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
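[annotation] The two handle_command/audit pairs show the cephadm mgr module persisting its freshly gathered inventory into the mon config-key store, under mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. The stored blob can be read back with `ceph config-key get`; a sketch, assuming admin credentials on the node and that the value is JSON, as cephadm normally stores here:

    import json
    import subprocess

    # Read back the inventory cephadm just stored; the key is copied from the
    # audit log above. Requires a keyring with mon access.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2)[:400])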
Dec 03 01:40:41 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 912c5aae-7047-4b29-9ac2-204f05d5954a does not exist
Dec 03 01:40:41 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fdadf596-8977-44db-a8cb-9360df5cb871 does not exist
Dec 03 01:40:42 compute-0 sudo[346454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:40:42 compute-0 sudo[346454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:42 compute-0 sudo[346454]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:42 compute-0 sudo[346485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:40:42 compute-0 sudo[346485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:40:42 compute-0 sudo[346485]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:42 compute-0 ceph-mon[192821]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:40:43 compute-0 python3.9[346583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:43 compute-0 python3.9[346704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726041.878425-1249-216442937457595/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:44 compute-0 ceph-mon[192821]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:45 compute-0 sudo[346854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlsaiemvkwwguoqugpajcxgncwosylet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726045.2002854-1332-39304039063386/AnsiballZ_file.py'
Dec 03 01:40:45 compute-0 sudo[346854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:46 compute-0 python3.9[346856]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:40:46 compute-0 sudo[346854]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:46 compute-0 sudo[347006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjpsaogywwqgelfpagzttdlmzlhjitae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726046.3329546-1340-163674061391058/AnsiballZ_copy.py'
Dec 03 01:40:46 compute-0 sudo[347006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:46 compute-0 ceph-mon[192821]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:47 compute-0 python3.9[347008]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:40:47 compute-0 sudo[347006]: pam_unix(sudo:session): session closed for user root
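[annotation] The tasks above wire up SSH access for the nova user: /home/nova/.ssh is created 0700 nova:nova, then authorized_keys is copied in place (remote_src=True) from /var/lib/openstack/config/nova/ssh-publickey with mode 0600. The tight modes matter because sshd's default StrictModes rejects keys reachable through group- or world-writable paths. A quick verification sketch:

    import os
    import stat

    path = "/home/nova/.ssh/authorized_keys"
    st = os.stat(path)
    # Expect 0600 on the key file and 0700 on ~/.ssh, matching the play above.
    assert stat.S_IMODE(st.st_mode) == 0o600, oct(st.st_mode)
    st_dir = os.stat(os.path.dirname(path))
    assert stat.S_IMODE(st_dir.st_mode) == 0o700, oct(st_dir.st_mode)
    print("modes match the play")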
Dec 03 01:40:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:48 compute-0 sudo[347162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znrcmhdpezhrzrkvsbbaiwlafxzguduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726047.474322-1348-36917485606249/AnsiballZ_stat.py'
Dec 03 01:40:48 compute-0 sudo[347162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:48 compute-0 python3.9[347164]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:40:48 compute-0 sudo[347162]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:48 compute-0 sshd-session[347082]: Received disconnect from 80.253.31.232 port 52738:11: Bye Bye [preauth]
Dec 03 01:40:48 compute-0 sshd-session[347082]: Disconnected from authenticating user root 80.253.31.232 port 52738 [preauth]
Dec 03 01:40:48 compute-0 ceph-mon[192821]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:49 compute-0 sudo[347314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcukdoldxuqxtillyeryesjoqtfhlutp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726048.5458724-1356-207141803458916/AnsiballZ_stat.py'
Dec 03 01:40:49 compute-0 sudo[347314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:49 compute-0 python3.9[347316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:49 compute-0 sudo[347314]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:49 compute-0 sshd-session[347134]: Received disconnect from 103.146.202.174 port 40224:11: Bye Bye [preauth]
Dec 03 01:40:49 compute-0 sshd-session[347134]: Disconnected from authenticating user root 103.146.202.174 port 40224 [preauth]
Dec 03 01:40:50 compute-0 sudo[347438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoxmbedcytmrsfodovmeblyoozkisyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726048.5458724-1356-207141803458916/AnsiballZ_copy.py'
Dec 03 01:40:50 compute-0 sudo[347438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:50 compute-0 python3.9[347440]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764726048.5458724-1356-207141803458916/.source _original_basename=.5j8wav2u follow=False checksum=477c62050a358b588929eb0b410757a54c9a1308 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 03 01:40:50 compute-0 sudo[347438]: pam_unix(sudo:session): session closed for user root
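[annotation] Note the attributes=+i on this copy: /var/lib/nova/compute_id is installed 0400 nova:nova and made immutable (the chattr +i flag), so even root must clear the flag before the node's persistent compute identifier can change. The content itself is not logged; assuming it holds a bare UUID, per nova's stable compute-id convention, it can be validated like this:

    import uuid
    from pathlib import Path

    # Assumption: compute_id holds one UUID, nova's stable per-node identifier.
    text = Path("/var/lib/nova/compute_id").read_text().strip()
    print(uuid.UUID(text))  # raises ValueError if it is not a UUID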
Dec 03 01:40:51 compute-0 ceph-mon[192821]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:51 compute-0 python3.9[347592]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:40:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:52 compute-0 python3.9[347744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:53 compute-0 ceph-mon[192821]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:53 compute-0 python3.9[347865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726051.7291126-1382-197171699205897/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:54 compute-0 sshd-session[347879]: Received disconnect from 34.66.72.251 port 37880:11: Bye Bye [preauth]
Dec 03 01:40:54 compute-0 sshd-session[347879]: Disconnected from authenticating user root 34.66.72.251 port 37880 [preauth]
Dec 03 01:40:55 compute-0 ceph-mon[192821]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:55 compute-0 podman[347992]: 2025-12-03 01:40:55.26367623 +0000 UTC m=+0.116876079 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64)
Dec 03 01:40:55 compute-0 podman[347991]: 2025-12-03 01:40:55.29778209 +0000 UTC m=+0.164046438 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:40:55 compute-0 podman[347993]: 2025-12-03 01:40:55.298142439 +0000 UTC m=+0.143311834 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 01:40:55 compute-0 podman[347999]: 2025-12-03 01:40:55.300249586 +0000 UTC m=+0.151624267 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
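[annotation] Each of the four health_status=healthy events corresponds to the healthcheck defined in that container's config_data: a 'test' command plus a 'mount' of /var/lib/openstack/healthchecks/<name> into the container at /openstack. The same checks can be run on demand with podman's healthcheck subcommand (exit status 0 means healthy); a sketch:

    import subprocess

    # Manually re-run the checks podman's timers just ran.
    for name in ["openstack_network_exporter", "node_exporter",
                 "ceilometer_agent_compute", "ovn_controller"]:
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")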
Dec 03 01:40:55 compute-0 python3.9[348066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:40:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:56 compute-0 podman[348194]: 2025-12-03 01:40:56.776857491 +0000 UTC m=+0.130418641 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:40:56 compute-0 podman[348195]: 2025-12-03 01:40:56.824518413 +0000 UTC m=+0.172203806 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec 03 01:40:56 compute-0 python3.9[348251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726054.6556053-1397-267399815361325/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:40:57 compute-0 ceph-mon[192821]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:40:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:57 compute-0 sudo[348408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raqvczpsfpespfaknzudffxmqhzxabcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726057.3847206-1414-57801204092626/AnsiballZ_container_config_data.py'
Dec 03 01:40:57 compute-0 sudo[348408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:58 compute-0 python3.9[348410]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 03 01:40:58 compute-0 sudo[348408]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:40:59 compute-0 ceph-mon[192821]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:59 compute-0 sudo[348560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavbjxrmjihsokmkevmgopzyeqkazzhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726058.5874982-1423-60756309302720/AnsiballZ_container_config_hash.py'
Dec 03 01:40:59 compute-0 sudo[348560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:40:59 compute-0 python3.9[348562]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:40:59 compute-0 sudo[348560]: pam_unix(sudo:session): session closed for user root
Dec 03 01:40:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:40:59.600 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:40:59.600 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:40:59.601 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:40:59 compute-0 podman[158098]: time="2025-12-03T01:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38320 "" "Go-http-client/1.1"
Dec 03 01:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7707 "" "Go-http-client/1.1"
Dec 03 01:41:00 compute-0 sudo[348712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzfyendeuroevfudkwfmjienmogqdpp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726060.0368326-1433-278636261728479/AnsiballZ_edpm_container_manage.py'
Dec 03 01:41:00 compute-0 sudo[348712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:00 compute-0 python3[348714]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:41:01 compute-0 ceph-mon[192821]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:41:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:03 compute-0 ceph-mon[192821]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:03 compute-0 podman[348748]: 2025-12-03 01:41:03.806343363 +0000 UTC m=+0.068275133 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 01:41:05 compute-0 ceph-mon[192821]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:07 compute-0 ceph-mon[192821]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:09 compute-0 ceph-mon[192821]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:10 compute-0 podman[348782]: 2025-12-03 01:41:10.196956446 +0000 UTC m=+4.464873752 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:41:10 compute-0 podman[348793]: 2025-12-03 01:41:10.216361934 +0000 UTC m=+0.476922999 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:41:11 compute-0 ceph-mon[192821]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:12 compute-0 ceph-mon[192821]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:15 compute-0 ceph-mon[192821]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:17 compute-0 ceph-mon[192821]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:17 compute-0 podman[348725]: 2025-12-03 01:41:17.528402397 +0000 UTC m=+16.512997475 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 03 01:41:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:17 compute-0 podman[348858]: 2025-12-03 01:41:17.82793351 +0000 UTC m=+0.109319128 container create 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:41:17 compute-0 podman[348858]: 2025-12-03 01:41:17.77322814 +0000 UTC m=+0.054613828 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 03 01:41:17 compute-0 python3[348714]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 03 01:41:18 compute-0 sudo[348712]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:19 compute-0 sudo[349044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilagiqhptdxiliauszbdhilevddxfchv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726078.4846334-1441-91146911485591/AnsiballZ_stat.py'
Dec 03 01:41:19 compute-0 sudo[349044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:19 compute-0 ceph-mon[192821]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:19 compute-0 python3.9[349046]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:41:19 compute-0 sudo[349044]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:21 compute-0 ceph-mon[192821]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:21 compute-0 sudo[349199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbcoxceanqbtsfhpplhgassexwcgyovx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726080.7831838-1453-260488522155304/AnsiballZ_container_config_data.py'
Dec 03 01:41:21 compute-0 sudo[349199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Dec 03 01:41:21 compute-0 python3.9[349201]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 03 01:41:21 compute-0 sudo[349199]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:23 compute-0 ceph-mon[192821]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Dec 03 01:41:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Dec 03 01:41:23 compute-0 sudo[349351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbqquniacvlxovsdvconuhodqejgkzrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726081.9436693-1462-79242990788215/AnsiballZ_container_config_hash.py'
Dec 03 01:41:23 compute-0 sudo[349351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:23 compute-0 python3.9[349353]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:41:23 compute-0 sudo[349351]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:24 compute-0 sudo[349503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwobkxbtpokgtkjhkqhtfpsoalyztysy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726084.2554615-1472-270201707980025/AnsiballZ_edpm_container_manage.py'
Dec 03 01:41:24 compute-0 sudo[349503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:25 compute-0 python3[349505]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:41:25 compute-0 ceph-mon[192821]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Dec 03 01:41:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 03 01:41:25 compute-0 podman[349538]: 2025-12-03 01:41:25.544016086 +0000 UTC m=+0.105872046 container create 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Dec 03 01:41:25 compute-0 podman[349538]: 2025-12-03 01:41:25.492100381 +0000 UTC m=+0.053956391 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 03 01:41:25 compute-0 python3[349505]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 03 01:41:25 compute-0 sudo[349503]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:25 compute-0 podman[349573]: 2025-12-03 01:41:25.876866039 +0000 UTC m=+0.122357976 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible)
Dec 03 01:41:25 compute-0 podman[349572]: 2025-12-03 01:41:25.876977582 +0000 UTC m=+0.121489493 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:41:25 compute-0 podman[349574]: 2025-12-03 01:41:25.906288294 +0000 UTC m=+0.137929572 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:41:25 compute-0 podman[349575]: 2025-12-03 01:41:25.951711746 +0000 UTC m=+0.179733997 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 03 01:41:26 compute-0 sudo[349804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phsalgqnashvelyrscrloakldsjxxqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726086.104576-1480-159093099736794/AnsiballZ_stat.py'
Dec 03 01:41:26 compute-0 sudo[349804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:26 compute-0 python3.9[349806]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:41:26 compute-0 sudo[349804]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:27 compute-0 ceph-mon[192821]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 03 01:41:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:41:27 compute-0 podman[349911]: 2025-12-03 01:41:27.873512082 +0000 UTC m=+0.125577982 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:41:27 compute-0 podman[349915]: 2025-12-03 01:41:27.876589954 +0000 UTC m=+0.127082892 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 03 01:41:27 compute-0 sudo[349995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihzmqydfdnozjxmdffoiayjlqknunrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726087.4084868-1489-45217501336257/AnsiballZ_file.py'
Dec 03 01:41:27 compute-0 sudo[349995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:28 compute-0 python3.9[349997]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:41:28 compute-0 sudo[349995]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:41:28
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'backups', 'vms', '.rgw.root', 'images', 'volumes', 'default.rgw.log']
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:41:29 compute-0 sudo[350146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etuotbenmgsfuuwwryjaxokqmffausju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726088.2548232-1489-106830894291005/AnsiballZ_copy.py'
Dec 03 01:41:29 compute-0 sudo[350146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:29 compute-0 ceph-mon[192821]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:41:29 compute-0 python3.9[350148]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726088.2548232-1489-106830894291005/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:41:29 compute-0 sudo[350146]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:41:29 compute-0 podman[158098]: time="2025-12-03T01:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42584 "" "Go-http-client/1.1"
Dec 03 01:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7694 "" "Go-http-client/1.1"
Dec 03 01:41:29 compute-0 sudo[350222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbkihlqupepwrffgzecemrfgfxgfgpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726088.2548232-1489-106830894291005/AnsiballZ_systemd.py'
Dec 03 01:41:29 compute-0 sudo[350222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:30 compute-0 python3.9[350224]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:41:30 compute-0 systemd[1]: Reloading.
Dec 03 01:41:30 compute-0 systemd-rc-local-generator[350248]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:41:30 compute-0 systemd-sysv-generator[350251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:41:30 compute-0 sudo[350222]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:31 compute-0 ceph-mon[192821]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:41:31 compute-0 sudo[350335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzczrzcakadckbvjroafzgpsdivuejnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726088.2548232-1489-106830894291005/AnsiballZ_systemd.py'
Dec 03 01:41:31 compute-0 sudo[350335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:41:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:41:31 compute-0 python3.9[350337]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:41:31 compute-0 systemd[1]: Reloading.
Dec 03 01:41:31 compute-0 systemd-rc-local-generator[350363]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:41:31 compute-0 systemd-sysv-generator[350371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:41:32 compute-0 systemd[1]: Starting nova_compute container...
Dec 03 01:41:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:32 compute-0 podman[350377]: 2025-12-03 01:41:32.491382687 +0000 UTC m=+0.236209655 container init 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Dec 03 01:41:32 compute-0 podman[350377]: 2025-12-03 01:41:32.515315186 +0000 UTC m=+0.260142094 container start 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec 03 01:41:32 compute-0 podman[350377]: nova_compute
Dec 03 01:41:32 compute-0 nova_compute[350390]: + sudo -E kolla_set_configs
Dec 03 01:41:32 compute-0 systemd[1]: Started nova_compute container.
Dec 03 01:41:32 compute-0 sudo[350335]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Validating config file
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying service configuration files
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Deleting /etc/ceph
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Creating directory /etc/ceph
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Writing out command to execute
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 03 01:41:32 compute-0 nova_compute[350390]: ++ cat /run_command
Dec 03 01:41:32 compute-0 nova_compute[350390]: + CMD=nova-compute
Dec 03 01:41:32 compute-0 nova_compute[350390]: + ARGS=
Dec 03 01:41:32 compute-0 nova_compute[350390]: + sudo kolla_copy_cacerts
Dec 03 01:41:32 compute-0 sshd-session[350260]: Invalid user autrede from 14.103.201.7 port 44422
Dec 03 01:41:32 compute-0 nova_compute[350390]: + [[ ! -n '' ]]
Dec 03 01:41:32 compute-0 nova_compute[350390]: + . kolla_extend_start
Dec 03 01:41:32 compute-0 nova_compute[350390]: + echo 'Running command: '\''nova-compute'\'''
Dec 03 01:41:32 compute-0 nova_compute[350390]: Running command: 'nova-compute'
Dec 03 01:41:32 compute-0 nova_compute[350390]: + umask 0022
Dec 03 01:41:32 compute-0 nova_compute[350390]: + exec nova-compute
Dec 03 01:41:32 compute-0 sshd-session[350260]: Received disconnect from 14.103.201.7 port 44422:11: Bye Bye [preauth]
Dec 03 01:41:32 compute-0 sshd-session[350260]: Disconnected from invalid user autrede 14.103.201.7 port 44422 [preauth]
Dec 03 01:41:33 compute-0 ceph-mon[192821]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:41:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 03 01:41:34 compute-0 podman[350528]: 2025-12-03 01:41:34.68212771 +0000 UTC m=+0.177357873 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=)
Dec 03 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 03 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 03 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 03 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 03 01:41:34 compute-0 python3.9[350564]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.932 350396 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.958 350396 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.958 350396 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 03 01:41:35 compute-0 ceph-mon[192821]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 03 01:41:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.595 350396 INFO nova.virt.driver [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.738 350396 INFO nova.compute.provider_config [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.759 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.759 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.759 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.760 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.760 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.760 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 WARNING oslo_config.cfg [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 03 01:41:35 compute-0 nova_compute[350390]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 03 01:41:35 compute-0 nova_compute[350390]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 03 01:41:35 compute-0 nova_compute[350390]: and ``live_migration_inbound_addr`` respectively.
Dec 03 01:41:35 compute-0 nova_compute[350390]: ).  Its value may be silently ignored in the future.
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_secret_uuid        = 3765feb2-36f8-5b86-b74c-64e9221f9c4c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.914 350396 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.936 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.937 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.937 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.938 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.956 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0e7169ce50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.962 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0e7169ce50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.963 350396 INFO nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Connection event '1' reason 'None'
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.995 350396 WARNING nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 03 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.995 350396 DEBUG nova.virt.libvirt.volume.mount [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 03 01:41:36 compute-0 python3.9[350760]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.167 350396 INFO nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host capabilities <capabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]: 
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <host>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <uuid>bb85f21b-9f67-464f-8fbe-e50d4e1e7eb4</uuid>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <arch>x86_64</arch>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model>EPYC-Rome-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <vendor>AMD</vendor>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <microcode version='16777317'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <signature family='23' model='49' stepping='0'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='x2apic'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='tsc-deadline'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='osxsave'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='hypervisor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='tsc_adjust'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='spec-ctrl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='stibp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='arch-capabilities'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='cmp_legacy'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='topoext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='virt-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='lbrv'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='tsc-scale'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='vmcb-clean'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='pause-filter'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='pfthreshold'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='svme-addr-chk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='rdctl-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='skip-l1dfl-vmentry'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='mds-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature name='pschange-mc-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <pages unit='KiB' size='4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <pages unit='KiB' size='2048'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <pages unit='KiB' size='1048576'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <power_management>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <suspend_mem/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </power_management>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <iommu support='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <migration_features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <live/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <uri_transports>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <uri_transport>tcp</uri_transport>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <uri_transport>rdma</uri_transport>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </uri_transports>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </migration_features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <topology>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <cells num='1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <cell id='0'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           <memory unit='KiB'>7864312</memory>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           <pages unit='KiB' size='4'>1966078</pages>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           <pages unit='KiB' size='2048'>0</pages>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           <distances>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <sibling id='0' value='10'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           </distances>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           <cpus num='8'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:           </cpus>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         </cell>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </cells>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </topology>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <cache>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </cache>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <secmodel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model>selinux</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <doi>0</doi>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </secmodel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <secmodel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model>dac</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <doi>0</doi>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </secmodel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </host>
Dec 03 01:41:37 compute-0 nova_compute[350390]: 
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <guest>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <os_type>hvm</os_type>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <arch name='i686'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <wordsize>32</wordsize>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <domain type='qemu'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <domain type='kvm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </arch>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <pae/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <nonpae/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <acpi default='on' toggle='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <apic default='on' toggle='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <cpuselection/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <deviceboot/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <disksnapshot default='on' toggle='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <externalSnapshot/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </guest>
Dec 03 01:41:37 compute-0 nova_compute[350390]: 
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <guest>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <os_type>hvm</os_type>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <arch name='x86_64'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <wordsize>64</wordsize>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <domain type='qemu'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <domain type='kvm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </arch>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <acpi default='on' toggle='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <apic default='on' toggle='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <cpuselection/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <deviceboot/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <disksnapshot default='on' toggle='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <externalSnapshot/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </guest>
Dec 03 01:41:37 compute-0 nova_compute[350390]: 
Dec 03 01:41:37 compute-0 nova_compute[350390]: </capabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]: 
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.179 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.213 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 03 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <domain>kvm</domain>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <arch>i686</arch>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <vcpu max='4096'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <iothreads supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <os supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='firmware'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <loader supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>rom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pflash</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='readonly'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>yes</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='secure'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </loader>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </os>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='maximumMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <vendor>AMD</vendor>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='succor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='custom' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-128'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-256'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-512'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <memoryBacking supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='sourceType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>anonymous</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>memfd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </memoryBacking>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <disk supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='diskDevice'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>disk</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cdrom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>floppy</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>lun</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>fdc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>sata</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </disk>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <graphics supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vnc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egl-headless</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </graphics>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <video supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='modelType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vga</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cirrus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>none</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>bochs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ramfb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </video>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hostdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='mode'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>subsystem</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='startupPolicy'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>mandatory</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>requisite</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>optional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='subsysType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pci</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='capsType'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='pciBackend'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hostdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <rng supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>random</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </rng>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <filesystem supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='driverType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>path</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>handle</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtiofs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </filesystem>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <tpm supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-tis</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-crb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emulator</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>external</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendVersion'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>2.0</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </tpm>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <redirdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </redirdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <channel supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </channel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <crypto supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </crypto>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <interface supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>passt</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </interface>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <panic supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>isa</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>hyperv</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </panic>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <console supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>null</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dev</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pipe</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stdio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>udp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tcp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu-vdagent</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </console>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <gic supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <genid supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backup supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <async-teardown supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <ps2 supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sev supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sgx supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hyperv supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='features'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>relaxed</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vapic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>spinlocks</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vpindex</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>runtime</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>synic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stimer</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reset</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vendor_id</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>frequencies</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reenlightenment</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tlbflush</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ipi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>avic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emsr_bitmap</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>xmm_input</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hyperv>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <launchSecurity supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='sectype'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tdx</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </launchSecurity>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </features>
Dec 03 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
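[editor's note: the _get_domain_capabilities call referenced above wraps libvirt's virConnectGetDomainCapabilities API, which returns the <domainCapabilities> XML dumped here once per emulator/arch/machine-type combination. A minimal standalone sketch of the same query, assuming libvirt-python is installed on the host and reusing the emulator path, arch, and machine type that appear in this log (not nova's actual code path):

    import libvirt
    import xml.etree.ElementTree as ET

    # Read-only connection to the local libvirt daemon is enough for capability queries.
    conn = libvirt.openReadOnly('qemu:///system')

    # Query one emulator/arch/machine/virttype combination, mirroring the
    # per-combination dumps nova-compute logs at DEBUG level.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, from <path> in the dump
        'i686',                   # arch, as in "arch=i686" above
        'pc',                     # machine type, as in "machine_type=pc" above
        'kvm',                    # virt type, from <domain>
        0)

    # In custom CPU mode, each <model> carries usable='yes'/'no'; models marked
    # usable='no' are followed by <blockers> naming the host features they need.
    root = ET.fromstring(caps_xml)
    for model in root.findall(".//mode[@name='custom']/model"):
        print(model.text, model.get('usable'))

    conn.close()

The same XML can be fetched from the CLI with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm]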
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.223 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 03 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <domain>kvm</domain>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <arch>i686</arch>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <vcpu max='240'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <iothreads supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <os supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='firmware'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <loader supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>rom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pflash</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='readonly'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>yes</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='secure'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </loader>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </os>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='maximumMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <vendor>AMD</vendor>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='succor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='custom' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-128'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-256'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-512'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 ceph-mon[192821]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <memoryBacking supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='sourceType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>anonymous</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>memfd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </memoryBacking>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <disk supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='diskDevice'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>disk</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cdrom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>floppy</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>lun</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ide</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>fdc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>sata</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </disk>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <graphics supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vnc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egl-headless</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </graphics>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <video supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='modelType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vga</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cirrus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>none</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>bochs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ramfb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </video>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hostdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='mode'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>subsystem</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='startupPolicy'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>mandatory</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>requisite</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>optional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='subsysType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pci</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='capsType'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='pciBackend'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hostdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <rng supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>random</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </rng>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <filesystem supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='driverType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>path</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>handle</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtiofs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </filesystem>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <tpm supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-tis</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-crb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emulator</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>external</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendVersion'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>2.0</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </tpm>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <redirdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </redirdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <channel supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </channel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <crypto supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </crypto>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <interface supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>passt</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </interface>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <panic supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>isa</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>hyperv</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </panic>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <console supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>null</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dev</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pipe</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stdio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>udp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tcp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu-vdagent</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </console>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <gic supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <genid supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backup supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <async-teardown supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <ps2 supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sev supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sgx supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hyperv supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='features'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>relaxed</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vapic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>spinlocks</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vpindex</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>runtime</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>synic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stimer</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reset</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vendor_id</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>frequencies</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reenlightenment</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tlbflush</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ipi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>avic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emsr_bitmap</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>xmm_input</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hyperv>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <launchSecurity supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='sectype'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tdx</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </launchSecurity>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </features>
Dec 03 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
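The <domainCapabilities> document logged above is what nova-compute's _get_domain_capabilities (host.py, referenced in the line above) retrieves through libvirt's virConnectGetDomainCapabilities API. A minimal sketch, not Nova's implementation, of fetching and summarizing the same data with libvirt-python; the qemu:///system URI is an assumption, while the emulator path and machine type are taken from the dump itself:

    # Sketch only: list the custom-mode CPU models from the domain
    # capabilities XML and, for unusable ones, the blocking features.
    import xml.etree.ElementTree as ET

    import libvirt  # libvirt-python bindings

    conn = libvirt.open('qemu:///system')  # assumed local connection URI
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',   # emulator path seen in the dump
        'x86_64',
        'pc-q35-rhel9.8.0',        # machine type seen in the dump
        'kvm')
    root = ET.fromstring(caps_xml)

    for model in root.findall("./cpu/mode[@name='custom']/model"):
        name = model.text
        if model.get('usable') == 'yes':
            print(f"{name}: usable")
        else:
            # Each <blockers model='...'> element lists the host CPU
            # features whose absence makes that named model unusable.
            blockers = root.find(
                f"./cpu/mode[@name='custom']/blockers[@model='{name}']")
            missing = ([f.get('name') for f in blockers.findall('feature')]
                       if blockers is not None else [])
            print(f"{name}: blocked by {', '.join(missing)}")

    conn.close()

Run against a dump like the one above, this would report the Skylake, Snowridge, and similar Intel models as blocked by the listed features, with only the Westmere variants (and the generic kvm/qemu/pentium models) printing as usable.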
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.302 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.309 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 03 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <domain>kvm</domain>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <arch>x86_64</arch>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <vcpu max='4096'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <iothreads supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <os supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='firmware'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>efi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <loader supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>rom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pflash</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='readonly'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>yes</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='secure'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>yes</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </loader>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </os>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='maximumMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <vendor>AMD</vendor>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='succor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='custom' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-128'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-256'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-512'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </cpu>
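
The `<cpu>` listing above is libvirt's domain-capabilities report for this host: each `<model usable='no'>` entry is paired with a `<blockers>` element naming the CPU features the host cannot provide, which is why every Icelake-Server, SapphireRapids, SierraForest, Skylake and Snowridge variant is blocked here while Nehalem, Westmere and SandyBridge remain usable (the host's virtual CPU evidently does not expose AVX-512, AMX, TSX and related flags). To reproduce this report outside the nova_compute log, a minimal sketch with the libvirt Python bindings could look like the following; the qemu:///system URI and the arch/virttype arguments are assumptions, not values taken from this log:

import xml.etree.ElementTree as ET
import libvirt

# Fetch the same domain-capabilities XML document that nova_compute logged above.
conn = libvirt.open('qemu:///system')                     # assumed local connection URI
caps = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
root = ET.fromstring(caps)

# The model usability list lives under <cpu><mode name='custom'>;
# <blockers model='X'> elements are siblings of the <model> entries.
mode = root.find(".//cpu/mode[@name='custom']")
blocked = {b.get('model'): [f.get('name') for f in b.findall('feature')]
           for b in mode.findall('blockers')}
for model in mode.findall('model'):
    if model.get('usable') == 'no':
        print(model.text, '->', ', '.join(blocked.get(model.text, [])))

The same document can also be printed directly on the host with `virsh domcapabilities --virttype kvm`.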
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <memoryBacking supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='sourceType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>anonymous</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>memfd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </memoryBacking>
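
The `<memoryBacking>` element advertises three usable memory source types: anonymous (the default), file-backed, and memfd; memfd is the variant that allows shared guest memory (for vhost-user or virtiofs style setups) without configuring hugetlbfs or backing-file paths. A self-contained check for it, under the same assumed connection parameters as the sketch above:

import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open('qemu:///system')
root = ET.fromstring(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0))
src = [v.text for v in root.findall(".//memoryBacking/enum[@name='sourceType']/value")]
print('memfd supported:', 'memfd' in src)   # True for this host, per the log above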
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <disk supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='diskDevice'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>disk</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cdrom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>floppy</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>lun</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>fdc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>sata</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </disk>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <graphics supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vnc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egl-headless</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </graphics>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <video supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='modelType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vga</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cirrus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>none</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>bochs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ramfb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </video>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hostdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='mode'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>subsystem</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='startupPolicy'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>mandatory</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>requisite</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>optional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='subsysType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pci</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='capsType'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='pciBackend'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hostdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <rng supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>random</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </rng>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <filesystem supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='driverType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>path</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>handle</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtiofs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </filesystem>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <tpm supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-tis</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-crb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emulator</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>external</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendVersion'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>2.0</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </tpm>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <redirdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </redirdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <channel supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </channel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <crypto supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </crypto>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <interface supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>passt</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </interface>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <panic supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>isa</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>hyperv</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </panic>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <console supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>null</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dev</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pipe</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stdio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>udp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tcp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu-vdagent</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </console>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <gic supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <genid supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backup supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <async-teardown supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <ps2 supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sev supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sgx supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hyperv supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='features'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>relaxed</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vapic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>spinlocks</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vpindex</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>runtime</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>synic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stimer</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reset</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vendor_id</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>frequencies</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reenlightenment</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tlbflush</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ipi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>avic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emsr_bitmap</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>xmm_input</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hyperv>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <launchSecurity supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='sectype'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tdx</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </launchSecurity>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </features>
Dec 03 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 03 01:41:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.411 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 03 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <domain>kvm</domain>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <arch>x86_64</arch>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <vcpu max='240'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <iothreads supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <os supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='firmware'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <loader supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>rom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pflash</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='readonly'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>yes</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='secure'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>no</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </loader>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </os>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='maximumMigratable'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>on</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>off</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <vendor>AMD</vendor>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='succor'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <mode name='custom' supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Denverton-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='auto-ibrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amd-psfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='stibp-always-on'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='EPYC-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-128'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-256'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx10-512'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='prefetchiti'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Haswell-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512er'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512pf'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fma4'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tbm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xop'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='amx-tile'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-bf16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-fp16'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bitalg'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrc'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fzrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='la57'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='taa-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xfd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ifma'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cmpccxadd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fbsdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='fsrs'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ibrs-all'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mcdt-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pbrsb-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='psdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='serialize'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vaes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='hle'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='rtm'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512bw'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512cd'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512dq'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512f'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='avx512vl'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='invpcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pcid'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='pku'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='mpx'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='core-capability'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='split-lock-detect'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='cldemote'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='erms'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='gfni'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdir64b'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='movdiri'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='xsaves'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='athlon-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='core2duo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='coreduo-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='n270-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='ss'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <blockers model='phenom-v1'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnow'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <feature name='3dnowext'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </blockers>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </mode>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </cpu>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <memoryBacking supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <enum name='sourceType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>anonymous</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <value>memfd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </memoryBacking>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <disk supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='diskDevice'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>disk</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cdrom</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>floppy</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>lun</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ide</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>fdc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>sata</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </disk>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <graphics supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vnc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egl-headless</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </graphics>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <video supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='modelType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vga</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>cirrus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>none</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>bochs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ramfb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </video>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hostdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='mode'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>subsystem</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='startupPolicy'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>mandatory</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>requisite</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>optional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='subsysType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pci</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>scsi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='capsType'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='pciBackend'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hostdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <rng supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtio-non-transitional</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>random</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>egd</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </rng>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <filesystem supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='driverType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>path</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>handle</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>virtiofs</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </filesystem>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <tpm supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-tis</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tpm-crb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emulator</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>external</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendVersion'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>2.0</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </tpm>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <redirdev supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='bus'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>usb</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </redirdev>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <channel supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </channel>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <crypto supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendModel'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>builtin</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </crypto>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <interface supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='backendType'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>default</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>passt</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </interface>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <panic supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='model'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>isa</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>hyperv</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </panic>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <console supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='type'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>null</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vc</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pty</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dev</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>file</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>pipe</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stdio</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>udp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tcp</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>unix</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>qemu-vdagent</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>dbus</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </console>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </devices>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   <features>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <gic supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <genid supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <backup supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <async-teardown supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <ps2 supported='yes'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sev supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <sgx supported='no'/>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <hyperv supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='features'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>relaxed</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vapic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>spinlocks</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vpindex</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>runtime</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>synic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>stimer</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reset</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>vendor_id</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>frequencies</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>reenlightenment</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tlbflush</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>ipi</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>avic</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>emsr_bitmap</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>xmm_input</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </defaults>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </hyperv>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     <launchSecurity supported='yes'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       <enum name='sectype'>
Dec 03 01:41:37 compute-0 nova_compute[350390]:         <value>tdx</value>
Dec 03 01:41:37 compute-0 nova_compute[350390]:       </enum>
Dec 03 01:41:37 compute-0 nova_compute[350390]:     </launchSecurity>
Dec 03 01:41:37 compute-0 nova_compute[350390]:   </features>
Dec 03 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec 03 01:41:37 compute-0 nova_compute[350390]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.541 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.541 350396 INFO nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Secure Boot support detected
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.545 350396 INFO nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.568 350396 DEBUG nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.622 350396 INFO nova.virt.node [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Determined node identity 107397d2-51bc-4a03-bce4-7cd69319cf05 from /var/lib/nova/compute_id
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.658 350396 WARNING nova.compute.manager [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Compute nodes ['107397d2-51bc-4a03-bce4-7cd69319cf05'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.715 350396 INFO nova.compute.manager [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 03 01:41:37 compute-0 python3.9[350922]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 WARNING nova.compute.manager [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
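The acquire/release pairs above are oslo.concurrency's named-lock pattern: the resource tracker serializes everything that mutates compute_resources under one lock name. A minimal sketch of both forms, assuming oslo.concurrency is installed (the method bodies are placeholders, not nova code):

    # Sketch of the named-lock pattern behind the journal lines above.
    from oslo_concurrency import lockutils

    # Context-manager form: emits the "acquired"/"released" debug lines seen here.
    with lockutils.lock("compute_resources"):
        pass  # critical section, e.g. cache cleanup

    # Decorator form, as used on ResourceTracker methods.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass  # audited under the same lock name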
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.775 350396 DEBUG oslo_concurrency.processutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
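The pg_autoscaler lines above are internally consistent: each "pg target" is capacity_ratio x bias x a cluster-wide PG budget, then quantized to a power of two (with a per-pool floor, which is why tiny pools stay at 32). From the logged numbers the budget works out to exactly 300, consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster (an inference from the arithmetic, not stated in the log). A quick check:

    # Reproduce the pg_autoscaler targets logged above.
    # Assumption: budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    budget = 300
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * budget)
    # ~0.0021557, ~0.00061047, ~0.00064863 -- matching the "pg target" values.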
Dec 03 01:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:41:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082600722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.249 350396 DEBUG oslo_concurrency.processutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
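That 0.474s subprocess is how the RBD-backed driver sizes its storage: it shells out to ceph df and reads the JSON totals rather than talking to the cluster natively. A sketch of the same call and the fields of interest, assuming the ceph CLI, keyring and conf path from the log line:

    # Sketch: run the command nova logged above and pull out cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])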
Dec 03 01:41:38 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 03 01:41:38 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 03 01:41:38 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1082600722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:41:38 compute-0 sudo[351117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjtgwbfalyvomjholvjnxidxwpuofknq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726098.0259717-1549-257261529239710/AnsiballZ_podman_container.py'
Dec 03 01:41:38 compute-0 sudo[351117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.845 350396 WARNING nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.847 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4576MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.847 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.847 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.866 350396 WARNING nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] No compute node record for compute-0.ctlplane.example.com:107397d2-51bc-4a03-bce4-7cd69319cf05: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 107397d2-51bc-4a03-bce4-7cd69319cf05 could not be found.
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.896 350396 INFO nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 107397d2-51bc-4a03-bce4-7cd69319cf05
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.957 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.958 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:41:38 compute-0 python3.9[351119]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
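Of that wall of module parameters, only a handful are in effect: name=nova_nvme_cleaner, state=absent, force_delete=True (everything else is the module's defaults, logged as None). As a rough CLI equivalent, and purely as an assumption about what the module does under the hood, this is close to "podman rm --force --ignore nova_nvme_cleaner":

    # Rough equivalent of the podman_container task above (state=absent,
    # force_delete=True); --ignore makes a missing container a no-op.
    import subprocess

    subprocess.run(
        ["podman", "rm", "--force", "--ignore", "nova_nvme_cleaner"],
        check=True,
    )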
Dec 03 01:41:39 compute-0 sudo[351117]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:39 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:41:39 compute-0 ceph-mon[192821]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Dec 03 01:41:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.069 350396 INFO nova.scheduler.client.report [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] [req-e196463b-40da-40c9-9372-3ef0ada7f326] Created resource provider record via placement API for resource provider with UUID 107397d2-51bc-4a03-bce4-7cd69319cf05 and name compute-0.ctlplane.example.com.
Dec 03 01:41:40 compute-0 sudo[351289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqsrvnpdfkfxhdbxusrxafooodyszdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726099.5818863-1557-109237653758223/AnsiballZ_systemd.py'
Dec 03 01:41:40 compute-0 sudo[351289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:40 compute-0 python3.9[351291]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.485 350396 DEBUG oslo_concurrency.processutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:41:40 compute-0 systemd[1]: Stopping nova_compute container...
Dec 03 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.679 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.680 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.681 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.681 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:41:40 compute-0 ceph-mon[192821]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.975 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.976 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
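The burst of "Registering pollster" lines above is the polling manager handing every extension from the [pollsters] source to a single-worker thread pool ("Processing pollsters for [pollsters] with [1] threads"). A minimal sketch of that fan-out with concurrent.futures; the names here are illustrative, not ceilometer's internals:

    # Fan pollsters out to a bounded executor, one task per pollster.
    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["disk.device.capacity", "disk.device.read.bytes"]  # stand-ins

    def run_pollster(name, cache, history, discovery_cache):
        return name  # discovery and sample collection would happen here

    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(run_pollster, p, {}, {}, {})
                   for p in pollsters]
        results = [f.result() for f in futures]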
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
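Every "Executing discovery" / "Skip pollster" pair above follows one rule: run the local_instances discovery and, if it yields nothing (as here, where no instances are reported on this host during the restart), skip the meter for this cycle. Sketched with illustrative names, not ceilometer's real internals:

    # Discover-then-skip logic producing the message pairs above.
    def poll_one(name, discover):
        resources = discover()  # "Executing discovery process ..."
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [(name, r) for r in resources]

    poll_one("cpu", lambda: [])  # empty discovery -> the skip message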
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:41:41 compute-0 virtqemud[154511]: End of file while reading data: Input/output error
Dec 03 01:41:41 compute-0 systemd[1]: libpod-1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101.scope: Deactivated successfully.
Dec 03 01:41:41 compute-0 systemd[1]: libpod-1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101.scope: Consumed 4.116s CPU time.
Dec 03 01:41:41 compute-0 podman[351296]: 2025-12-03 01:41:41.074400688 +0000 UTC m=+0.517533283 container died 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Dec 03 01:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101-userdata-shm.mount: Deactivated successfully.
Dec 03 01:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef-merged.mount: Deactivated successfully.
Dec 03 01:41:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:42 compute-0 sudo[351342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:42 compute-0 sudo[351342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:42 compute-0 sudo[351342]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:42 compute-0 sudo[351367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:41:42 compute-0 sudo[351367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:42 compute-0 sudo[351367]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:42 compute-0 sudo[351392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:42 compute-0 sudo[351392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:42 compute-0 sudo[351392]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:42 compute-0 sudo[351417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:41:42 compute-0 sudo[351417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:42 compute-0 ceph-mon[192821]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:42 compute-0 podman[351296]: 2025-12-03 01:41:42.823926985 +0000 UTC m=+2.267059570 container cleanup 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Dec 03 01:41:42 compute-0 podman[351296]: nova_compute
Dec 03 01:41:42 compute-0 podman[351447]: nova_compute
Dec 03 01:41:42 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 03 01:41:42 compute-0 systemd[1]: Stopped nova_compute container.
Dec 03 01:41:42 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.351s CPU time, 20.6M memory peak, read 0B from disk, written 105.5K to disk.
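At this point the stop half of the restart is done: the libpod scope is deactivated, the overlay mounts are gone, and systemd has logged the unit's resource accounting. When the unit starts again below, the container comes back under the same name and ID; its state can be checked from the host with podman inspect (a sketch; the field names follow podman's docker-compatible inspect schema):

    # Host-side check of the nova_compute container after the restart.
    import json
    import subprocess

    raw = subprocess.run(
        ["podman", "inspect", "nova_compute"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(raw)[0]  # podman inspect returns a JSON array
    print(info["State"]["Status"],
          info["Config"]["Labels"].get("managed_by"))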
Dec 03 01:41:42 compute-0 systemd[1]: Starting nova_compute container...
Dec 03 01:41:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:43 compute-0 podman[351467]: 2025-12-03 01:41:43.135681647 +0000 UTC m=+0.158969099 container init 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 01:41:43 compute-0 podman[351467]: 2025-12-03 01:41:43.147632742 +0000 UTC m=+0.170920154 container start 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 01:41:43 compute-0 podman[351467]: nova_compute
Dec 03 01:41:43 compute-0 nova_compute[351485]: + sudo -E kolla_set_configs
Dec 03 01:41:43 compute-0 systemd[1]: Started nova_compute container.
Dec 03 01:41:43 compute-0 sudo[351289]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Validating config file
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying service configuration files
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/ceph
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Creating directory /etc/ceph
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Writing out command to execute
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 03 01:41:43 compute-0 nova_compute[351485]: ++ cat /run_command
Dec 03 01:41:43 compute-0 nova_compute[351485]: + CMD=nova-compute
Dec 03 01:41:43 compute-0 nova_compute[351485]: + ARGS=
Dec 03 01:41:43 compute-0 nova_compute[351485]: + sudo kolla_copy_cacerts
Dec 03 01:41:43 compute-0 nova_compute[351485]: + [[ ! -n '' ]]
Dec 03 01:41:43 compute-0 nova_compute[351485]: + . kolla_extend_start
Dec 03 01:41:43 compute-0 nova_compute[351485]: Running command: 'nova-compute'
Dec 03 01:41:43 compute-0 nova_compute[351485]: + echo 'Running command: '\''nova-compute'\'''
Dec 03 01:41:43 compute-0 nova_compute[351485]: + umask 0022
Dec 03 01:41:43 compute-0 nova_compute[351485]: + exec nova-compute
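
[Annotation] The trace above is the kolla container entrypoint flow: stage config files out of /var/lib/kolla/config_files, fix their permissions, read the service command from /run_command, and exec it. Below is a minimal Python sketch of the delete/copy/set-permission loop, with a hypothetical CONFIGS list standing in for what /var/lib/kolla/config_files/config.json actually drives; this is an illustration, not kolla's real set_configs.py.

    import os
    import shutil

    # Hypothetical (src, dest, mode) entries; in kolla the real list comes from
    # /var/lib/kolla/config_files/config.json.
    CONFIGS = [
        ("/var/lib/kolla/config_files/ceph/ceph.conf", "/etc/ceph/ceph.conf", 0o600),
        ("/var/lib/kolla/config_files/ssh-config", "/var/lib/nova/.ssh/config", 0o600),
    ]

    def sync_one(src, dest, mode):
        if os.path.lexists(dest):
            print(f"INFO:__main__:Deleting {dest}")
            os.remove(dest)
        print(f"INFO:__main__:Copying {src} to {dest}")
        shutil.copy(src, dest)
        print(f"INFO:__main__:Setting permission for {dest}")
        os.chmod(dest, mode)

    for src, dest, mode in CONFIGS:
        if os.path.exists(src):  # skip entries that are absent on this host
            sync_one(src, dest, mode)
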
Dec 03 01:41:43 compute-0 sudo[351417]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:41:43 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a1177377-94bf-4365-8239-3dc3fb9f8f4b does not exist
Dec 03 01:41:43 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 27cd01a6-fc67-44b1-8d4e-87843be3568c does not exist
Dec 03 01:41:43 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a5ea1430-cf1e-4e45-86ce-0ab80dfe2682 does not exist
Dec 03 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:41:43 compute-0 sudo[351541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:43 compute-0 sudo[351541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:43 compute-0 sudo[351541]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:43 compute-0 sudo[351606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:41:43 compute-0 sudo[351606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:43 compute-0 sudo[351606]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:43 compute-0 sudo[351644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:43 compute-0 sudo[351644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:43 compute-0 sudo[351644]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
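
[Annotation] The mon_command payloads dispatched above are plain JSON objects keyed by "prefix". A minimal sketch of issuing the same "config generate-minimal-conf" call through the python-rados binding, assuming /etc/ceph/ceph.conf and a readable client.admin keyring on the host (the mgr uses its own internal session; this only illustrates the command format):

    import json
    import rados

    # Connect as client.admin using the local ceph.conf (assumes keyring present).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        # Same JSON shape as the mon_command({...}) entries logged above.
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()
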
Dec 03 01:41:43 compute-0 sudo[351688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:41:43 compute-0 sudo[351688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:44 compute-0 sudo[351764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tirxbmcqxceoinnflvvewfyjxxfbsmiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726103.514909-1566-127139456404954/AnsiballZ_podman_container.py'
Dec 03 01:41:44 compute-0 sudo[351764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:44 compute-0 python3.9[351767]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 03 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.512398392 +0000 UTC m=+0.096957039 container create 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.474577456 +0000 UTC m=+0.059136163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:41:44 compute-0 systemd[1]: Started libpod-conmon-799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6.scope.
Dec 03 01:41:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:44 compute-0 systemd[1]: Started libpod-conmon-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa.scope.
Dec 03 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.653098571 +0000 UTC m=+0.237657218 container init 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:41:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.665839976 +0000 UTC m=+0.250398603 container start 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.671802313 +0000 UTC m=+0.256360950 container attach 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:44 compute-0 loving_nobel[351854]: 167 167
Dec 03 01:41:44 compute-0 systemd[1]: libpod-799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6.scope: Deactivated successfully.
Dec 03 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.700669679 +0000 UTC m=+0.285228326 container died 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:41:44 compute-0 podman[351840]: 2025-12-03 01:41:44.72433281 +0000 UTC m=+0.188332100 container init 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 01:41:44 compute-0 podman[351840]: 2025-12-03 01:41:44.734261887 +0000 UTC m=+0.198261167 container start 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df7ecf9a2e03bafad96681792113b94c7d3e556ebd0565973dc8a720fef7725-merged.mount: Deactivated successfully.
Dec 03 01:41:44 compute-0 python3.9[351767]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 03 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.76838802 +0000 UTC m=+0.352946647 container remove 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:41:44 compute-0 systemd[1]: libpod-conmon-799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6.scope: Deactivated successfully.
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Applying nova statedir ownership
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 03 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Nova statedir ownership complete
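
[Annotation] The nova_compute_init pass above walks /var/lib/nova, re-owns anything not already 42436:42436, and labels it container_file_t, skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A rough sketch of that loop, using chcon(1) for the SELinux relabel; this is not the real nova_statedir_ownership.py.

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436                    # "Target ownership" in the log
    SKIP = {"/var/lib/nova/compute_id"}                # NOVA_STATEDIR_OWNERSHIP_SKIP
    CONTEXT = "system_u:object_r:container_file_t:s0"  # label applied above

    def fix_statedir(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # e.g. "Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436"
                    os.lchown(path, TARGET_UID, TARGET_GID)
                subprocess.run(["chcon", "-h", CONTEXT, path], check=False)

    if __name__ == "__main__":
        fix_statedir()
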
Dec 03 01:41:44 compute-0 systemd[1]: libpod-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa.scope: Deactivated successfully.
Dec 03 01:41:44 compute-0 podman[351882]: 2025-12-03 01:41:44.819984501 +0000 UTC m=+0.043499346 container died 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Dec 03 01:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa-userdata-shm.mount: Deactivated successfully.
Dec 03 01:41:44 compute-0 podman[351893]: 2025-12-03 01:41:44.885221243 +0000 UTC m=+0.069848022 container cleanup 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 03 01:41:44 compute-0 systemd[1]: libpod-conmon-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa.scope: Deactivated successfully.
Dec 03 01:41:44 compute-0 sudo[351764]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:44 compute-0 podman[351922]: 2025-12-03 01:41:44.971165463 +0000 UTC m=+0.053473675 container create 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:41:45 compute-0 systemd[1]: Started libpod-conmon-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope.
Dec 03 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:44.94850407 +0000 UTC m=+0.030812322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:41:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:45.113290371 +0000 UTC m=+0.195598603 container init 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:45.132617081 +0000 UTC m=+0.214925293 container start 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:45.137863138 +0000 UTC m=+0.220171390 container attach 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.376 351492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 03 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.376 351492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 03 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.376 351492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 03 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.377 351492 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 03 01:41:45 compute-0 ceph-mon[192821]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.537 351492 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34-merged.mount: Deactivated successfully.
Dec 03 01:41:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:45 compute-0 sshd-session[318646]: Connection closed by 192.168.122.30 port 54200
Dec 03 01:41:45 compute-0 sshd-session[318643]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.564 351492 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.564 351492 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
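
[Annotation] The grep run above is a capability probe: grep -F exits 0 when the literal string node.session.scan occurs in the file and 1 when it does not, so the return code doubles as a feature flag for iscsiadm's manual-scan support (here it inspects the run-on-host wrapper installed at /usr/sbin/iscsiadm earlier in this log). A sketch of the same check:

    import subprocess

    def iscsiadm_supports_manual_scan(path="/sbin/iscsiadm"):
        # grep -F returns 0 if the literal string is found, 1 if not (as logged above).
        res = subprocess.run(
            ["grep", "-F", "node.session.scan", path],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return res.returncode == 0

    print(iscsiadm_supports_manual_scan())  # the logged run returned 1 -> False
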
Dec 03 01:41:45 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 03 01:41:45 compute-0 systemd[1]: session-55.scope: Consumed 4min 13.490s CPU time.
Dec 03 01:41:45 compute-0 systemd-logind[800]: Session 55 logged out. Waiting for processes to exit.
Dec 03 01:41:45 compute-0 systemd-logind[800]: Removed session 55.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.164 351492 INFO nova.virt.driver [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.294 351492 INFO nova.compute.provider_config [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.314 351492 DEBUG oslo_concurrency.lockutils [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.314 351492 DEBUG oslo_concurrency.lockutils [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.314 351492 DEBUG oslo_concurrency.lockutils [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.315 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.315 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.315 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 laughing_clarke[351955]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:41:46 compute-0 laughing_clarke[351955]: --> relative data size: 1.0
Dec 03 01:41:46 compute-0 laughing_clarke[351955]: --> All data devices are unavailable
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.356 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
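
(Editorial aside on the "= ****" values above: oslo.config masks any option registered with secret=True, such as database.connection and database.slave_connection, so log_opt_values never prints real connection strings. A minimal standalone sketch of that behaviour in Python, with a toy option set rather than nova's real one:)

    # Sketch: why the journal shows "= ****" for connection strings.
    # oslo.config masks options registered with secret=True when
    # log_opt_values() dumps the effective configuration at DEBUG level.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)

    opts = [
        cfg.StrOpt('connection', secret=True,
                   help='Database URL; masked in log output'),
        cfg.IntOpt('max_retries', default=10),
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts, group='database')
    conf(args=[])  # parse an empty command line; defaults apply

    # Prints "database.connection = ****" for the secret option and the
    # real value for the non-secret one, matching the lines above.
    conf.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
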
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
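
(Editorial aside: with the values logged above — downtime 500 ms, 10 steps, delay factor 75 — nova's libvirt driver ramps the permitted live-migration downtime upward in steps rather than granting the full 500 ms at once, with the per-step delay scaled by the amount of guest data to copy. The following linear ramp is an illustrative assumption for clarity, not a reproduction of nova's exact schedule:)

    # Illustrative only: a linear downtime ramp under the settings above.
    # max_downtime_ms=500, steps=10, delay_s=75 (scaled by data size in GB).
    def downtime_steps(data_gb, max_downtime_ms=500, steps=10, delay_s=75):
        """Yield (seconds_into_migration, allowed_downtime_ms) pairs."""
        delay = int(delay_s * data_gb)    # larger guests ramp more slowly
        base = max_downtime_ms // steps   # initial allowance: 50 ms here
        for i in range(steps + 1):
            yield i * delay, base + (max_downtime_ms - base) * i // steps

    # e.g. a 4 GiB guest: (0, 50), (300, 95), ..., (3000, 500)
    for t, ms in downtime_steps(4):
        print(f"t+{t}s: allow {ms} ms downtime")
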
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 WARNING oslo_config.cfg [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 03 01:41:46 compute-0 nova_compute[351485]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 03 01:41:46 compute-0 nova_compute[351485]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 03 01:41:46 compute-0 nova_compute[351485]: and ``live_migration_inbound_addr`` respectively.
Dec 03 01:41:46 compute-0 nova_compute[351485]: ).  Its value may be silently ignored in the future.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_secret_uuid        = 3765feb2-36f8-5b86-b74c-64e9221f9c4c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 systemd[1]: libpod-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope: Deactivated successfully.
Dec 03 01:41:46 compute-0 systemd[1]: libpod-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope: Consumed 1.181s CPU time.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.405 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.405 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.405 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 podman[351922]: 2025-12-03 01:41:46.406992457 +0000 UTC m=+1.489300709 container died 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.443 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.443 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.443 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084-merged.mount: Deactivated successfully.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.469 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.469 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.469 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.470 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.470 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.471 351492 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.496 351492 INFO nova.virt.node [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Determined node identity 107397d2-51bc-4a03-bce4-7cd69319cf05 from /var/lib/nova/compute_id
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.496 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.497 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.497 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.497 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 03 01:41:46 compute-0 podman[351922]: 2025-12-03 01:41:46.513856551 +0000 UTC m=+1.596164773 container remove 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.517 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fee0837a580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 03 01:41:46 compute-0 systemd[1]: libpod-conmon-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope: Deactivated successfully.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.537 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fee0837a580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.539 351492 INFO nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Connection event '1' reason 'None'
Dec 03 01:41:46 compute-0 sudo[351688]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.547 351492 INFO nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host capabilities <capabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]: 
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <host>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <uuid>bb85f21b-9f67-464f-8fbe-e50d4e1e7eb4</uuid>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <arch>x86_64</arch>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model>EPYC-Rome-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <vendor>AMD</vendor>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <microcode version='16777317'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <signature family='23' model='49' stepping='0'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='x2apic'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='tsc-deadline'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='osxsave'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='hypervisor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='tsc_adjust'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='spec-ctrl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='stibp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='arch-capabilities'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='cmp_legacy'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='topoext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='virt-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='lbrv'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='tsc-scale'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='vmcb-clean'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='pause-filter'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='pfthreshold'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='svme-addr-chk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='rdctl-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='skip-l1dfl-vmentry'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='mds-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature name='pschange-mc-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <pages unit='KiB' size='4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <pages unit='KiB' size='2048'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <pages unit='KiB' size='1048576'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <power_management>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <suspend_mem/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </power_management>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <iommu support='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <migration_features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <live/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <uri_transports>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <uri_transport>tcp</uri_transport>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <uri_transport>rdma</uri_transport>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </uri_transports>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </migration_features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <topology>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <cells num='1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <cell id='0'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           <memory unit='KiB'>7864312</memory>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           <pages unit='KiB' size='4'>1966078</pages>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           <pages unit='KiB' size='2048'>0</pages>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           <distances>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <sibling id='0' value='10'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           </distances>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           <cpus num='8'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:           </cpus>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         </cell>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </cells>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </topology>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <cache>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </cache>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <secmodel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model>selinux</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <doi>0</doi>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </secmodel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <secmodel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model>dac</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <doi>0</doi>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </secmodel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </host>
Dec 03 01:41:46 compute-0 nova_compute[351485]: 
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <guest>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <os_type>hvm</os_type>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <arch name='i686'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <wordsize>32</wordsize>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <domain type='qemu'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <domain type='kvm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </arch>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <pae/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <nonpae/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <acpi default='on' toggle='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <apic default='on' toggle='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <cpuselection/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <deviceboot/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <disksnapshot default='on' toggle='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <externalSnapshot/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </guest>
Dec 03 01:41:46 compute-0 nova_compute[351485]: 
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <guest>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <os_type>hvm</os_type>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <arch name='x86_64'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <wordsize>64</wordsize>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <domain type='qemu'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <domain type='kvm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </arch>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <acpi default='on' toggle='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <apic default='on' toggle='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <cpuselection/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <deviceboot/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <disksnapshot default='on' toggle='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <externalSnapshot/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </guest>
Dec 03 01:41:46 compute-0 nova_compute[351485]: 
Dec 03 01:41:46 compute-0 nova_compute[351485]: </capabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]: 
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.555 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.556 351492 DEBUG nova.virt.libvirt.volume.mount [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.562 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 03 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <domain>kvm</domain>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <arch>i686</arch>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <vcpu max='240'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <iothreads supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <os supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='firmware'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <loader supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>rom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pflash</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='readonly'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>yes</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='secure'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </loader>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </os>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='maximumMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <vendor>AMD</vendor>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='succor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='custom' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-128'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-256'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-512'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 sudo[352033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:46 compute-0 sudo[352033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <memoryBacking supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='sourceType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>file</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>anonymous</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>memfd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </memoryBacking>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <disk supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='diskDevice'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>disk</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cdrom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>floppy</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>lun</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ide</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>fdc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>sata</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 sudo[352033]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <graphics supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vnc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egl-headless</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </graphics>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <video supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='modelType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vga</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cirrus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>none</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>bochs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ramfb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </video>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hostdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='mode'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>subsystem</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='startupPolicy'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>mandatory</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>requisite</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>optional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='subsysType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pci</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='capsType'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='pciBackend'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hostdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <rng supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>random</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </rng>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <filesystem supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='driverType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>path</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>handle</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtiofs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </filesystem>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <tpm supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-tis</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-crb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emulator</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>external</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendVersion'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>2.0</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </tpm>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <redirdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </redirdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <channel supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </channel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <crypto supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </crypto>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <interface supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>passt</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </interface>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <panic supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>isa</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>hyperv</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </panic>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <console supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>null</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dev</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>file</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pipe</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stdio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>udp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tcp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu-vdagent</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </console>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <gic supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <genid supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backup supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <async-teardown supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <ps2 supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sev supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sgx supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hyperv supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='features'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>relaxed</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vapic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>spinlocks</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vpindex</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>runtime</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>synic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stimer</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reset</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vendor_id</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>frequencies</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reenlightenment</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tlbflush</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ipi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>avic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emsr_bitmap</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>xmm_input</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hyperv>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <launchSecurity supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='sectype'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tdx</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </launchSecurity>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </features>
Dec 03 01:41:46 compute-0 nova_compute[351485]: </domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.569 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 03 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <domain>kvm</domain>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <arch>i686</arch>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <vcpu max='4096'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <iothreads supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <os supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='firmware'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <loader supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>rom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pflash</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='readonly'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>yes</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='secure'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </loader>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </os>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='maximumMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <vendor>AMD</vendor>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='succor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='custom' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-128'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-256'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-512'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 sudo[352058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <memoryBacking supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='sourceType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>file</value>
Dec 03 01:41:46 compute-0 sudo[352058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>anonymous</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>memfd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </memoryBacking>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <disk supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='diskDevice'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>disk</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cdrom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>floppy</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>lun</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>fdc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>sata</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <graphics supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vnc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egl-headless</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </graphics>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <video supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='modelType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vga</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cirrus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>none</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>bochs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ramfb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </video>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hostdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='mode'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>subsystem</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='startupPolicy'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>mandatory</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>requisite</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>optional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='subsysType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pci</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='capsType'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='pciBackend'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hostdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <rng supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>random</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </rng>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <filesystem supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='driverType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>path</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>handle</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtiofs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 sudo[352058]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </filesystem>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <tpm supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-tis</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-crb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emulator</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>external</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendVersion'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>2.0</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </tpm>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <redirdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </redirdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <channel supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </channel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <crypto supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </crypto>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <interface supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>passt</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </interface>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <panic supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>isa</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>hyperv</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </panic>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <console supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>null</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dev</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>file</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pipe</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stdio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>udp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tcp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu-vdagent</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </console>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <gic supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <genid supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backup supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <async-teardown supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <ps2 supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sev supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sgx supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hyperv supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='features'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>relaxed</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vapic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>spinlocks</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vpindex</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>runtime</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>synic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stimer</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reset</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vendor_id</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>frequencies</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reenlightenment</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tlbflush</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ipi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>avic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emsr_bitmap</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>xmm_input</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hyperv>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <launchSecurity supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='sectype'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tdx</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </launchSecurity>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </features>
Dec 03 01:41:46 compute-0 nova_compute[351485]: </domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.621 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.626 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 03 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <domain>kvm</domain>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <arch>x86_64</arch>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <vcpu max='240'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <iothreads supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <os supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='firmware'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <loader supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>rom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pflash</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='readonly'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>yes</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='secure'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </loader>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </os>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='maximumMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <vendor>AMD</vendor>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='succor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='custom' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-128'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-256'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-512'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <memoryBacking supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='sourceType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>file</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>anonymous</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>memfd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </memoryBacking>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <disk supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='diskDevice'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>disk</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cdrom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>floppy</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>lun</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ide</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>fdc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>sata</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <graphics supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vnc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egl-headless</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </graphics>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <video supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='modelType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vga</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cirrus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>none</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>bochs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ramfb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </video>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hostdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='mode'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>subsystem</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='startupPolicy'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>mandatory</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>requisite</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>optional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='subsysType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pci</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='capsType'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='pciBackend'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hostdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <rng supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>random</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </rng>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <filesystem supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='driverType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>path</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>handle</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtiofs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </filesystem>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <tpm supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-tis</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-crb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emulator</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>external</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendVersion'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>2.0</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </tpm>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <redirdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </redirdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <channel supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </channel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <crypto supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </crypto>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <interface supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>passt</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </interface>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <panic supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>isa</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>hyperv</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </panic>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <console supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>null</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dev</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>file</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pipe</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stdio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>udp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tcp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu-vdagent</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </console>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <gic supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <genid supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backup supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <async-teardown supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <ps2 supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sev supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sgx supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hyperv supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='features'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>relaxed</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vapic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>spinlocks</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vpindex</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>runtime</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>synic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stimer</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reset</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vendor_id</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>frequencies</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reenlightenment</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tlbflush</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ipi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>avic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emsr_bitmap</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>xmm_input</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hyperv>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <launchSecurity supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='sectype'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tdx</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </launchSecurity>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </features>
Dec 03 01:41:46 compute-0 nova_compute[351485]: </domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.742 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 03 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <path>/usr/libexec/qemu-kvm</path>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <domain>kvm</domain>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <arch>x86_64</arch>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <vcpu max='4096'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <iothreads supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <os supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='firmware'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>efi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <loader supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>rom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pflash</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='readonly'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>yes</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='secure'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>yes</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>no</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </loader>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </os>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <cpu>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-passthrough' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='hostPassthroughMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='maximum' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='maximumMigratable'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>on</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>off</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='host-model' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <vendor>AMD</vendor>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='x2apic'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-deadline'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='hypervisor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc_adjust'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='spec-ctrl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='stibp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='cmp_legacy'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='overflow-recov'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='succor'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='amd-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='virt-ssbd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lbrv'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='tsc-scale'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='vmcb-clean'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='flushbyasid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pause-filter'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='pfthreshold'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='svme-addr-chk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <feature policy='disable' name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <mode name='custom' supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Broadwell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 03 01:41:46 compute-0 sudo[352083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cascadelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 sudo[352083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 03 01:41:46 compute-0 sudo[352083]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Cooperlake-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Denverton-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Dhyana-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Genoa-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='auto-ibrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Milan-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amd-psfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='no-nested-data-bp'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='null-sel-clr-base'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='stibp-always-on'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-Rome-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='EPYC-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='GraniteRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-128'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-256'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx10-512'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='prefetchiti'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Haswell-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-noTSX'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v6'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Icelake-Server-v7'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='IvyBridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='KnightsMill-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4fmaps'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-4vnniw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512er'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512pf'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G4-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Opteron_G5-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fma4'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tbm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xop'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SapphireRapids-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='amx-tile'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-bf16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-fp16'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512-vpopcntdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bitalg'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vbmi2'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrc'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fzrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='la57'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='taa-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='tsx-ldtrk'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xfd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='SierraForest-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ifma'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-ne-convert'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx-vnni-int8'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='bus-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cmpccxadd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fbsdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='fsrs'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ibrs-all'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mcdt-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pbrsb-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='psdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='sbdr-ssdp-no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='serialize'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vaes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='vpclmulqdq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Client-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='hle'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='rtm'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Skylake-Server-v5'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512bw'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512cd'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512dq'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512f'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='avx512vl'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='invpcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pcid'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='pku'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='mpx'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v2'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v3'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='core-capability'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='split-lock-detect'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='Snowridge-v4'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='cldemote'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='erms'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='gfni'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdir64b'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='movdiri'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='xsaves'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='athlon-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='core2duo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='coreduo-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='n270-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='ss'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <blockers model='phenom-v1'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnow'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <feature name='3dnowext'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </blockers>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </mode>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </cpu>
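[editor's note] The <cpu> element that closes above is part of the libvirt domain-capabilities XML that nova_compute logs at startup: each named model under the mode name='custom' block carries usable='yes' or usable='no', and unusable models are followed by a <blockers> element listing the host-side features (avx512f, erms, hle, and so on) whose absence rules the model out. A minimal sketch of fetching the same document over libvirt-python and summarising model usability; the qemu:///system URI and the x86_64/kvm arguments are assumptions about this host, not values taken from the log:

    import libvirt
    import xml.etree.ElementTree as ET

    # Same XML document as the capability dump logged above, but pulled
    # directly from libvirtd instead of read out of the journal.
    conn = libvirt.open('qemu:///system')  # assumed local connection URI
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm')
    root = ET.fromstring(caps_xml)

    # Named CPU models live under cpu/mode[@name='custom']; the 'no'
    # entries are the ones accompanied by a <blockers> list above.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.get('usable'), model.text)
    conn.close()

The same document can also be inspected interactively with `virsh domcapabilities`.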
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <memoryBacking supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <enum name='sourceType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>file</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>anonymous</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <value>memfd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </memoryBacking>
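[editor's note] The <memoryBacking> block above enumerates the guest-memory source types this QEMU build accepts (file, anonymous, memfd). A short, self-contained check in the same style as the previous sketch, for example to confirm memfd support before enabling features that need shared guest memory such as virtiofs; the connection URI and arch/virttype arguments are again assumptions:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')  # assumed local connection URI
    root = ET.fromstring(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm'))

    # Collect the <value> entries under the sourceType enum shown above.
    source_types = {v.text for v in root.findall(
        "./memoryBacking/enum[@name='sourceType']/value")}
    print('memfd supported:', 'memfd' in source_types)
    conn.close()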
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <disk supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='diskDevice'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>disk</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cdrom</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>floppy</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>lun</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>fdc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>sata</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <graphics supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vnc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egl-headless</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </graphics>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <video supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='modelType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vga</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>cirrus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>none</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>bochs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ramfb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </video>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hostdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='mode'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>subsystem</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='startupPolicy'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>mandatory</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>requisite</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>optional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='subsysType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pci</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>scsi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='capsType'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='pciBackend'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hostdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <rng supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtio-non-transitional</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>random</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>egd</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </rng>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <filesystem supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='driverType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>path</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>handle</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>virtiofs</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </filesystem>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <tpm supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-tis</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tpm-crb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emulator</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>external</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendVersion'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>2.0</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </tpm>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <redirdev supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='bus'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>usb</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </redirdev>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <channel supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </channel>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <crypto supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendModel'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>builtin</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </crypto>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <interface supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='backendType'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>default</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>passt</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </interface>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <panic supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='model'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>isa</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>hyperv</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </panic>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <console supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='type'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>null</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vc</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pty</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dev</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>file</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>pipe</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stdio</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>udp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tcp</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>unix</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>qemu-vdagent</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>dbus</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </console>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </devices>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   <features>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <gic supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <vmcoreinfo supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <genid supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backingStoreInput supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <backup supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <async-teardown supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <ps2 supported='yes'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sev supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <sgx supported='no'/>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <hyperv supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='features'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>relaxed</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vapic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>spinlocks</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vpindex</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>runtime</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>synic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>stimer</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reset</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>vendor_id</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>frequencies</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>reenlightenment</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tlbflush</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>ipi</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>avic</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>emsr_bitmap</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>xmm_input</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <spinlocks>4095</spinlocks>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <stimer_direct>on</stimer_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_direct>on</tlbflush_direct>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <tlbflush_extended>on</tlbflush_extended>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </defaults>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </hyperv>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     <launchSecurity supported='yes'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       <enum name='sectype'>
Dec 03 01:41:46 compute-0 nova_compute[351485]:         <value>tdx</value>
Dec 03 01:41:46 compute-0 nova_compute[351485]:       </enum>
Dec 03 01:41:46 compute-0 nova_compute[351485]:     </launchSecurity>
Dec 03 01:41:46 compute-0 nova_compute[351485]:   </features>
Dec 03 01:41:46 compute-0 nova_compute[351485]: </domainCapabilities>
Dec 03 01:41:46 compute-0 nova_compute[351485]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
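The block above is the libvirt <domainCapabilities> document that nova fetches at startup to learn which device models and features the hypervisor supports. A minimal sketch of retrieving and querying the same document with the libvirt-python binding (a read-only qemu:///system connection is assumed):

    # Fetch the host's <domainCapabilities> XML, as nova's
    # _get_domain_capabilities does, and list the supported video models.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly('qemu:///system')
    caps_xml = conn.getDomainCapabilities(None, None, None, None, 0)
    root = ET.fromstring(caps_xml)

    video = [v.text for v in
             root.findall("./devices/video/enum[@name='modelType']/value")]
    print(video)  # per the dump above: vga, cirrus, virtio, none, bochs, ramfb
    conn.close()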
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.859 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.860 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.860 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.861 351492 INFO nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Secure Boot support detected
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.866 351492 INFO nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.866 351492 INFO nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.890 351492 DEBUG nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.918 351492 INFO nova.virt.node [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Determined node identity 107397d2-51bc-4a03-bce4-7cd69319cf05 from /var/lib/nova/compute_id
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.943 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Verified node 107397d2-51bc-4a03-bce4-7cd69319cf05 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 03 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.972 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 03 01:41:47 compute-0 sudo[352108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:41:47 compute-0 sudo[352108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.077 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.078 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.079 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
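The Acquiring/acquired/released trio above is oslo.concurrency's lock decorator logging around ResourceTracker.clean_compute_node_cache. A minimal sketch of the same pattern (in-process lock; the function body here is illustrative only):

    # oslo.concurrency emits the same DEBUG acquire/release lines seen above
    # whenever a @synchronized-decorated function runs.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Critical section: executes with the "compute_resources" lock held.
        pass

    clean_compute_node_cache()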
Dec 03 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.079 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.080 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:41:47 compute-0 ceph-mon[192821]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.459916569 +0000 UTC m=+0.066659242 container create aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:41:47 compute-0 systemd[1]: Started libpod-conmon-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope.
Dec 03 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.433593074 +0000 UTC m=+0.040335767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:41:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.550956381 +0000 UTC m=+0.157699084 container init aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:41:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.561772313 +0000 UTC m=+0.168515006 container start aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:41:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907357913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:41:47 compute-0 unruffled_hugle[352209]: 167 167
Dec 03 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.568576463 +0000 UTC m=+0.175319156 container attach aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:41:47 compute-0 conmon[352209]: conmon aa4a055fd92c25a1d7cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope/container/memory.events
Dec 03 01:41:47 compute-0 systemd[1]: libpod-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope: Deactivated successfully.
Dec 03 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.570629311 +0000 UTC m=+0.177372004 container died aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.598 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
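nova sizes Ceph-backed storage by shelling out to ceph df, as the Running cmd / CMD returned pair above shows. A sketch of the same probe using only the stdlib; the stats/total_avail_bytes field names follow the ceph df JSON schema and should be verified against the local Ceph release:

    # Replicate nova's "ceph df --format=json" probe and report free space.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)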
Dec 03 01:41:47 compute-0 podman[352208]: 2025-12-03 01:41:47.600569727 +0000 UTC m=+0.094591793 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f7ab8a4a709445e8bc0d7aedcfb2c18cdf835d4085bb74fa3636f461159bc55-merged.mount: Deactivated successfully.
Dec 03 01:41:47 compute-0 podman[352205]: 2025-12-03 01:41:47.617033627 +0000 UTC m=+0.109817608 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.622622003 +0000 UTC m=+0.229364676 container remove aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:41:47 compute-0 systemd[1]: libpod-conmon-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope: Deactivated successfully.
Dec 03 01:41:47 compute-0 podman[352274]: 2025-12-03 01:41:47.859193449 +0000 UTC m=+0.062951939 container create f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:41:47 compute-0 systemd[1]: Started libpod-conmon-f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626.scope.
Dec 03 01:41:47 compute-0 podman[352274]: 2025-12-03 01:41:47.835652791 +0000 UTC m=+0.039411301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:41:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:47 compute-0 podman[352274]: 2025-12-03 01:41:47.984605031 +0000 UTC m=+0.188363521 container init f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:41:48 compute-0 podman[352274]: 2025-12-03 01:41:48.006017269 +0000 UTC m=+0.209775759 container start f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:41:48 compute-0 podman[352274]: 2025-12-03 01:41:48.011096681 +0000 UTC m=+0.214855181 container attach f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.029 351492 WARNING nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.030 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4537MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
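The pci_devices payload in the resource view above is plain JSON, so it can be summarized directly; a sketch counting devices per vendor (1af4 is virtio, 8086 is Intel), with only two entries excerpted for brevity:

    # Tally nova's reported PCI devices by vendor_id.
    from collections import Counter

    pci_devices = [
        {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"},
        {"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"},
        # ... remaining entries exactly as logged above ...
    ]
    print(Counter(d["vendor_id"] for d in pci_devices))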
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.031 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.031 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:41:48 compute-0 sshd-session[352136]: Invalid user autrede from 173.249.50.59 port 47176
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.229 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.230 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:41:48 compute-0 sshd-session[352136]: Received disconnect from 173.249.50.59 port 47176:11: Bye Bye [preauth]
Dec 03 01:41:48 compute-0 sshd-session[352136]: Disconnected from invalid user autrede 173.249.50.59 port 47176 [preauth]
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.356 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.387 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.387 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.412 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.430 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 01:41:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3907357913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.459 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:41:48 compute-0 trusting_banzai[352290]: {
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:     "0": [
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:         {
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "devices": [
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "/dev/loop3"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             ],
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_name": "ceph_lv0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_size": "21470642176",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "name": "ceph_lv0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "tags": {
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cluster_name": "ceph",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.crush_device_class": "",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.encrypted": "0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osd_id": "0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.type": "block",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.vdo": "0"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             },
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "type": "block",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "vg_name": "ceph_vg0"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:         }
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:     ],
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:     "1": [
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:         {
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "devices": [
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "/dev/loop4"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             ],
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_name": "ceph_lv1",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_size": "21470642176",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "name": "ceph_lv1",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "tags": {
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cluster_name": "ceph",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.crush_device_class": "",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.encrypted": "0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osd_id": "1",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.type": "block",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.vdo": "0"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             },
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "type": "block",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "vg_name": "ceph_vg1"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:         }
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:     ],
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:     "2": [
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:         {
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "devices": [
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "/dev/loop5"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             ],
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_name": "ceph_lv2",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_size": "21470642176",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "name": "ceph_lv2",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "tags": {
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.cluster_name": "ceph",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.crush_device_class": "",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.encrypted": "0",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osd_id": "2",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.type": "block",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:                 "ceph.vdo": "0"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             },
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "type": "block",
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:             "vg_name": "ceph_vg2"
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:         }
Dec 03 01:41:48 compute-0 trusting_banzai[352290]:     ]
Dec 03 01:41:48 compute-0 trusting_banzai[352290]: }
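The JSON emitted by trusting_banzai is cephadm's ceph-volume lvm list output (requested by the sudo command at 01:41:47): a map of OSD id to logical-volume metadata. A sketch reducing it to an OSD-to-device table; lvm_list.json is a hypothetical capture of the payload above:

    # Summarise "ceph-volume lvm list --format json" into osd -> LV/device.
    import json

    with open('lvm_list.json') as f:  # hypothetical file holding the JSON above
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv['lv_size']) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on "
                  f"{','.join(lv['devices'])} ({size_gib:.0f} GiB)")
    # per the dump above: osd.0 -> /dev/loop3, osd.1 -> /dev/loop4,
    # osd.2 -> /dev/loop5, each ~20 GiB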
Dec 03 01:41:48 compute-0 systemd[1]: libpod-f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626.scope: Deactivated successfully.
Dec 03 01:41:48 compute-0 podman[352319]: 2025-12-03 01:41:48.908914302 +0000 UTC m=+0.049578796 container died f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:41:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:41:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938662527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:41:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0-merged.mount: Deactivated successfully.
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.973 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.985 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.985 351492 INFO nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] kernel doesn't support AMD SEV
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.987 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.988 351492 DEBUG nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 01:41:48 compute-0 podman[352319]: 2025-12-03 01:41:48.998559914 +0000 UTC m=+0.139224378 container remove f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:41:49 compute-0 systemd[1]: libpod-conmon-f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626.scope: Deactivated successfully.
Dec 03 01:41:49 compute-0 sudo[352108]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.056 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updated inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.056 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.056 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
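Placement derives schedulable capacity from each inventory record as capacity = (total - reserved) * allocation_ratio. Applying that to the inventory just written for this provider gives a quick sanity check:

    # Effective capacity for provider 107397d2-51bc-4a03-bce4-7cd69319cf05,
    # using the inventory values from the log lines above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1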
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.148 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.175 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.176 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.176 351492 DEBUG nova.service [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 03 01:41:49 compute-0 sudo[352337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:49 compute-0 sudo[352337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:49 compute-0 sudo[352337]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.317 351492 DEBUG nova.service [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 03 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.318 351492 DEBUG nova.servicegroup.drivers.db [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 03 01:41:49 compute-0 sudo[352362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:41:49 compute-0 sudo[352362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:49 compute-0 sudo[352362]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:49 compute-0 sudo[352387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:49 compute-0 sudo[352387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:49 compute-0 sudo[352387]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:49 compute-0 ceph-mon[192821]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:49 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3938662527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:41:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:49 compute-0 sudo[352412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:41:49 compute-0 sudo[352412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.151279503 +0000 UTC m=+0.085334073 container create e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.115937047 +0000 UTC m=+0.049991657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:41:50 compute-0 systemd[1]: Started libpod-conmon-e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e.scope.
Dec 03 01:41:50 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.307809245 +0000 UTC m=+0.241863855 container init e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.325018565 +0000 UTC m=+0.259073145 container start e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.331904807 +0000 UTC m=+0.265959367 container attach e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:41:50 compute-0 silly_pascal[352493]: 167 167
Dec 03 01:41:50 compute-0 systemd[1]: libpod-e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e.scope: Deactivated successfully.
Dec 03 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.337805512 +0000 UTC m=+0.271860082 container died e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-249719f639ffb87f5d005793d1001d631ca8b86f5f4fbafceadbd612adcb91d7-merged.mount: Deactivated successfully.
Dec 03 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.410193604 +0000 UTC m=+0.344248164 container remove e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:41:50 compute-0 systemd[1]: libpod-conmon-e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e.scope: Deactivated successfully.
Dec 03 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.660776611 +0000 UTC m=+0.076816296 container create 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.62565667 +0000 UTC m=+0.041696425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:41:50 compute-0 systemd[1]: Started libpod-conmon-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope.
Dec 03 01:41:50 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.834923364 +0000 UTC m=+0.250963079 container init 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.859210022 +0000 UTC m=+0.275249737 container start 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.865371004 +0000 UTC m=+0.281410689 container attach 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:41:51 compute-0 ceph-mon[192821]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:51 compute-0 sshd-session[352540]: Accepted publickey for zuul from 192.168.122.30 port 46552 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:41:51 compute-0 systemd-logind[800]: New session 57 of user zuul.
Dec 03 01:41:51 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec 03 01:41:51 compute-0 sshd-session[352540]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]: {
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "osd_id": 2,
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "type": "bluestore"
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:     },
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "osd_id": 1,
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "type": "bluestore"
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:     },
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "osd_id": 0,
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:         "type": "bluestore"
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]:     }
Dec 03 01:41:52 compute-0 vigilant_feynman[352534]: }
Dec 03 01:41:52 compute-0 systemd[1]: libpod-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope: Deactivated successfully.
Dec 03 01:41:52 compute-0 systemd[1]: libpod-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope: Consumed 1.287s CPU time.
Dec 03 01:41:52 compute-0 podman[352518]: 2025-12-03 01:41:52.15476514 +0000 UTC m=+1.570804845 container died 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 01:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17-merged.mount: Deactivated successfully.
Dec 03 01:41:52 compute-0 podman[352518]: 2025-12-03 01:41:52.253683042 +0000 UTC m=+1.669722757 container remove 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:41:52 compute-0 systemd[1]: libpod-conmon-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope: Deactivated successfully.
Dec 03 01:41:52 compute-0 sudo[352412]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:41:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:41:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:41:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:41:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 19d465ef-3097-4daf-bc4e-ca36496b51ee does not exist
Dec 03 01:41:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9f74c3a4-277c-46b0-a00f-099ea3520360 does not exist
Dec 03 01:41:52 compute-0 sudo[352634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:41:52 compute-0 sudo[352634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:52 compute-0 sudo[352634]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:52 compute-0 sudo[352679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:41:52 compute-0 sudo[352679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:41:52 compute-0 sudo[352679]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:53 compute-0 python3.9[352781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:41:53 compute-0 ceph-mon[192821]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:41:53 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:41:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:41:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:55 compute-0 sudo[352936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dncllrhgfjdokvyrqmoocxurngfaszfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726114.1205194-36-7185524577524/AnsiballZ_systemd_service.py'
Dec 03 01:41:55 compute-0 sudo[352936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:41:55 compute-0 ceph-mon[192821]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:55 compute-0 python3.9[352938]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:41:55 compute-0 systemd[1]: Reloading.
Dec 03 01:41:55 compute-0 systemd-rc-local-generator[352963]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:41:55 compute-0 systemd-sysv-generator[352969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:41:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:55 compute-0 sudo[352936]: pam_unix(sudo:session): session closed for user root
Dec 03 01:41:56 compute-0 podman[353080]: 2025-12-03 01:41:56.864236109 +0000 UTC m=+0.101401223 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:41:56 compute-0 podman[353094]: 2025-12-03 01:41:56.892062776 +0000 UTC m=+0.119631682 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 03 01:41:56 compute-0 podman[353092]: 2025-12-03 01:41:56.909858813 +0000 UTC m=+0.146858142 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41)
Dec 03 01:41:56 compute-0 podman[353097]: 2025-12-03 01:41:56.930190631 +0000 UTC m=+0.151786060 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:41:57 compute-0 python3.9[353199]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:41:57 compute-0 network[353223]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:41:57 compute-0 network[353224]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:41:57 compute-0 network[353225]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:41:57 compute-0 ceph-mon[192821]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:41:58 compute-0 podman[353232]: 2025-12-03 01:41:58.444085716 +0000 UTC m=+0.101253649 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:41:58 compute-0 podman[353234]: 2025-12-03 01:41:58.473839657 +0000 UTC m=+0.119891479 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:41:59 compute-0 ceph-mon[192821]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:41:59.601 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:41:59.601 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:41:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:41:59 compute-0 podman[158098]: time="2025-12-03T01:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
Dec 03 01:42:00 compute-0 sshd-session[353310]: Received disconnect from 34.66.72.251 port 53318:11: Bye Bye [preauth]
Dec 03 01:42:00 compute-0 sshd-session[353310]: Disconnected from authenticating user root 34.66.72.251 port 53318 [preauth]
Dec 03 01:42:01 compute-0 ceph-mon[192821]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:42:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:03 compute-0 ceph-mon[192821]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:04 compute-0 sudo[353537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxvilkqhngfeqanbtfddroutnbfinkih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726123.4950418-55-77359125728869/AnsiballZ_systemd_service.py'
Dec 03 01:42:04 compute-0 sudo[353537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:04 compute-0 python3.9[353539]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:42:04 compute-0 sudo[353537]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:04 compute-0 podman[353565]: 2025-12-03 01:42:04.877404892 +0000 UTC m=+0.130213227 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm)
Dec 03 01:42:05 compute-0 ceph-mon[192821]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:05 compute-0 sudo[353708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxriattybqrehpkwuywpmwwfnuwbkqjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726124.8877077-65-7732512148990/AnsiballZ_file.py'
Dec 03 01:42:05 compute-0 sudo[353708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:05 compute-0 python3.9[353710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:05 compute-0 sudo[353708]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:06 compute-0 sudo[353860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvyxhxzkmssacrqexyrtnrpxtzxkekz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726126.1147618-73-279404451981239/AnsiballZ_file.py'
Dec 03 01:42:06 compute-0 sudo[353860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:06 compute-0 python3.9[353862]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:06 compute-0 sudo[353860]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:07 compute-0 ceph-mon[192821]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:08 compute-0 sudo[354014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hceygenwrfrbstcpwncjsxfhsysdzyym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726127.3002973-82-50124727322461/AnsiballZ_command.py'
Dec 03 01:42:08 compute-0 sudo[354014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:08 compute-0 python3.9[354016]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:42:08 compute-0 sshd-session[353939]: Invalid user bounce from 80.253.31.232 port 42298
Dec 03 01:42:08 compute-0 sudo[354014]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:08 compute-0 sshd-session[353939]: Received disconnect from 80.253.31.232 port 42298:11: Bye Bye [preauth]
Dec 03 01:42:08 compute-0 sshd-session[353939]: Disconnected from invalid user bounce 80.253.31.232 port 42298 [preauth]
Dec 03 01:42:09 compute-0 python3.9[354170]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:42:09 compute-0 ceph-mon[192821]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:10 compute-0 sudo[354320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjqetaakrgnvxjwjvufwiqlgufcpiqoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726129.8268325-100-107058622238785/AnsiballZ_systemd_service.py'
Dec 03 01:42:10 compute-0 sudo[354320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:10 compute-0 sshd-session[354152]: Invalid user foundry from 103.146.202.174 port 39882
Dec 03 01:42:10 compute-0 python3.9[354322]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:42:10 compute-0 systemd[1]: Reloading.
Dec 03 01:42:10 compute-0 sshd-session[354152]: Received disconnect from 103.146.202.174 port 39882:11: Bye Bye [preauth]
Dec 03 01:42:10 compute-0 sshd-session[354152]: Disconnected from invalid user foundry 103.146.202.174 port 39882 [preauth]
Dec 03 01:42:10 compute-0 systemd-sysv-generator[354351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:42:10 compute-0 systemd-rc-local-generator[354348]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:42:11 compute-0 sudo[354320]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:11 compute-0 ceph-mon[192821]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:12 compute-0 sudo[354508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxhstssxufxgsnjwqokhkhfndfpcuiua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726131.6625807-108-120179502486890/AnsiballZ_command.py'
Dec 03 01:42:12 compute-0 sudo[354508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:12 compute-0 python3.9[354510]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:42:12 compute-0 sudo[354508]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:12 compute-0 ceph-mon[192821]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:13 compute-0 sudo[354663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rspeitqjgdqxhxjatzoilbyqznghrvhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726132.8590448-117-194483767858967/AnsiballZ_file.py'
Dec 03 01:42:13 compute-0 sudo[354663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:13 compute-0 sshd-session[354611]: Invalid user user from 78.128.112.74 port 59496
Dec 03 01:42:13 compute-0 python3.9[354665]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:42:13 compute-0 sudo[354663]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:13 compute-0 sshd-session[354611]: Connection closed by invalid user user 78.128.112.74 port 59496 [preauth]
Dec 03 01:42:14 compute-0 ceph-mon[192821]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:14 compute-0 python3.9[354815]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:42:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:16 compute-0 python3.9[354967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:16 compute-0 ceph-mon[192821]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:16 compute-0 python3.9[355043]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:42:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:17 compute-0 podman[355167]: 2025-12-03 01:42:17.767883544 +0000 UTC m=+0.109155219 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:42:17 compute-0 sudo[355228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxobibailzsyeyxgkvrxvuavcknbdgrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726137.0018415-145-124045283060465/AnsiballZ_group.py'
Dec 03 01:42:17 compute-0 sudo[355228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:17 compute-0 podman[355168]: 2025-12-03 01:42:17.810917966 +0000 UTC m=+0.147194202 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 03 01:42:17 compute-0 python3.9[355235]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 03 01:42:18 compute-0 sudo[355228]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:18 compute-0 ceph-mon[192821]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:19 compute-0 sudo[355386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvtphwrmtziatstrbzrjilgjjtcintwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726138.4412642-156-5414136354391/AnsiballZ_getent.py'
Dec 03 01:42:19 compute-0 sudo[355386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:19 compute-0 python3.9[355388]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 03 01:42:19 compute-0 sudo[355386]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:20 compute-0 ceph-mon[192821]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:20 compute-0 python3.9[355540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:21 compute-0 python3.9[355616]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:22 compute-0 ceph-mon[192821]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:22 compute-0 python3.9[355766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:23 compute-0 python3.9[355842]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:24 compute-0 python3.9[355992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:24 compute-0 ceph-mon[192821]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:25 compute-0 python3.9[356068]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:26 compute-0 python3.9[356218]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:42:27 compute-0 ceph-mon[192821]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:27 compute-0 podman[356345]: 2025-12-03 01:42:27.69377472 +0000 UTC m=+0.109392025 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 03 01:42:27 compute-0 podman[356344]: 2025-12-03 01:42:27.696837326 +0000 UTC m=+0.127052189 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
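The node_exporter container above is started with `--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service`, so systemd metrics are only exported for units matching that pattern. A quick check of what the filter admits, assuming node_exporter's documented behavior of anchoring include/exclude patterns:

```python
# Quick check of the unit-include filter passed to node_exporter above.
# Anchoring (^...$) is assumed per node_exporter's documented pattern handling.
import re

pat = re.compile(r"^(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service$")
for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "sshd.service"]:
    print(unit, "->", bool(pat.match(unit)))
# sshd.service is the only one excluded
```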
Dec 03 01:42:27 compute-0 podman[356346]: 2025-12-03 01:42:27.714110948 +0000 UTC m=+0.130811014 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm)
Dec 03 01:42:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:27 compute-0 podman[356352]: 2025-12-03 01:42:27.756898983 +0000 UTC m=+0.160277337 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:42:27 compute-0 python3.9[356440]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:42:28
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:42:28 compute-0 podman[356583]: 2025-12-03 01:42:28.816497181 +0000 UTC m=+0.125741742 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:42:28 compute-0 podman[356584]: 2025-12-03 01:42:28.833950718 +0000 UTC m=+0.137840280 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 03 01:42:28 compute-0 python3.9[356640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:29 compute-0 ceph-mon[192821]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:29 compute-0 podman[158098]: time="2025-12-03T01:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
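The two GET lines above are the podman_exporter polling the libpod REST API over the podman socket (the `CONTAINER_HOST=unix:///run/podman/podman.sock` setting in its config_data). A minimal sketch reproducing the first request with raw HTTP over the UNIX socket, assuming no third-party client library:

```python
# Sketch: query the same libpod endpoint over the podman socket,
# raw HTTP over AF_UNIX. Socket path taken from the exporter config above.
import socket

req = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
       b"Host: d\r\nConnection: close\r\n\r\n")
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/podman/podman.sock")
    s.sendall(req)
    resp = b""
    while chunk := s.recv(65536):
        resp += chunk
print(resp.split(b"\r\n\r\n", 1)[0].decode())  # status line + headers
```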
Dec 03 01:42:30 compute-0 python3.9[356724]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
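Note the `mode=420` in this invocation versus the `mode=0640` logged earlier: Ansible logs integer file modes in decimal, and an unquoted `0644` in YAML is parsed as an octal integer, so it appears here as 420; modes passed as quoted strings ("0640") survive verbatim. A quick check:

```python
# Ansible logs integer file modes in decimal: 420 == 0o644.
# Quoted string modes ("0640") are logged as-is.
print(oct(420))   # 0o644
print(0o644)      # 420
```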
Dec 03 01:42:31 compute-0 ceph-mon[192821]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:42:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:42:31 compute-0 openstack_network_exporter[160250]: 
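These exporter errors recur on every poll: before issuing appctl-style calls, the openstack_network_exporter must locate daemon control sockets, and on this node ovn-northd is not running, so the PID lookup fails. A loose sketch of that kind of probe, assuming the conventional `<daemon>.<pid>.ctl` socket naming and default run directories (not taken from the exporter's source):

```python
# Sketch: probe for OVS/OVN control sockets the way an appctl-style
# client must before sending commands. Paths and naming are assumptions.
import glob
from typing import Optional

def find_ctl(run_dir: str, daemon: str) -> Optional[str]:
    # Control sockets are conventionally named <daemon>.<pid>.ctl
    matches = glob.glob(f"{run_dir}/{daemon}.*.ctl")
    return matches[0] if matches else None

for run_dir, daemon in [("/run/openvswitch", "ovs-vswitchd"),
                        ("/run/ovn", "ovn-northd")]:
    sock = find_ctl(run_dir, daemon)
    print(daemon, "->", sock or "no control socket files found")
```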
Dec 03 01:42:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:31 compute-0 python3.9[356874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:32 compute-0 python3.9[356950]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:33 compute-0 ceph-mon[192821]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:33 compute-0 nova_compute[351485]: 2025-12-03 01:42:33.322 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:33 compute-0 nova_compute[351485]: 2025-12-03 01:42:33.344 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:33 compute-0 python3.9[357100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:34 compute-0 python3.9[357176]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:35 compute-0 ceph-mon[192821]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:35 compute-0 podman[357300]: 2025-12-03 01:42:35.373317366 +0000 UTC m=+0.117611525 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, container_name=kepler, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 01:42:35 compute-0 python3.9[357343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:36 compute-0 python3.9[357422]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:37 compute-0 ceph-mon[192821]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:37 compute-0 python3.9[357572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 python3.9[357648]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
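Each pg_autoscaler line above reports a fractional pg target derived from the pool's share of capacity times its bias, then quantizes it to a power of two (cephfs.cephfs.meta: target 0.0006 with bias 4.0 quantizes to 16 against a current 32). A loose sketch of that quantization under stated assumptions; the real autoscaler also applies per-pool minimums (pg_num_min, assumed here) and its documented ~3x change threshold before acting:

```python
# Loose sketch of the pg_autoscaler arithmetic visible above.
# pg_num_min and the 3x threshold are assumptions from the Ceph docs,
# not read from this cluster.

def quantize_pow2(x: float, minimum: int = 1) -> int:
    target = max(x, minimum)
    p = 1
    while p < target:
        p *= 2
    return p

def should_adjust(current: int, quantized: int, factor: float = 3.0) -> bool:
    return quantized > current * factor or quantized * factor < current

print(quantize_pow2(0.00061, minimum=16))        # 16, cf. cephfs.cephfs.meta
print(should_adjust(current=32, quantized=16))   # False: no change applied
```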
Dec 03 01:42:39 compute-0 ceph-mon[192821]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:39 compute-0 python3.9[357798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:39 compute-0 python3.9[357874]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:40 compute-0 python3.9[358024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:41 compute-0 ceph-mon[192821]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:41 compute-0 python3.9[358100]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:42 compute-0 python3.9[358250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:43 compute-0 ceph-mon[192821]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:43 compute-0 python3.9[358326]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:44 compute-0 python3.9[358476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:45 compute-0 python3.9[358552]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:45 compute-0 ceph-mon[192821]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.579 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.581 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.581 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.582 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:42:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.597 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.598 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.600 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.601 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.602 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.603 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.636 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.637 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.638 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:42:46 compute-0 python3.9[358722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:42:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084696605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.183 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
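The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` command nova runs here is how the libvirt driver sizes RBD-backed disk for the resource tracker; each invocation surfaces on the mon as the client.openstack df dispatch logged around it. A minimal reproduction of the probe, assuming the same conf and client keyring are readable:

```python
# Minimal reproduction of the capacity probe nova runs above; assumes
# /etc/ceph/ceph.conf and the client.openstack keyring are readable.
import json, subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout

stats = json.loads(out)["stats"]
# total_avail_bytes / total_bytes mirror the "60 GiB / 60 GiB avail"
# figures in the mon's pgmap lines.
print(stats["total_avail_bytes"], "/", stats["total_bytes"])
```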
Dec 03 01:42:46 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4084696605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.750 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.753 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.754 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.755 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:42:46 compute-0 python3.9[358800]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.848 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.849 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.884 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:42:47 compute-0 ceph-mon[192821]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:42:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1244810326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.371 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.385 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.402 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.407 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
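The inventory dict logged at 01:42:47 is what placement schedules against: per resource class, capacity is (total - reserved) * allocation_ratio, which is why this node advertises 32 schedulable VCPUs on 8 physical cores. A worked check against the logged values:

```python
# Worked check of the placement capacity implied by the inventory above:
# capacity = (total - reserved) * allocation_ratio. Values from the log.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 53.1
```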
Dec 03 01:42:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:47 compute-0 python3.9[358972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:47 compute-0 podman[358974]: 2025-12-03 01:42:47.977679838 +0000 UTC m=+0.109067496 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 01:42:47 compute-0 podman[358976]: 2025-12-03 01:42:47.994855348 +0000 UTC m=+0.125149776 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:42:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1244810326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:42:48 compute-0 python3.9[359089]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:49 compute-0 ceph-mon[192821]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:49 compute-0 python3.9[359239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:51 compute-0 ceph-mon[192821]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:51 compute-0 python3.9[359316]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:52 compute-0 python3.9[359466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:52 compute-0 sudo[359469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:42:52 compute-0 sudo[359469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:52 compute-0 sudo[359469]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:52 compute-0 sudo[359494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:42:52 compute-0 sudo[359494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:52 compute-0 sudo[359494]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:53 compute-0 sudo[359519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:42:53 compute-0 sudo[359519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:53 compute-0 sudo[359519]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:53 compute-0 sudo[359544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:42:53 compute-0 sudo[359544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:53 compute-0 ceph-mon[192821]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:53 compute-0 python3.9[359655]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:53 compute-0 sudo[359544]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:42:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d02b211b-125d-4427-8a61-5db8b9d40e5a does not exist
Dec 03 01:42:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0ad94931-b458-4247-b136-f259396f8504 does not exist
Dec 03 01:42:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ac0d9c19-5a8b-4f82-960e-0d9ce869c3d9 does not exist
Dec 03 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:42:54 compute-0 sudo[359698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:42:54 compute-0 sudo[359698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:54 compute-0 sudo[359698]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:54 compute-0 sudo[359750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:42:54 compute-0 sudo[359750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:54 compute-0 sudo[359750]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:42:54 compute-0 sudo[359800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:42:54 compute-0 sudo[359800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:54 compute-0 sudo[359800]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:54 compute-0 sudo[359848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:42:54 compute-0 sudo[359848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:54 compute-0 sshd-session[359695]: Received disconnect from 173.249.50.59 port 45432:11: Bye Bye [preauth]
Dec 03 01:42:54 compute-0 sshd-session[359695]: Disconnected from authenticating user root 173.249.50.59 port 45432 [preauth]
Dec 03 01:42:54 compute-0 python3.9[359923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.061933341 +0000 UTC m=+0.084028787 container create cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.024260959 +0000 UTC m=+0.046356415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:42:55 compute-0 systemd[1]: Started libpod-conmon-cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a.scope.
Dec 03 01:42:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.205257824 +0000 UTC m=+0.227353320 container init cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.215720686 +0000 UTC m=+0.237816092 container start cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.220019866 +0000 UTC m=+0.242115322 container attach cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:42:55 compute-0 gallant_ptolemy[360032]: 167 167
Dec 03 01:42:55 compute-0 systemd[1]: libpod-cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a.scope: Deactivated successfully.
Dec 03 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.22875257 +0000 UTC m=+0.250847986 container died cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-673e39b706c007f5833a829b53e28a13ebce7dd4d94780bfcb5864f84be60c0b-merged.mount: Deactivated successfully.
Dec 03 01:42:55 compute-0 ceph-mon[192821]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.308835746 +0000 UTC m=+0.330931182 container remove cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:42:55 compute-0 systemd[1]: libpod-conmon-cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a.scope: Deactivated successfully.
Dec 03 01:42:55 compute-0 python3.9[360056]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.618867724 +0000 UTC m=+0.094663405 container create 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.579208486 +0000 UTC m=+0.055004237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:42:55 compute-0 systemd[1]: Started libpod-conmon-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope.
Dec 03 01:42:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.808305954 +0000 UTC m=+0.284101695 container init 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.833064225 +0000 UTC m=+0.308859916 container start 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.847429126 +0000 UTC m=+0.323224817 container attach 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:42:56 compute-0 sudo[360247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wblrvcsczeokbdwngbleyvpgsrgjsyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726175.7336764-393-171174584525791/AnsiballZ_file.py'
Dec 03 01:42:56 compute-0 sudo[360247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:56 compute-0 python3.9[360249]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:56 compute-0 sudo[360247]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:57 compute-0 happy_sanderson[360140]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:42:57 compute-0 happy_sanderson[360140]: --> relative data size: 1.0
Dec 03 01:42:57 compute-0 happy_sanderson[360140]: --> All data devices are unavailable
Dec 03 01:42:57 compute-0 podman[360081]: 2025-12-03 01:42:57.148866997 +0000 UTC m=+1.624662688 container died 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:42:57 compute-0 systemd[1]: libpod-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope: Deactivated successfully.
Dec 03 01:42:57 compute-0 systemd[1]: libpod-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope: Consumed 1.250s CPU time.
Dec 03 01:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05-merged.mount: Deactivated successfully.
Dec 03 01:42:57 compute-0 podman[360081]: 2025-12-03 01:42:57.246510754 +0000 UTC m=+1.722306405 container remove 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:42:57 compute-0 systemd[1]: libpod-conmon-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope: Deactivated successfully.
Dec 03 01:42:57 compute-0 sudo[359848]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:57 compute-0 ceph-mon[192821]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:57 compute-0 sudo[360410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:42:57 compute-0 sudo[360410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:57 compute-0 sudo[360410]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:57 compute-0 sudo[360461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itvdokjnjfyyuxabuemamyvlgeqfnnfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726176.8938112-401-86480450492473/AnsiballZ_file.py'
Dec 03 01:42:57 compute-0 sudo[360461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:57 compute-0 sudo[360462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:42:57 compute-0 sudo[360462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:57 compute-0 sudo[360462]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:57 compute-0 sudo[360489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:42:57 compute-0 sudo[360489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:57 compute-0 sudo[360489]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:57 compute-0 python3.9[360472]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:42:57 compute-0 sudo[360461]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:42:57 compute-0 sudo[360514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:42:57 compute-0 sudo[360514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:42:57 compute-0 podman[360535]: 2025-12-03 01:42:57.864320196 +0000 UTC m=+0.110816776 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:42:57 compute-0 podman[360545]: 2025-12-03 01:42:57.864319416 +0000 UTC m=+0.094679835 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 01:42:57 compute-0 podman[360542]: 2025-12-03 01:42:57.88272369 +0000 UTC m=+0.125109385 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:42:57 compute-0 podman[360577]: 2025-12-03 01:42:57.935411671 +0000 UTC m=+0.120890637 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 03 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.269746947 +0000 UTC m=+0.086069474 container create 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:42:58 compute-0 systemd[1]: Started libpod-conmon-3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330.scope.
Dec 03 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.231319714 +0000 UTC m=+0.047642291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:42:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.394733058 +0000 UTC m=+0.211055625 container init 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.413691957 +0000 UTC m=+0.230014474 container start 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.421350481 +0000 UTC m=+0.237672998 container attach 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:42:58 compute-0 reverent_merkle[360794]: 167 167
Dec 03 01:42:58 compute-0 systemd[1]: libpod-3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330.scope: Deactivated successfully.
Dec 03 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.425327922 +0000 UTC m=+0.241650469 container died 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bdebad2f8129f78f9181e9aa367ea6b5097ffde8fead8b3e98a13d29c8f164e-merged.mount: Deactivated successfully.
Dec 03 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.506413386 +0000 UTC m=+0.322735883 container remove 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:42:58 compute-0 systemd[1]: libpod-conmon-3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330.scope: Deactivated successfully.
Dec 03 01:42:58 compute-0 sudo[360849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofdrfardbgzezmskbxasuqiexffewaba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726177.9850924-409-123638505737105/AnsiballZ_file.py'
Dec 03 01:42:58 compute-0 sudo[360849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.780658094 +0000 UTC m=+0.103820180 container create 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:42:58 compute-0 python3.9[360851]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:42:58 compute-0 sudo[360849]: pam_unix(sudo:session): session closed for user root
Dec 03 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.745463562 +0000 UTC m=+0.068625718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:42:58 compute-0 systemd[1]: Started libpod-conmon-3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223.scope.
Dec 03 01:42:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.953216473 +0000 UTC m=+0.276378579 container init 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.968085298 +0000 UTC m=+0.291247394 container start 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.983233691 +0000 UTC m=+0.306395787 container attach 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:42:59 compute-0 podman[360883]: 2025-12-03 01:42:59.059138891 +0000 UTC m=+0.159636439 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 01:42:59 compute-0 podman[360880]: 2025-12-03 01:42:59.070414596 +0000 UTC m=+0.170561244 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:42:59 compute-0 sshd-session[360517]: Invalid user kapsch from 14.103.201.7 port 51800
Dec 03 01:42:59 compute-0 ceph-mon[192821]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:42:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:42:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:42:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:42:59 compute-0 distracted_albattani[360879]: {
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:     "0": [
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:         {
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "devices": [
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "/dev/loop3"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             ],
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_name": "ceph_lv0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_size": "21470642176",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "name": "ceph_lv0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "tags": {
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cluster_name": "ceph",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.crush_device_class": "",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.encrypted": "0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osd_id": "0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.type": "block",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.vdo": "0"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             },
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "type": "block",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "vg_name": "ceph_vg0"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:         }
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:     ],
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:     "1": [
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:         {
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "devices": [
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "/dev/loop4"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             ],
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_name": "ceph_lv1",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_size": "21470642176",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "name": "ceph_lv1",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "tags": {
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cluster_name": "ceph",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.crush_device_class": "",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.encrypted": "0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osd_id": "1",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.type": "block",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.vdo": "0"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             },
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "type": "block",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "vg_name": "ceph_vg1"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:         }
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:     ],
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:     "2": [
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:         {
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "devices": [
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "/dev/loop5"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             ],
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_name": "ceph_lv2",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_size": "21470642176",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "name": "ceph_lv2",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "tags": {
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.cluster_name": "ceph",
Dec 03 01:42:59 compute-0 podman[158098]: time="2025-12-03T01:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.crush_device_class": "",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.encrypted": "0",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osd_id": "2",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.type": "block",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:                 "ceph.vdo": "0"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             },
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "type": "block",
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:             "vg_name": "ceph_vg2"
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:         }
Dec 03 01:42:59 compute-0 distracted_albattani[360879]:     ]
Dec 03 01:42:59 compute-0 distracted_albattani[360879]: }
Dec 03 01:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44154 "" "Go-http-client/1.1"
Dec 03 01:42:59 compute-0 sshd-session[360517]: Received disconnect from 14.103.201.7 port 51800:11: Bye Bye [preauth]
Dec 03 01:42:59 compute-0 sshd-session[360517]: Disconnected from invalid user kapsch 14.103.201.7 port 51800 [preauth]
Dec 03 01:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8551 "" "Go-http-client/1.1"
Dec 03 01:42:59 compute-0 systemd[1]: libpod-3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223.scope: Deactivated successfully.
Dec 03 01:42:59 compute-0 podman[360857]: 2025-12-03 01:42:59.781185024 +0000 UTC m=+1.104347110 container died 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c-merged.mount: Deactivated successfully.
Dec 03 01:42:59 compute-0 podman[360857]: 2025-12-03 01:42:59.869032777 +0000 UTC m=+1.192194863 container remove 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 01:42:59 compute-0 sudo[361079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiaclftntwhegkhjcudxtbvxlecrsoeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726179.3438056-417-80642793850614/AnsiballZ_systemd_service.py'
Dec 03 01:42:59 compute-0 systemd[1]: libpod-conmon-3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223.scope: Deactivated successfully.
Dec 03 01:42:59 compute-0 sudo[361079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:42:59 compute-0 sudo[360514]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:00 compute-0 sudo[361082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:43:00 compute-0 sudo[361082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:43:00 compute-0 sudo[361082]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:00 compute-0 sudo[361107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:43:00 compute-0 sudo[361107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:43:00 compute-0 sudo[361107]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:00 compute-0 python3.9[361081]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:43:00 compute-0 sudo[361079]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:00 compute-0 sudo[361132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:43:00 compute-0 sudo[361132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:43:00 compute-0 sudo[361132]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:00 compute-0 sudo[361162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:43:00 compute-0 sudo[361162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:43:00 compute-0 podman[361299]: 2025-12-03 01:43:00.989894556 +0000 UTC m=+0.091788125 container create 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:00.956302518 +0000 UTC m=+0.058196147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:43:01 compute-0 systemd[1]: Started libpod-conmon-0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60.scope.
Dec 03 01:43:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.154852342 +0000 UTC m=+0.256745921 container init 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 03 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.172088173 +0000 UTC m=+0.273981742 container start 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.178988916 +0000 UTC m=+0.280882475 container attach 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:43:01 compute-0 peaceful_einstein[361355]: 167 167
Dec 03 01:43:01 compute-0 systemd[1]: libpod-0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60.scope: Deactivated successfully.
Dec 03 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.184239253 +0000 UTC m=+0.286132812 container died 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-91cc5adbb56cba453cbf7f82ea1aa5aa62278f91215f88143b0c3ea89b5e06d5-merged.mount: Deactivated successfully.
Dec 03 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.254745631 +0000 UTC m=+0.356639160 container remove 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:43:01 compute-0 sudo[361404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbtflhbhsnzhezpgciukvoxxuwiorwxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726180.6944883-426-123571532459383/AnsiballZ_stat.py'
Dec 03 01:43:01 compute-0 sudo[361404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:01 compute-0 systemd[1]: libpod-conmon-0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60.scope: Deactivated successfully.
Dec 03 01:43:01 compute-0 ceph-mon[192821]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:43:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:43:01 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:43:01 compute-0 python3.9[361409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.516218363 +0000 UTC m=+0.079007377 container create 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Dec 03 01:43:01 compute-0 sudo[361404]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.476164574 +0000 UTC m=+0.038953638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:43:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:01 compute-0 systemd[1]: Started libpod-conmon-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope.
Dec 03 01:43:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.663207258 +0000 UTC m=+0.225996242 container init 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.683833494 +0000 UTC m=+0.246622488 container start 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.690688225 +0000 UTC m=+0.253477309 container attach 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:43:01 compute-0 sudo[361510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovaeqejtscgkbygftfctamispybzjfwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726180.6944883-426-123571532459383/AnsiballZ_file.py'
Dec 03 01:43:02 compute-0 sudo[361510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:02 compute-0 python3.9[361512]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:43:02 compute-0 sudo[361510]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:02 compute-0 sharp_solomon[361435]: {
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "osd_id": 2,
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "type": "bluestore"
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:     },
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "osd_id": 1,
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "type": "bluestore"
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:     },
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "osd_id": 0,
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:         "type": "bluestore"
Dec 03 01:43:02 compute-0 sharp_solomon[361435]:     }
Dec 03 01:43:02 compute-0 sharp_solomon[361435]: }
Dec 03 01:43:02 compute-0 systemd[1]: libpod-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope: Deactivated successfully.
Dec 03 01:43:02 compute-0 systemd[1]: libpod-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope: Consumed 1.156s CPU time.
Dec 03 01:43:02 compute-0 podman[361415]: 2025-12-03 01:43:02.836199953 +0000 UTC m=+1.398988997 container died 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 01:43:02 compute-0 sudo[361620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-codufcupaxuheynrjormyfwxbhxjeupz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726180.6944883-426-123571532459383/AnsiballZ_stat.py'
Dec 03 01:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56-merged.mount: Deactivated successfully.
Dec 03 01:43:02 compute-0 sudo[361620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:02 compute-0 podman[361415]: 2025-12-03 01:43:02.93922662 +0000 UTC m=+1.502015644 container remove 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 03 01:43:02 compute-0 systemd[1]: libpod-conmon-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope: Deactivated successfully.
Dec 03 01:43:02 compute-0 sudo[361162]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:43:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:43:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:43:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:43:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a3974e0-ce99-4bf7-97f3-c74e18382241 does not exist
Dec 03 01:43:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5675ba5c-d95e-4aaf-b354-32ef133a5207 does not exist
Dec 03 01:43:03 compute-0 sudo[361632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:43:03 compute-0 python3.9[361630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:43:03 compute-0 sudo[361632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:43:03 compute-0 sudo[361632]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:03 compute-0 sshd-session[361624]: Invalid user localhost from 34.66.72.251 port 36758
Dec 03 01:43:03 compute-0 sudo[361620]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:03 compute-0 sshd-session[361624]: Received disconnect from 34.66.72.251 port 36758:11: Bye Bye [preauth]
Dec 03 01:43:03 compute-0 sshd-session[361624]: Disconnected from invalid user localhost 34.66.72.251 port 36758 [preauth]
Dec 03 01:43:03 compute-0 sudo[361659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:43:03 compute-0 sudo[361659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:43:03 compute-0 sudo[361659]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:03 compute-0 ceph-mon[192821]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:43:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:43:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:03 compute-0 sudo[361757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjyidzbhtgussmcqvffbczfcjjilprps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726180.6944883-426-123571532459383/AnsiballZ_file.py'
Dec 03 01:43:03 compute-0 sudo[361757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:03 compute-0 python3.9[361759]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:43:03 compute-0 rsyslogd[188612]: imjournal from <compute-0:python3.9>: begin to drop messages due to rate-limiting
Dec 03 01:43:03 compute-0 sudo[361757]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:05 compute-0 sudo[361909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmrnbeqzteqizgbuapmalfwcwgyvleok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726184.395228-448-198230619508725/AnsiballZ_container_config_data.py'
Dec 03 01:43:05 compute-0 sudo[361909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:05 compute-0 python3.9[361911]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 03 01:43:05 compute-0 sudo[361909]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:05 compute-0 ceph-mon[192821]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:05 compute-0 podman[361912]: 2025-12-03 01:43:05.902024464 +0000 UTC m=+0.146069010 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=kepler, architecture=x86_64, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 01:43:07 compute-0 sudo[362081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyjrneygrylyepzpjagpoxwurynbhdhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726186.5058703-457-202844394872042/AnsiballZ_container_config_hash.py'
Dec 03 01:43:07 compute-0 sudo[362081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:07 compute-0 ceph-mon[192821]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:07 compute-0 python3.9[362083]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:43:07 compute-0 sudo[362081]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:08 compute-0 sshd-session[362131]: Received disconnect from 146.190.144.138 port 39912:11: Bye Bye [preauth]
Dec 03 01:43:08 compute-0 sshd-session[362131]: Disconnected from authenticating user root 146.190.144.138 port 39912 [preauth]
Dec 03 01:43:09 compute-0 ceph-mon[192821]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:09 compute-0 sudo[362235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixxbufwiivzpegsecumsvscmktweydqw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726187.9136636-467-194236144465928/AnsiballZ_edpm_container_manage.py'
Dec 03 01:43:09 compute-0 sudo[362235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:09 compute-0 python3[362237]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:43:10 compute-0 python3[362237]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d",
                                                     "Digest": "sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac",
                                                     "RepoTags": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute@sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-12-01T05:11:05.921630712Z",
                                                     "Config": {
                                                          "User": "root",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.4",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251125",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 10 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 601995467,
                                                     "VirtualSize": 601995467,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/586629c35ab12bf3c21aa8405321e52ee8dc3eb91fe319ec2e2bcffcf2f07750/diff:/var/lib/containers/storage/overlay/b726b38a9994fb8597c31b02de6a7067e1e6010e18192135f063d07cbad1efce/diff:/var/lib/containers/storage/overlay/816b6cf07292074c7d459b3269e12ec5823a680369545863b4ff246f9cf897b1/diff:/var/lib/containers/storage/overlay/9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c",
                                                               "sha256:4b40c712f1bd18fdb2c50c6adb38e6952f9d174873260f311696915f181f9947",
                                                               "sha256:eaeeda82071109aa7bb6c3500cc7a126797ce0a53bc0f8828831aba88104203b",
                                                               "sha256:c58c65fadb00ed08655f756d68fed13f115faec2bc2384f51ce46e18334fe2ae",
                                                               "sha256:2f6d51b7d12dca1a77173f044cfb4b6a796a560f1015e515fa8ee8a14f36c103"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.4",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251125",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 10 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "root",
                                                     "History": [
                                                          {
                                                               "created": "2025-11-25T03:00:15.634483436Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:c435edaaf9833341bf9650d5dcfda033191519e1d9c91ecfa082699fd3e149e4 in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T03:00:15.634561379Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 10 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T03:00:18.392267297Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.682983025Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream10",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683002525Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683016626Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683029656Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683039096Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683051027Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:55.032223959Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:55.512889527Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/centos.repo\" ]; then rm -f /etc/yum.repos.d/centos*.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:06.648921904Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:10.17980807Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:10.543770896Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/pki/tls/cert.pem\" ]; then ln -s /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /etc/pki/tls/cert.pem; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:10.845951852Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:11.140582401Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:11.595535873Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:11.873565728Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:12.161351256Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:12.439519965Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.03816645Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.326571045Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.607978165Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.880788572Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.152583884Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.423855069Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.694844119Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.965588963Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:15.237099026Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:17.576994187Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:17.848042619Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:18.119965201Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:19.072728213Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.222969012Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.223021953Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.223036743Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.223046834Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:21.244606139Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:59.941136358Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-base:3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:05:21.488793949Z",
                                                               "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:05:28.702733012Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:25.003646183Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-os:3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:26.030572464Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:42.357527997Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:44.698286094Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:10:54.136243993Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-base:3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:11:05.918804558Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-compute && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:11:07.91684183Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
                                                     ]
                                                }
                                           ]
                                           : quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 03 01:43:10 compute-0 sudo[362235]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:11 compute-0 sudo[362444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fztsgnogvrkhkhtwgwdzjrgeosaoaikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726190.7278993-475-271931596180484/AnsiballZ_stat.py'
Dec 03 01:43:11 compute-0 sudo[362444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:11 compute-0 ceph-mon[192821]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:11 compute-0 python3.9[362446]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:43:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:11 compute-0 sudo[362444]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:12 compute-0 sudo[362598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmihvwivqkjkszjxyhglqxflfyeimnlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726192.088257-484-639912176881/AnsiballZ_file.py'
Dec 03 01:43:12 compute-0 sudo[362598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:12 compute-0 python3.9[362600]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:12 compute-0 sudo[362598]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:13 compute-0 ceph-mon[192821]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:14 compute-0 sudo[362749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoznrvxowrszcpetmhscwjvatwacduic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726193.0282829-484-261702530865315/AnsiballZ_copy.py'
Dec 03 01:43:14 compute-0 sudo[362749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:14 compute-0 python3.9[362751]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726193.0282829-484-261702530865315/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:14 compute-0 sudo[362749]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:15 compute-0 sudo[362825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmeuoofejfcipxszzsnudyhmclwmbvuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726193.0282829-484-261702530865315/AnsiballZ_systemd.py'
Dec 03 01:43:15 compute-0 sudo[362825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:15 compute-0 python3.9[362827]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:43:15 compute-0 ceph-mon[192821]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:15 compute-0 sudo[362825]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:16 compute-0 sudo[362979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qseujdrcsyfkkgbimtyimcmpurpsrzvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726195.9337358-504-93297814071309/AnsiballZ_systemd.py'
Dec 03 01:43:16 compute-0 sudo[362979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:16 compute-0 python3.9[362981]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:43:16 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec 03 01:43:16 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:43:16.987 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 03 01:43:17 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:43:17.090 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec 03 01:43:17 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:43:17.090 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec 03 01:43:17 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:43:17.091 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec 03 01:43:17 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:43:17.091 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec 03 01:43:17 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:43:17.109 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec 03 01:43:17 compute-0 virtqemud[154511]: End of file while reading data: Input/output error
Dec 03 01:43:17 compute-0 virtqemud[154511]: End of file while reading data: Input/output error
Dec 03 01:43:17 compute-0 systemd[1]: libpod-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:43:17 compute-0 systemd[1]: libpod-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Consumed 4.190s CPU time.
Dec 03 01:43:17 compute-0 conmon[154605]: conmon 7418403b3c8b6908f3c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope/container/memory.events
Dec 03 01:43:17 compute-0 podman[362985]: 2025-12-03 01:43:17.309384357 +0000 UTC m=+0.404924428 container died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 01:43:17 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.timer: Deactivated successfully.
Dec 03 01:43:17 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec 03 01:43:17 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Failed to open /run/systemd/transient/7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: No such file or directory
Dec 03 01:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-userdata-shm.mount: Deactivated successfully.
Dec 03 01:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121-merged.mount: Deactivated successfully.
Dec 03 01:43:17 compute-0 podman[362985]: 2025-12-03 01:43:17.431302561 +0000 UTC m=+0.526842622 container cleanup 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:43:17 compute-0 podman[362985]: ceilometer_agent_compute
Dec 03 01:43:17 compute-0 ceph-mon[192821]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:17 compute-0 podman[363011]: ceilometer_agent_compute
Dec 03 01:43:17 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 03 01:43:17 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec 03 01:43:17 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 03 01:43:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec 03 01:43:17 compute-0 podman[363022]: 2025-12-03 01:43:17.855094876 +0000 UTC m=+0.252077341 container init 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 01:43:17 compute-0 ceilometer_agent_compute[363035]: + sudo -E kolla_set_configs
Dec 03 01:43:17 compute-0 podman[363022]: 2025-12-03 01:43:17.905442942 +0000 UTC m=+0.302425337 container start 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 01:43:17 compute-0 podman[363022]: ceilometer_agent_compute
Dec 03 01:43:17 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 03 01:43:17 compute-0 sudo[363042]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:43:17 compute-0 ceilometer_agent_compute[363035]: sudo: unable to send audit message: Operation not permitted
Dec 03 01:43:17 compute-0 sudo[363042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:43:17 compute-0 sudo[362979]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:18 compute-0 podman[363043]: 2025-12-03 01:43:18.017980454 +0000 UTC m=+0.089618683 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Validating config file
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Copying service configuration files
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: INFO:__main__:Writing out command to execute
Dec 03 01:43:18 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-307494c50137460f.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:43:18 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-307494c50137460f.service: Failed with result 'exit-code'.
Dec 03 01:43:18 compute-0 sudo[363042]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: ++ cat /run_command
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + ARGS=
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + sudo kolla_copy_cacerts
Dec 03 01:43:18 compute-0 sudo[363096]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: sudo: unable to send audit message: Operation not permitted
Dec 03 01:43:18 compute-0 sudo[363096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:43:18 compute-0 sudo[363096]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + [[ ! -n '' ]]
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + . kolla_extend_start
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + umask 0022
Dec 03 01:43:18 compute-0 ceilometer_agent_compute[363035]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 03 01:43:18 compute-0 podman[363066]: 2025-12-03 01:43:18.152176822 +0000 UTC m=+0.089290935 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:43:18 compute-0 podman[363076]: 2025-12-03 01:43:18.189087722 +0000 UTC m=+0.121223266 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:43:18 compute-0 sudo[363253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yggtxohpuvieqyrrmcsvxqtysflexjpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726198.315032-512-271983766859159/AnsiballZ_stat.py'
Dec 03 01:43:18 compute-0 sudo[363253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:19 compute-0 python3.9[363255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:43:19 compute-0 sudo[363253]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.183 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.183 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.183 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.184 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.185 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.186 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.187 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.188 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.189 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.190 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.191 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.192 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.193 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.194 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.195 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.196 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.196 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.196 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.196 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.196 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.222 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.223 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.223 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.223 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.224 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.224 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.224 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.224 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.224 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.225 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.225 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.225 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.225 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.225 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.225 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.226 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.226 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.226 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.226 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.226 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.226 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.227 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.227 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.227 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.228 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.228 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.228 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.228 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.228 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.228 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.228 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.229 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.229 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.229 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.229 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.229 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.229 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.229 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.230 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.230 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.230 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.230 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.231 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.231 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.231 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.231 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.232 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.232 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.232 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.232 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.233 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.233 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.233 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.233 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.233 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.234 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.234 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.234 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.234 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.234 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.235 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.235 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.235 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.235 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.235 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.236 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.236 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.236 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.236 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.236 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.237 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.237 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.237 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.237 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.237 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.237 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.238 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.238 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.238 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.238 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.239 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.239 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.239 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.239 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.239 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.240 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.240 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.240 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.241 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.241 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.241 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.241 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.241 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.241 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.241 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.242 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.242 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.242 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.242 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.242 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.243 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.243 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.243 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.243 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.243 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.243 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.244 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.244 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.244 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.244 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.244 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.244 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.245 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.245 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.245 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.245 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.246 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.246 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.246 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.247 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.247 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.247 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.248 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.248 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.248 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.248 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.249 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.249 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.249 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.249 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.249 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.250 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.250 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.250 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.250 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.250 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.251 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.251 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.252 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.252 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.252 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.252 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.252 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.252 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.253 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.253 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.253 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.253 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.253 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.253 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.254 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.254 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.254 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.254 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.257 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.259 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.261 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.262 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.277 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollster configurations in [['/etc/ceilometer/pollsters.d']].
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.278 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.278 14 INFO ceilometer.polling.manager [-] No dynamic pollster files found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.440 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.441 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.442 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.443 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.444 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.445 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.446 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.447 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.448 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.449 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.450 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.451 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.452 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.453 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.454 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.459 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec 03 01:43:19 compute-0 ceph-mon[192821]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.499 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, processing can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.500 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
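These two messages record that a single worker thread must execute every pollster in the source serially, so one polling cycle can take longer than the interval budgeted for it. A small self-contained sketch of the effect with concurrent.futures (the 0.1 s workload and the meter list are stand-ins; Ceilometer's real executor wiring is not shown in the log):

    # Sketch: more pollsters than workers => they queue and the cycle stretches.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def poll(meter):
        time.sleep(0.1)  # stand-in for one pollster's work
        return meter

    meters = ["power.state", "cpu", "memory.usage", "disk.device.read.bytes"]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as logged
        list(pool.map(poll, meters))
    print(f"cycle took {time.monotonic() - start:.2f}s for {len(meters)} pollsters")

With max_workers=1 the four tasks run back to back (about 0.4 s here); raising max_workers shortens the cycle roughly proportionally.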
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.501 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.502 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.503 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
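The local_instances discovery runs against the libvirt URI logged here. A sketch of the same query using the libvirt Python bindings, assuming libvirt-python is installed and libvirtd is reachable (listing all domains is an illustrative stand-in for the agent's discovery, not its exact code):

    # Sketch: enumerate local domains over qemu:///system, read-only.
    import libvirt  # libvirt-python

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():  # empty on this host, per the log
            print(dom.name(), dom.state())
    finally:
        conn.close()

On this node the list comes back empty, which is why every pollster below is skipped.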
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 sudo[363344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csbyjkzdnzamhoyyafoudqpchafvmozb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726198.315032-512-271983766859159/AnsiballZ_file.py'
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 sudo[363344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'cpu': [], 'disk.ephemeral.size': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.incoming.packets.drop': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'cpu': [], 'disk.ephemeral.size': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.incoming.packets.drop': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:43:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
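All of the pollsters above are skipped because local_instances discovery found no instances, and each skipped pollster is then reported as finished. The doubled space in "no  resources found this cycle" suggests a format string whose resource-type argument was empty. A sketch of such a guard (the format string and signature are assumptions inferred from the message, not Ceilometer's code):

    # Sketch of the skip guard seen above; the format string is an assumption
    # inferred from the doubled space in the logged message.
    import logging

    LOG = logging.getLogger(__name__)

    def run_pollster(name, resources, resource_type=""):
        if not resources:
            LOG.debug("Skip pollster %s, no %s resources found this cycle",
                      name, resource_type)
            return []
        return list(resources)  # stand-in for real sample collection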
Dec 03 01:43:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:19 compute-0 python3.9[363347]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/node_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/node_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:43:19 compute-0 sudo[363344]: pam_unix(sudo:session): session closed for user root
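The sudo/pam_unix pair bracketing the ansible-ansible.legacy.file call is Ansible's become flow: the AnsiballZ payload is copied to a temp dir under /home/zuul/.ansible/tmp and executed as root via /bin/sh -c. The file task itself pins mode and ownership on the healthcheck directory; a sketch of the equivalent direct calls (doing this with os/pwd/grp instead of Ansible is purely illustrative):

    # Sketch: the ownership/mode change the logged file task enforces.
    import grp
    import os
    import pwd

    path = "/var/lib/openstack/healthchecks/node_exporter/"
    os.chmod(path, 0o700)  # mode=0700, from the Invoked line
    os.chown(path, pwd.getpwnam("zuul").pw_uid, grp.getgrnam("zuul").gr_gid)
    # setype=container_file_t additionally needs an SELinux relabel,
    # e.g. chcon(1); not shown here.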
Dec 03 01:43:20 compute-0 sudo[363497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzvqzrmzrxxurvvqakzinuqliblysvri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726200.1885495-526-100969903495289/AnsiballZ_container_config_data.py'
Dec 03 01:43:20 compute-0 sudo[363497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:21 compute-0 python3.9[363499]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 03 01:43:21 compute-0 sudo[363497]: pam_unix(sudo:session): session closed for user root
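ansible-container_config_data gathers the JSON config files matching config_pattern under config_path and applies any config_overrides on top. A minimal sketch under those assumptions (the module's real merge semantics are not visible in the log):

    # Sketch: collect config files the way the logged invocation describes.
    import glob
    import json
    import os

    config_path = "/var/lib/openstack/config/telemetry"
    config_pattern = "node_exporter.json"

    configs = {}
    for path in glob.glob(os.path.join(config_path, config_pattern)):
        with open(path) as f:
            configs[os.path.basename(path)] = json.load(f)
    # config_overrides={} in the log, so there is nothing to merge here.
    print(sorted(configs))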
Dec 03 01:43:21 compute-0 ceph-mon[192821]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:21 compute-0 sudo[363649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhkqufccyjqbpqoucjnmrsvnwvhubxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726201.442845-535-127893357377471/AnsiballZ_container_config_hash.py'
Dec 03 01:43:21 compute-0 sudo[363649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:22 compute-0 python3.9[363651]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:43:22 compute-0 sudo[363649]: pam_unix(sudo:session): session closed for user root
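ansible-container_config_hash derives a content hash for each config volume under config_vol_prefix, so a changed config can trigger a container restart. A sketch of one way to hash a config tree deterministically (traversal order and hash choice are assumptions; the module's actual algorithm is not shown in the log):

    # Sketch: stable content hash over a config directory tree.
    import hashlib
    import os

    def config_hash(root="/var/lib/config-data"):
        digest = hashlib.sha256()
        for dirpath, _dirs, files in sorted(os.walk(root)):
            for name in sorted(files):
                full = os.path.join(dirpath, name)
                digest.update(full.encode())
                with open(full, "rb") as f:
                    digest.update(f.read())
        return digest.hexdigest()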
Dec 03 01:43:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:23 compute-0 sudo[363801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mywipdwzmsbddptchllcxqkoejcgwlus ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726202.7121022-545-56061254171944/AnsiballZ_edpm_container_manage.py'
Dec 03 01:43:23 compute-0 sudo[363801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:23 compute-0 ceph-mon[192821]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:23 compute-0 python3[363803]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:43:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:23 compute-0 python3[363803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83",
                                                     "Digest": "sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80",
                                                     "RepoTags": [
                                                          "quay.io/prometheus/node-exporter:v1.5.0"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",
                                                          "quay.io/prometheus/node-exporter@sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2022-11-29T19:06:14.987394068Z",
                                                     "Config": {
                                                          "User": "nobody",
                                                          "ExposedPorts": {
                                                               "9100/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                                                          ],
                                                          "Entrypoint": [
                                                               "/bin/node_exporter"
                                                          ],
                                                          "Labels": {
                                                               "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                          }
                                                     },
                                                     "Version": "19.03.8",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 23851788,
                                                     "VirtualSize": 23851788,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c/diff:/var/lib/containers/storage/overlay/0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8",
                                                               "sha256:9f2d25037e3e722ca7f4ca9c7a885f19a2ce11140592ee0acb323dec3b26640d",
                                                               "sha256:76857a93cd03e12817c36c667cc3263d58886232cad116327e55d79036e5977d"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "nobody",
                                                     "History": [
                                                          {
                                                               "created": "2022-10-26T06:30:33.700079457Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:5e991de3200129dc05c3130f7a64bebb5704486b4f773bfcaa6b13165d6c2416 in / "
                                                          },
                                                          {
                                                               "created": "2022-10-26T06:30:33.794221299Z",
                                                               "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-15T10:54:54.845364304Z",
                                                               "created_by": "/bin/sh -c #(nop)  MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-15T10:54:55.54866664Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY dir:02c961e21531be78a67ed9bad67d03391cfedcead8b0a35cfb9171346636f11a in / ",
                                                               "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.622645057Z",
                                                               "created_by": "/bin/sh -c #(nop)  LABEL maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.810765105Z",
                                                               "created_by": "/bin/sh -c #(nop)  ARG ARCH=amd64",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.990897895Z",
                                                               "created_by": "/bin/sh -c #(nop)  ARG OS=linux",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.358293759Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:3ef20dd145817033186947b860c3b6f7bb06d4c435257258c0e5df15f6e51eb7 in /bin/node_exporter "
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.630644274Z",
                                                               "created_by": "/bin/sh -c #(nop)  EXPOSE 9100",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.79596292Z",
                                                               "created_by": "/bin/sh -c #(nop)  USER nobody",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.987394068Z",
                                                               "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/bin/node_exporter\"]",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/prometheus/node-exporter:v1.5.0"
                                                     ]
                                                }
                                           ]
                                           : quay.io/prometheus/node-exporter:v1.5.0
Dec 03 01:43:24 compute-0 systemd[1]: libpod-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec 03 01:43:24 compute-0 systemd[1]: libpod-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Consumed 7.187s CPU time.
Dec 03 01:43:24 compute-0 podman[363850]: 2025-12-03 01:43:24.06089963 +0000 UTC m=+0.093831812 container died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:43:24 compute-0 systemd[1]: 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-4b7ed1ba46ffddbb.timer: Deactivated successfully.
Dec 03 01:43:24 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.
Dec 03 01:43:24 compute-0 systemd[1]: 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-4b7ed1ba46ffddbb.service: Failed to open /run/systemd/transient/0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-4b7ed1ba46ffddbb.service: No such file or directory
Dec 03 01:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-userdata-shm.mount: Deactivated successfully.
Dec 03 01:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8-merged.mount: Deactivated successfully.
Dec 03 01:43:24 compute-0 podman[363850]: 2025-12-03 01:43:24.158386192 +0000 UTC m=+0.191318314 container cleanup 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:43:24 compute-0 python3[363803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop node_exporter
Dec 03 01:43:24 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:43:24 compute-0 systemd[1]: 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-4b7ed1ba46ffddbb.timer: Failed to open /run/systemd/transient/0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-4b7ed1ba46ffddbb.timer: No such file or directory
Dec 03 01:43:24 compute-0 systemd[1]: 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-4b7ed1ba46ffddbb.service: Failed to open /run/systemd/transient/0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-4b7ed1ba46ffddbb.service: No such file or directory
Dec 03 01:43:24 compute-0 podman[363876]: 2025-12-03 01:43:24.278314611 +0000 UTC m=+0.091983280 container remove 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:43:24 compute-0 podman[363877]: Error: no container with ID 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb found in database: no such container
Dec 03 01:43:24 compute-0 python3[363803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force node_exporter
Dec 03 01:43:24 compute-0 systemd[1]: edpm_node_exporter.service: Control process exited, code=exited, status=125/n/a
Dec 03 01:43:24 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 03 01:43:24 compute-0 podman[363900]: 2025-12-03 01:43:24.379341202 +0000 UTC m=+0.072340671 container create 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:43:24 compute-0 podman[363900]: 2025-12-03 01:43:24.342339449 +0000 UTC m=+0.035338948 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 03 01:43:24 compute-0 python3[363803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec 03 01:43:24 compute-0 systemd[1]: edpm_node_exporter.service: Scheduled restart job, restart counter is at 1.
Dec 03 01:43:24 compute-0 systemd[1]: Stopped node_exporter container.
Dec 03 01:43:24 compute-0 systemd[1]: Starting node_exporter container...
Dec 03 01:43:24 compute-0 sshd-session[363832]: Invalid user userroot from 80.253.31.232 port 38036
Dec 03 01:43:24 compute-0 systemd[1]: Started libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope.
Dec 03 01:43:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:24 compute-0 sshd-session[363832]: Received disconnect from 80.253.31.232 port 38036:11: Bye Bye [preauth]
Dec 03 01:43:24 compute-0 sshd-session[363832]: Disconnected from invalid user userroot 80.253.31.232 port 38036 [preauth]
Dec 03 01:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c4c51a2ebaba9c9962c48edc737f49dc31bd7bde2654d119963ea40bdad8c7/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c4c51a2ebaba9c9962c48edc737f49dc31bd7bde2654d119963ea40bdad8c7/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.
Dec 03 01:43:24 compute-0 podman[363912]: 2025-12-03 01:43:24.732428462 +0000 UTC m=+0.303623980 container init 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.762Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.762Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.763Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.763Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.763Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=arp
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=bcache
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=bonding
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=cpu
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=edac
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=filefd
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=netclass
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=netdev
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=netstat
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=nfs
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=nvme
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=softnet
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=systemd
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=xfs
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.764Z caller=node_exporter.go:117 level=info collector=zfs
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.765Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 03 01:43:24 compute-0 node_exporter[363933]: ts=2025-12-03T01:43:24.766Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 03 01:43:24 compute-0 podman[363912]: 2025-12-03 01:43:24.775006871 +0000 UTC m=+0.346202279 container start 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:43:24 compute-0 podman[363921]: node_exporter
Dec 03 01:43:24 compute-0 python3[363803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start node_exporter
Dec 03 01:43:24 compute-0 systemd[1]: Started node_exporter container.
Dec 03 01:43:24 compute-0 podman[363943]: 2025-12-03 01:43:24.896223206 +0000 UTC m=+0.097747021 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:43:25 compute-0 sudo[363801]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:25 compute-0 ceph-mon[192821]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:25 compute-0 sudo[364137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjetmcbpyjhbeegemltufbvngbgpqzga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726205.3682115-553-253816080073114/AnsiballZ_stat.py'
Dec 03 01:43:25 compute-0 sudo[364137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:26 compute-0 python3.9[364139]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:43:26 compute-0 sudo[364137]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:27 compute-0 sudo[364291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tipekvihgpwkxqvotcgpwgjgxmvqvafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726206.647387-562-148470562836889/AnsiballZ_file.py'
Dec 03 01:43:27 compute-0 sudo[364291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:27 compute-0 python3.9[364293]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:27 compute-0 sudo[364291]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:27 compute-0 ceph-mon[192821]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:43:28
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups']
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:43:28 compute-0 podman[364346]: 2025-12-03 01:43:28.50003027 +0000 UTC m=+0.134060604 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:43:28 compute-0 podman[364347]: 2025-12-03 01:43:28.589794997 +0000 UTC m=+0.221293810 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:43:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:43:29 compute-0 sudo[364520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgjfmnnjudsiaqjtcjjfpsijvpjvmwwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726207.5728333-562-86863657520914/AnsiballZ_copy.py'
Dec 03 01:43:29 compute-0 podman[364460]: 2025-12-03 01:43:29.50309992 +0000 UTC m=+0.124154948 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 03 01:43:29 compute-0 podman[364459]: 2025-12-03 01:43:29.509763606 +0000 UTC m=+0.129906849 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125)
Dec 03 01:43:29 compute-0 sudo[364520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:29 compute-0 ceph-mon[192821]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:29 compute-0 python3.9[364525]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726207.5728333-562-86863657520914/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:29 compute-0 sudo[364520]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:29 compute-0 podman[158098]: time="2025-12-03T01:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:43:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42582 "" "Go-http-client/1.1"
Dec 03 01:43:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
Dec 03 01:43:30 compute-0 sudo[364602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfpcrxdeznnwmdlkhrepvsqtuzmcfiqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726207.5728333-562-86863657520914/AnsiballZ_systemd.py'
Dec 03 01:43:30 compute-0 sudo[364602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:30 compute-0 python3.9[364604]: ansible-systemd Invoked with state=started name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:43:30 compute-0 ceph-mon[192821]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:30 compute-0 sudo[364602]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:31 compute-0 sshd-session[364529]: Invalid user gns3 from 103.146.202.174 port 39546
Dec 03 01:43:31 compute-0 openstack_network_exporter[160250]: ERROR   01:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:43:31 compute-0 openstack_network_exporter[160250]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:43:31 compute-0 openstack_network_exporter[160250]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:43:31 compute-0 openstack_network_exporter[160250]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:43:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:43:31 compute-0 openstack_network_exporter[160250]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:43:31 compute-0 openstack_network_exporter[160250]: 
Dec 03 01:43:31 compute-0 sshd-session[364529]: Received disconnect from 103.146.202.174 port 39546:11: Bye Bye [preauth]
Dec 03 01:43:31 compute-0 sshd-session[364529]: Disconnected from invalid user gns3 103.146.202.174 port 39546 [preauth]
Dec 03 01:43:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:32 compute-0 sudo[364756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvlqtwhvqsxarpkeepzmsjdlxwvrvxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726211.7778082-582-237022756299975/AnsiballZ_systemd.py'
Dec 03 01:43:32 compute-0 sudo[364756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:32 compute-0 python3.9[364758]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:43:32 compute-0 ceph-mon[192821]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:32 compute-0 systemd[1]: Stopping node_exporter container...
Dec 03 01:43:32 compute-0 systemd[1]: libpod-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec 03 01:43:32 compute-0 podman[364762]: 2025-12-03 01:43:32.870646337 +0000 UTC m=+0.098423470 container died 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:43:32 compute-0 systemd[1]: 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df-111ca9e5d77efe24.timer: Deactivated successfully.
Dec 03 01:43:32 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.
Dec 03 01:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df-userdata-shm.mount: Deactivated successfully.
Dec 03 01:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c4c51a2ebaba9c9962c48edc737f49dc31bd7bde2654d119963ea40bdad8c7-merged.mount: Deactivated successfully.
Dec 03 01:43:32 compute-0 podman[364762]: 2025-12-03 01:43:32.964514638 +0000 UTC m=+0.192291751 container cleanup 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:43:32 compute-0 podman[364762]: node_exporter
Dec 03 01:43:32 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:43:32 compute-0 systemd[1]: libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec 03 01:43:33 compute-0 podman[364789]: node_exporter
Dec 03 01:43:33 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 03 01:43:33 compute-0 systemd[1]: Stopped node_exporter container.
Dec 03 01:43:33 compute-0 systemd[1]: Starting node_exporter container...
Dec 03 01:43:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c4c51a2ebaba9c9962c48edc737f49dc31bd7bde2654d119963ea40bdad8c7/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c4c51a2ebaba9c9962c48edc737f49dc31bd7bde2654d119963ea40bdad8c7/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.
Dec 03 01:43:33 compute-0 podman[364800]: 2025-12-03 01:43:33.349007335 +0000 UTC m=+0.224927212 container init 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.380Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.380Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.381Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.382Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.382Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.382Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.383Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.383Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.383Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=arp
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=bcache
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=bonding
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=cpu
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=edac
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=filefd
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=netclass
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=netdev
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=netstat
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=nfs
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=nvme
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=softnet
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=systemd
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=xfs
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.384Z caller=node_exporter.go:117 level=info collector=zfs
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.385Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 03 01:43:33 compute-0 node_exporter[364814]: ts=2025-12-03T01:43:33.386Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
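[annotation] With TLS enabled on [::]:9100, scrapes must speak HTTPS. A quick probe of the kind a Prometheus scrape performs, as a sketch (the CA file path is an assumption inferred from the /var/lib/openstack/certs/telemetry/default mount in the container config below; substitute the CA that actually signed the serving certificate):

    import ssl
    import urllib.request

    # Assumed CA location; adjust to the real bundle for this deployment.
    ctx = ssl.create_default_context(
        cafile='/var/lib/openstack/certs/telemetry/default/ca.crt')

    with urllib.request.urlopen('https://compute-0:9100/metrics',
                                context=ctx, timeout=5) as resp:
        print(resp.status, resp.headers.get('Content-Type'))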
Dec 03 01:43:33 compute-0 podman[364800]: 2025-12-03 01:43:33.394745672 +0000 UTC m=+0.270665529 container start 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:43:33 compute-0 podman[364800]: node_exporter
Dec 03 01:43:33 compute-0 systemd[1]: Started node_exporter container.
Dec 03 01:43:33 compute-0 sudo[364756]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:33 compute-0 podman[364824]: 2025-12-03 01:43:33.515488164 +0000 UTC m=+0.100453837 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
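[annotation] The health_status=healthy event above is emitted by the transient systemd timer wrapping `podman healthcheck run <container>` (the same units that show up later when the podman_exporter container is torn down). The check can be driven by hand the same way, as a sketch:

    import subprocess

    # Exit code 0 means healthy, non-zero unhealthy; container name from the log above.
    rc = subprocess.run(['podman', 'healthcheck', 'run', 'node_exporter']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')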
Dec 03 01:43:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:34 compute-0 sudo[364997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawdwogqjjbhjbkzddsxqhqopogagtwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726213.7824008-590-237696300829843/AnsiballZ_stat.py'
Dec 03 01:43:34 compute-0 sudo[364997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:34 compute-0 python3.9[364999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:43:34 compute-0 sudo[364997]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:34 compute-0 ceph-mon[192821]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:35 compute-0 sudo[365075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nluyqmxruxxvwtwykbbpqpkqsxrbywzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726213.7824008-590-237696300829843/AnsiballZ_file.py'
Dec 03 01:43:35 compute-0 sudo[365075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:35 compute-0 python3.9[365077]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/podman_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/podman_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:43:35 compute-0 sudo[365075]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:36 compute-0 sudo[365242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdvghrlizwsfezqplddpgaiidmxvjjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726215.8371933-604-73897866190356/AnsiballZ_container_config_data.py'
Dec 03 01:43:36 compute-0 sudo[365242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:36 compute-0 podman[365201]: 2025-12-03 01:43:36.439459644 +0000 UTC m=+0.135206096 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public)
Dec 03 01:43:36 compute-0 python3.9[365248]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 03 01:43:36 compute-0 sudo[365242]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:36 compute-0 ceph-mon[192821]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:37 compute-0 sudo[365398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqvjbnasswgepqywnjplzngqvonplnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726217.0473208-613-8930226316900/AnsiballZ_container_config_hash.py'
Dec 03 01:43:37 compute-0 sudo[365398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:37 compute-0 python3.9[365400]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:43:37 compute-0 sudo[365398]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:43:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
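[annotation] The pg targets in the autoscaler pass above are capacity_ratio × bias × PG budget, then quantized toward a power of two subject to per-pool minimums. The budget of 300 is inferred from the logged numbers (0.0021557... / 7.1857e-06 = 300) and is consistent with the default mon_target_pg_per_osd=100 across 3 OSDs; treat it as an assumption. A sketch reproducing two of the lines:

    # PG budget inferred from the log; assumption: mon_target_pg_per_osd (100) * num_osds (3).
    BUDGET = 300

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * BUDGET

    # Pool '.mgr': using 7.185749983720779e-06 of space, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 -> quantized to 1
    # Pool 'cephfs.cephfs.meta': using 5.087256625643029e-07 of space, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 -> quantized to 16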
Dec 03 01:43:38 compute-0 ceph-mon[192821]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:38 compute-0 sudo[365550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfuqbicxxlbpxscgiqihohihnfgswdm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726218.2355638-623-76654811515415/AnsiballZ_edpm_container_manage.py'
Dec 03 01:43:38 compute-0 sudo[365550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:39 compute-0 python3[365552]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:43:39 compute-0 python3[365552]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815",
                                                     "Digest": "sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",
                                                     "RepoTags": [
                                                          "quay.io/navidys/prometheus-podman-exporter:v1.10.1"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/navidys/prometheus-podman-exporter@sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",
                                                          "quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2024-03-17T01:45:00.251170784Z",
                                                     "Config": {
                                                          "User": "nobody",
                                                          "ExposedPorts": {
                                                               "9882/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                                                          ],
                                                          "Entrypoint": [
                                                               "/bin/podman_exporter"
                                                          ],
                                                          "Labels": {
                                                               "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 33863535,
                                                     "VirtualSize": 33863535,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1/diff:/var/lib/containers/storage/overlay/1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed",
                                                               "sha256:6b83872188a9e8912bee1d43add5e9bc518601b02a14a364c0da43b0d59acf33",
                                                               "sha256:7a73cdcd46b4e3c3a632bae42ad152935f204b50dd02f0a46070f81446516318"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "nobody",
                                                     "History": [
                                                          {
                                                               "created": "2023-12-05T20:23:06.467739954Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:ee9bb8755ccbdd689b434d9b4ac7518e972699604ecda33e4ddc2a15d2831443 in / "
                                                          },
                                                          {
                                                               "created": "2023-12-05T20:23:06.550971969Z",
                                                               "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2023-12-15T10:54:58.99835989Z",
                                                               "created_by": "MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2023-12-15T10:54:58.99835989Z",
                                                               "created_by": "COPY /rootfs / # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "LABEL maintainer=Navid Yaghoobi <navidys@fedoraproject.org>",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETPLATFORM",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETOS",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETARCH",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "COPY ./bin/remote/prometheus-podman-exporter-amd64 /bin/podman_exporter # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "EXPOSE map[9882/tcp:{}]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "USER nobody",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ENTRYPOINT [\"/bin/podman_exporter\"]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/navidys/prometheus-podman-exporter:v1.10.1"
                                                     ]
                                                }
                                           ]
                                           : quay.io/navidys/prometheus-podman-exporter:v1.10.1
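[annotation] The record dumped above is standard `podman image inspect` output: a JSON array with one element per image. Pulling the interesting fields back out programmatically, as a sketch:

    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'image', 'inspect',
         'quay.io/navidys/prometheus-podman-exporter:v1.10.1'],
        capture_output=True, text=True, check=True).stdout

    img = json.loads(out)[0]            # inspect always returns an array
    print(img['Digest'])                # sha256:7b7f37816f4a7824...
    print(img['Config']['Entrypoint'])  # ['/bin/podman_exporter']
    print(img['Config']['User'])        # nobody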
Dec 03 01:43:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:39 compute-0 podman[158098]: @ - - [03/Dec/2025:01:12:07 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 3229597 "" "Go-http-client/1.1"
Dec 03 01:43:39 compute-0 systemd[1]: libpod-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec 03 01:43:39 compute-0 systemd[1]: libpod-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Consumed 4.749s CPU time.
Dec 03 01:43:39 compute-0 podman[365601]: 2025-12-03 01:43:39.730202457 +0000 UTC m=+0.099810008 container died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:43:39 compute-0 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-85f73eef1cf9430.timer: Deactivated successfully.
Dec 03 01:43:39 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.
Dec 03 01:43:39 compute-0 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-85f73eef1cf9430.service: Failed to open /run/systemd/transient/7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-85f73eef1cf9430.service: No such file or directory
Dec 03 01:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-userdata-shm.mount: Deactivated successfully.
Dec 03 01:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807-merged.mount: Deactivated successfully.
Dec 03 01:43:39 compute-0 podman[365601]: 2025-12-03 01:43:39.817914536 +0000 UTC m=+0.187522057 container cleanup 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:43:39 compute-0 python3[365552]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop podman_exporter
Dec 03 01:43:39 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:43:39 compute-0 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-85f73eef1cf9430.timer: Failed to open /run/systemd/transient/7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-85f73eef1cf9430.timer: No such file or directory
Dec 03 01:43:39 compute-0 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-85f73eef1cf9430.service: Failed to open /run/systemd/transient/7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-85f73eef1cf9430.service: No such file or directory
Dec 03 01:43:39 compute-0 podman[365628]: 2025-12-03 01:43:39.955342713 +0000 UTC m=+0.101134434 container remove 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:43:39 compute-0 podman[365629]: Error: no container with name or ID "podman_exporter" found: no such container
Dec 03 01:43:39 compute-0 python3[365552]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force podman_exporter
Dec 03 01:43:39 compute-0 systemd[1]: edpm_podman_exporter.service: Control process exited, code=exited, status=125/n/a
Dec 03 01:43:39 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 03 01:43:40 compute-0 podman[365649]: 2025-12-03 01:43:40.101044652 +0000 UTC m=+0.116004421 container create 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:43:40 compute-0 podman[365649]: 2025-12-03 01:43:40.037775185 +0000 UTC m=+0.052735014 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 03 01:43:40 compute-0 python3[365552]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
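[annotation] The PODMAN-CONTAINER-DEBUG lines spell out the loop edpm_container_manage drives for a recreated container: stop, force-remove, create with the rendered flags, start. The same sequence reduced to a sketch (flags abridged from the logged create command; stop/rm are allowed to fail when the container is already gone, as the "no container with name or ID" error above shows):

    import subprocess

    NAME = 'podman_exporter'
    IMAGE = 'quay.io/navidys/prometheus-podman-exporter:v1.10.1'

    # Tear-down is best effort; the container may not exist yet.
    subprocess.run(['podman', 'stop', NAME], check=False)
    subprocess.run(['podman', 'rm', '--force', NAME], check=False)

    # Re-create and start; only a subset of the logged flags is shown here.
    subprocess.run(['podman', 'create', '--name', NAME,
                    '--network', 'host', '--publish', '9882:9882',
                    '--user', 'root', '--privileged=True',
                    IMAGE,
                    '--web.config.file=/etc/podman_exporter/podman_exporter.yaml'],
                   check=True)
    subprocess.run(['podman', 'start', NAME], check=True)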
Dec 03 01:43:40 compute-0 systemd[1]: edpm_podman_exporter.service: Scheduled restart job, restart counter is at 1.
Dec 03 01:43:40 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 03 01:43:40 compute-0 systemd[1]: Starting podman_exporter container...
Dec 03 01:43:40 compute-0 systemd[1]: Started libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope.
Dec 03 01:43:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26f5ee6d12bff17e7cd21d9ec23ce43f49bc60e25ead8b55a9d01ad5f159ae1/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26f5ee6d12bff17e7cd21d9ec23ce43f49bc60e25ead8b55a9d01ad5f159ae1/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.
Dec 03 01:43:40 compute-0 podman[365662]: 2025-12-03 01:43:40.42185423 +0000 UTC m=+0.282149670 container init 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:43:40 compute-0 podman_exporter[365686]: ts=2025-12-03T01:43:40.460Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 03 01:43:40 compute-0 podman_exporter[365686]: ts=2025-12-03T01:43:40.460Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 03 01:43:40 compute-0 podman_exporter[365686]: ts=2025-12-03T01:43:40.460Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 03 01:43:40 compute-0 podman_exporter[365686]: ts=2025-12-03T01:43:40.460Z caller=handler.go:105 level=info collector=container
Dec 03 01:43:40 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:40 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 03 01:43:40 compute-0 podman[365662]: 2025-12-03 01:43:40.465292413 +0000 UTC m=+0.325587803 container start 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:43:40 compute-0 podman[158098]: time="2025-12-03T01:43:40Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:43:40 compute-0 podman[365674]: podman_exporter
Dec 03 01:43:40 compute-0 python3[365552]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start podman_exporter
Dec 03 01:43:40 compute-0 systemd[1]: Started podman_exporter container.
Dec 03 01:43:40 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:40 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43262 "" "Go-http-client/1.1"
Dec 03 01:43:40 compute-0 podman_exporter[365686]: ts=2025-12-03T01:43:40.578Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 03 01:43:40 compute-0 podman_exporter[365686]: ts=2025-12-03T01:43:40.579Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 03 01:43:40 compute-0 podman_exporter[365686]: ts=2025-12-03T01:43:40.580Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 03 01:43:40 compute-0 podman[365695]: 2025-12-03 01:43:40.586151268 +0000 UTC m=+0.102687198 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:43:40 compute-0 systemd[1]: 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195-6686609766e628a9.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:43:40 compute-0 systemd[1]: 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195-6686609766e628a9.service: Failed with result 'exit-code'.
Dec 03 01:43:40 compute-0 sudo[365550]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:40 compute-0 ceph-mon[192821]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:41 compute-0 sudo[365890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltbppjkjdyphpcxksbkzqtcobkbxjuhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726221.0286672-631-80219190256519/AnsiballZ_stat.py'
Dec 03 01:43:41 compute-0 sudo[365890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:41 compute-0 python3.9[365892]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:43:41 compute-0 sudo[365890]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:42 compute-0 ceph-mon[192821]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:42 compute-0 sudo[366044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-useikleguothgbcphwjaodaxtoggcpsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726222.267235-640-42975248352260/AnsiballZ_file.py'
Dec 03 01:43:42 compute-0 sudo[366044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:43 compute-0 python3.9[366046]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:43 compute-0 sudo[366044]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:44 compute-0 sudo[366195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdhfhlzitchukgueekounceqerlzggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726223.2142885-640-78038970217894/AnsiballZ_copy.py'
Dec 03 01:43:44 compute-0 sudo[366195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:44 compute-0 python3.9[366197]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726223.2142885-640-78038970217894/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:44 compute-0 sudo[366195]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:44 compute-0 ceph-mon[192821]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:45 compute-0 sudo[366271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdcvgaqnlkaikcsebkndlczlywdrbrgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726223.2142885-640-78038970217894/AnsiballZ_systemd.py'
Dec 03 01:43:45 compute-0 sudo[366271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:45 compute-0 python3.9[366273]: ansible-systemd Invoked with state=started name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:43:45 compute-0 sudo[366271]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:46 compute-0 ceph-mon[192821]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:46 compute-0 sudo[366425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsokyflxflwgvhnyapfywnhcurtiftyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726226.219683-660-89636398114675/AnsiballZ_systemd.py'
Dec 03 01:43:46 compute-0 sudo[366425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:47 compute-0 python3.9[366427]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:43:47 compute-0 systemd[1]: Stopping podman_exporter container...
Dec 03 01:43:47 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:40 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec 03 01:43:47 compute-0 systemd[1]: libpod-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec 03 01:43:47 compute-0 podman[366431]: 2025-12-03 01:43:47.360109336 +0000 UTC m=+0.101448143 container died 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:43:47 compute-0 systemd[1]: 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195-6686609766e628a9.timer: Deactivated successfully.
Dec 03 01:43:47 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.397 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.398 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.426 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.426 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.426 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195-userdata-shm.mount: Deactivated successfully.
Dec 03 01:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b26f5ee6d12bff17e7cd21d9ec23ce43f49bc60e25ead8b55a9d01ad5f159ae1-merged.mount: Deactivated successfully.
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.445 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.445 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.446 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.446 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.446 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:47 compute-0 podman[366431]: 2025-12-03 01:43:47.458022071 +0000 UTC m=+0.199360888 container cleanup 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:43:47 compute-0 podman[366431]: podman_exporter
Dec 03 01:43:47 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.484 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.484 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.485 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
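The acquire/release pair around clean_compute_node_cache is oslo.concurrency at work: the resource tracker guards its critical sections with a named "compute_resources" lock, and the lockutils wrapper emits the waited/held DEBUG lines seen above. A minimal sketch, assuming oslo.concurrency is installed (the function body is illustrative):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # Held ~0.000s in the log: the work itself is cheap; the lock only
    # serializes it against _update_available_resource further below.
    pass

clean_compute_node_cache()
```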
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.485 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.485 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:43:47 compute-0 systemd[1]: libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec 03 01:43:47 compute-0 podman[366457]: podman_exporter
Dec 03 01:43:47 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 03 01:43:47 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 03 01:43:47 compute-0 systemd[1]: Starting podman_exporter container...
Dec 03 01:43:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26f5ee6d12bff17e7cd21d9ec23ce43f49bc60e25ead8b55a9d01ad5f159ae1/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26f5ee6d12bff17e7cd21d9ec23ce43f49bc60e25ead8b55a9d01ad5f159ae1/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:47 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.
Dec 03 01:43:47 compute-0 podman[366469]: 2025-12-03 01:43:47.847065115 +0000 UTC m=+0.243856421 container init 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:43:47 compute-0 podman_exporter[366501]: ts=2025-12-03T01:43:47.871Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 03 01:43:47 compute-0 podman_exporter[366501]: ts=2025-12-03T01:43:47.871Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 03 01:43:47 compute-0 podman_exporter[366501]: ts=2025-12-03T01:43:47.871Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 03 01:43:47 compute-0 podman_exporter[366501]: ts=2025-12-03T01:43:47.871Z caller=handler.go:105 level=info collector=container
Dec 03 01:43:47 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:47 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 03 01:43:47 compute-0 podman[158098]: time="2025-12-03T01:43:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:43:47 compute-0 podman[366469]: 2025-12-03 01:43:47.886249829 +0000 UTC m=+0.283041125 container start 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:43:47 compute-0 podman[366469]: podman_exporter
Dec 03 01:43:47 compute-0 systemd[1]: Started podman_exporter container.
Dec 03 01:43:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:43:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484882705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:43:47 compute-0 sudo[366425]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:47 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43264 "" "Go-http-client/1.1"
Dec 03 01:43:47 compute-0 podman_exporter[366501]: ts=2025-12-03T01:43:47.972Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 03 01:43:47 compute-0 podman_exporter[366501]: ts=2025-12-03T01:43:47.973Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 03 01:43:47 compute-0 podman_exporter[366501]: ts=2025-12-03T01:43:47.974Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 03 01:43:47 compute-0 nova_compute[351485]: 2025-12-03 01:43:47.978 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
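Both the "Running cmd (subprocess)" line and this matching "returned: 0 in 0.493s" line are emitted by oslo.concurrency's processutils around the ceph df call the libvirt driver uses to size its RBD-backed storage. A minimal sketch, assuming oslo.concurrency plus a reachable Ceph cluster with the same client keyring (exact JSON field names vary slightly across Ceph releases):

```python
import json

from oslo_concurrency import processutils

# execute() logs the command, runs it, and returns (stdout, stderr).
out, _err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
)
stats = json.loads(out)["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])
```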
Dec 03 01:43:47 compute-0 podman[366511]: 2025-12-03 01:43:47.993257547 +0000 UTC m=+0.098773089 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
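The health_status=healthy event above is produced by the transient "podman healthcheck run" unit started a few lines earlier; the same state can be read back with podman inspect. A minimal sketch (the .State.Health vs .State.Healthcheck key path differs across podman versions, so both are tried):

```python
import json
import subprocess

raw = subprocess.run(
    ["podman", "inspect", "podman_exporter"],
    capture_output=True, check=True, text=True,
).stdout
state = json.loads(raw)[0]["State"]
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"))  # e.g. "healthy"
```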
Dec 03 01:43:48 compute-0 nova_compute[351485]: 2025-12-03 01:43:48.411 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:43:48 compute-0 nova_compute[351485]: 2025-12-03 01:43:48.412 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4545MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:43:48 compute-0 nova_compute[351485]: 2025-12-03 01:43:48.412 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:43:48 compute-0 nova_compute[351485]: 2025-12-03 01:43:48.413 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:43:48 compute-0 nova_compute[351485]: 2025-12-03 01:43:48.473 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:43:48 compute-0 nova_compute[351485]: 2025-12-03 01:43:48.474 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:43:48 compute-0 nova_compute[351485]: 2025-12-03 01:43:48.502 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:43:48 compute-0 podman[366678]: 2025-12-03 01:43:48.818417479 +0000 UTC m=+0.099554891 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:43:48 compute-0 podman[366679]: 2025-12-03 01:43:48.821401523 +0000 UTC m=+0.096376663 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:43:48 compute-0 ceph-mon[192821]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3484882705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:43:48 compute-0 sudo[366735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gihgvgbmmoghtzliwbqknvgjdlhlgufn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726228.2548208-668-277796717553771/AnsiballZ_stat.py'
Dec 03 01:43:48 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-307494c50137460f.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:43:48 compute-0 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-307494c50137460f.service: Failed with result 'exit-code'.
Dec 03 01:43:48 compute-0 sudo[366735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:43:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863120069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.014 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:43:49 compute-0 python3.9[366740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.027 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:43:49 compute-0 sudo[366735]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.055 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
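A worked example of what this inventory record means to Placement: schedulable capacity per resource class is (total - reserved) * allocation_ratio, so the figures above give the node its effective headroom:

```python
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 53.1
```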
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.057 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.058 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.187 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.188 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.188 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:43:49 compute-0 nova_compute[351485]: 2025-12-03 01:43:49.188 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:43:49 compute-0 sudo[366818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btipwytjbvifnmhnpjrcbhvbwsbpzcda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726228.2548208-668-277796717553771/AnsiballZ_file.py'
Dec 03 01:43:49 compute-0 sudo[366818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:49 compute-0 python3.9[366821]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/openstack_network_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:43:49 compute-0 sudo[366818]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:49 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1863120069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:43:50 compute-0 sudo[366971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaxixosstprbjdoojivltflqmnveayqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726230.1670196-682-3243775634354/AnsiballZ_container_config_data.py'
Dec 03 01:43:50 compute-0 sudo[366971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:50 compute-0 ceph-mon[192821]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:50 compute-0 python3.9[366973]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
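A hypothetical sketch (not the module's actual source) of what this ansible-container_config_data call appears to do, judging from its arguments: collect files matching config_pattern under config_path and parse each as a per-container JSON definition:

```python
import json
from pathlib import Path

config_path = Path("/var/lib/openstack/config/telemetry")
config_pattern = "openstack_network_exporter.json"

configs = {
    p.name: json.loads(p.read_text())
    for p in sorted(config_path.glob(config_pattern))
}
print(list(configs))  # e.g. ['openstack_network_exporter.json']
```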
Dec 03 01:43:50 compute-0 sudo[366971]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:51 compute-0 sudo[367123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwqchfdcikoagzojdfjqigjgyrqvxbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726231.3762496-691-208466549033646/AnsiballZ_container_config_hash.py'
Dec 03 01:43:51 compute-0 sudo[367123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:52 compute-0 python3.9[367125]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:43:52 compute-0 sudo[367123]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:52 compute-0 ceph-mon[192821]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:53 compute-0 sudo[367275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikkeoqbroiiowroogdrhftbstltkhtds ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726232.6946025-701-245258869093475/AnsiballZ_edpm_container_manage.py'
Dec 03 01:43:53 compute-0 sudo[367275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:53 compute-0 python3[367277]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:43:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:53 compute-0 python3[367277]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1",
                                                     "Digest": "sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7",
                                                     "RepoTags": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-08-26T15:52:54.446618393Z",
                                                     "Config": {
                                                          "ExposedPorts": {
                                                               "1981/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "container=oci"
                                                          ],
                                                          "Cmd": [
                                                               "/app/openstack-network-exporter"
                                                          ],
                                                          "WorkingDir": "/",
                                                          "Labels": {
                                                               "architecture": "x86_64",
                                                               "build-date": "2025-08-20T13:12:41",
                                                               "com.redhat.component": "ubi9-minimal-container",
                                                               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                               "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "distribution-scope": "public",
                                                               "io.buildah.version": "1.33.7",
                                                               "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",
                                                               "io.openshift.expose-services": "",
                                                               "io.openshift.tags": "minimal rhel9",
                                                               "maintainer": "Red Hat, Inc.",
                                                               "name": "ubi9-minimal",
                                                               "release": "1755695350",
                                                               "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",
                                                               "url": "https://catalog.redhat.com/en/search?searchType=containers",
                                                               "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",
                                                               "vcs-type": "git",
                                                               "vendor": "Red Hat, Inc.",
                                                               "version": "9.6"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "Red Hat",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 142088877,
                                                     "VirtualSize": 142088877,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/157961e3a1fe369d02893b19044a0e08e15689974ef810b235cb5ec194c7142c/diff:/var/lib/containers/storage/overlay/778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9",
                                                               "sha256:60984b2898b5b4ad1680d36433001b7e2bebb1073775d06b4c2ff80f985caccb",
                                                               "sha256:866ed9f0f685cc1d741f560227443a94926fc22494aa7808be751e7247cda421"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "architecture": "x86_64",
                                                          "build-date": "2025-08-20T13:12:41",
                                                          "com.redhat.component": "ubi9-minimal-container",
                                                          "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                          "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "distribution-scope": "public",
                                                          "io.buildah.version": "1.33.7",
                                                          "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",
                                                          "io.openshift.expose-services": "",
                                                          "io.openshift.tags": "minimal rhel9",
                                                          "maintainer": "Red Hat, Inc.",
                                                          "name": "ubi9-minimal",
                                                          "release": "1755695350",
                                                          "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",
                                                          "url": "https://catalog.redhat.com/en/search?searchType=containers",
                                                          "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",
                                                          "vcs-type": "git",
                                                          "vendor": "Red Hat, Inc.",
                                                          "version": "9.6"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "",
                                                     "History": [
                                                          {
                                                               "created": "2025-08-20T13:14:24.836114247Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.907067406Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL vendor=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.953912498Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL url=\"https://catalog.redhat.com/en/search?searchType=containers\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.99202543Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-minimal-container\"       name=\"ubi9-minimal\"       version=\"9.6\"       distribution-scope=\"public\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.033232759Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.116880439Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of the minimal Red Hat Universal Base Image 9.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.167988017Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL description=\"The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.205286235Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.description=\"The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.239930205Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.display-name=\"Red Hat Universal Base Image 9 Minimal\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.298417937Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.expose-services=\"\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.346108994Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.tags=\"minimal rhel9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.381850293Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container oci",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.998561869Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY dir:e1f22eafd6489859288910ef7585f9d694693aa84a31ba9d54dea9e7a451abe6 in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.169088157Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:b37d593713ee21ad52a4cd1424dc019a24f7966f85df0ac4b86d234302695328 in /etc/yum.repos.d/. ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.222750062Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.44502305Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:58cc94f5b3b2d60de2c77a6ed4b1797dcede502ccdb429a72e7a72d994235b3c in /usr/share/buildinfo/content-sets.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.581849716Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:58cc94f5b3b2d60de2c77a6ed4b1797dcede502ccdb429a72e7a72d994235b3c in /root/buildinfo/content_manifests/content-sets.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.902035614Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"build-date\"=\"2025-08-20T13:12:41\" \"architecture\"=\"x86_64\" \"vcs-type\"=\"git\" \"vcs-ref\"=\"f4b088292653bbf5ca8188a5e59ffd06a8671d4b\" \"release\"=\"1755695350\""
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:52.889456996Z",
                                                               "created_by": "/bin/sh -c microdnf update -y && rm -rf /var/cache/yum",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.116955892Z",
                                                               "created_by": "/bin/sh -c microdnf install -y iproute && microdnf clean all",
                                                               "comment": "FROM registry.access.redhat.com/ubi9/ubi-minimal:latest"
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.314008349Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:fab61bc60c39fae33dbfa4e382d473ceab94ebaf876018d5034ba62f04740767 in /etc/openstack-network-exporter.yaml ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.407547534Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:be836064c1a23a46d9411cf2aafe0d43f5d498cf2fd92e788160ae2e0f30bb86 in /app/openstack-network-exporter ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.420490087Z",
                                                               "created_by": "/bin/sh -c #(nop) MAINTAINER Red Hat",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.432520013Z",
                                                               "created_by": "/bin/sh -c #(nop) EXPOSE 1981",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.48363818Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/app/openstack-network-exporter\"]",
                                                               "author": "Red Hat",
                                                               "comment": "FROM 688666ea38a8"
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
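The JSON array above is podman's image-inspect output relayed verbatim by the edpm_container_manage module. A minimal sketch of reproducing it and pulling out the fields visible in the dump (digest and layer stack), assuming podman is on PATH:

```python
import json
import subprocess

raw = subprocess.run(
    ["podman", "image", "inspect",
     "quay.io/openstack-k8s-operators/"
     "openstack-network-exporter:current-podified"],
    capture_output=True, check=True, text=True,
).stdout
img = json.loads(raw)[0]
print(img["Digest"])                 # sha256:ecd56e67...
print(len(img["RootFS"]["Layers"]))  # 3 layers in the dump above
```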
Dec 03 01:43:54 compute-0 systemd[1]: libpod-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec 03 01:43:54 compute-0 systemd[1]: libpod-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Consumed 5.624s CPU time.
Dec 03 01:43:54 compute-0 podman[367324]: 2025-12-03 01:43:54.030701501 +0000 UTC m=+0.096890969 container died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9)
Dec 03 01:43:54 compute-0 systemd[1]: 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-7b0ab96ed9ee26ed.timer: Deactivated successfully.
Dec 03 01:43:54 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.
Dec 03 01:43:54 compute-0 systemd[1]: 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-7b0ab96ed9ee26ed.service: Failed to open /run/systemd/transient/3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-7b0ab96ed9ee26ed.service: No such file or directory
Dec 03 01:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-userdata-shm.mount: Deactivated successfully.
Dec 03 01:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e-merged.mount: Deactivated successfully.
Dec 03 01:43:54 compute-0 podman[367324]: 2025-12-03 01:43:54.126079305 +0000 UTC m=+0.192268743 container cleanup 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter)
Dec 03 01:43:54 compute-0 python3[367277]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop openstack_network_exporter
Dec 03 01:43:54 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:43:54 compute-0 systemd[1]: 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-7b0ab96ed9ee26ed.timer: Failed to open /run/systemd/transient/3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-7b0ab96ed9ee26ed.timer: No such file or directory
Dec 03 01:43:54 compute-0 systemd[1]: 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-7b0ab96ed9ee26ed.service: Failed to open /run/systemd/transient/3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-7b0ab96ed9ee26ed.service: No such file or directory
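The two "Failed to open /run/systemd/transient/..." messages above are a side effect of how podman healthchecks are wired into systemd: each check runs from a transient <container-id>-<suffix>.timer/.service pair under /run/systemd/transient, and those unit files vanish together with the container, so systemd can no longer re-read them. A minimal sketch, assuming read access to that directory, that lists whichever transient units currently exist:

    import os

    # Podman healthcheck timers/services are transient systemd units; they live in
    # /run/systemd/transient and are named <container-id>-<suffix>.timer/.service.
    transient_dir = "/run/systemd/transient"
    for name in sorted(os.listdir(transient_dir)):
        if name.endswith((".timer", ".service")):
            print(name)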
Dec 03 01:43:54 compute-0 podman[367348]: 2025-12-03 01:43:54.26806414 +0000 UTC m=+0.108045233 container remove 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 01:43:54 compute-0 podman[367353]: Error: no container with ID 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 found in database: no such container
Dec 03 01:43:54 compute-0 python3[367277]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force openstack_network_exporter
Dec 03 01:43:54 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Control process exited, code=exited, status=125/n/a
Dec 03 01:43:54 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 03 01:43:54 compute-0 podman[367374]: 2025-12-03 01:43:54.405855817 +0000 UTC m=+0.099617735 container create 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git)
Dec 03 01:43:54 compute-0 podman[367374]: 2025-12-03 01:43:54.35339928 +0000 UTC m=+0.047161248 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 03 01:43:54 compute-0 python3[367277]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
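Taken together, the PODMAN-CONTAINER-DEBUG lines record the full cycle that ansible-edpm_container_manage drives for this container: podman stop, podman rm --force, podman create, and (a few lines below) podman start. A minimal Python sketch of that cycle, assuming only that podman is on PATH and using a small subset of the flags logged above:

    import subprocess

    NAME = "openstack_network_exporter"
    IMAGE = "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"

    def podman(*args, check=True):
        # Echo the command the way the PODMAN-CONTAINER-DEBUG lines do, then run it.
        print("PODMAN-CONTAINER-DEBUG: podman " + " ".join(args))
        return subprocess.run(["podman", *args], check=check)

    podman("stop", NAME, check=False)           # stop tolerates an already-stopped container
    podman("rm", "--force", NAME, check=False)  # rm --force tolerates a missing one
    podman("create", "--name", NAME,
           "--network", "host", "--privileged=true",
           "--publish", "9105:9105",
           "--env", "OS_ENDPOINT_TYPE=internal",
           IMAGE)
    podman("start", NAME)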
Dec 03 01:43:54 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Scheduled restart job, restart counter is at 1.
Dec 03 01:43:54 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 03 01:43:54 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 03 01:43:54 compute-0 systemd[1]: Started libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope.
Dec 03 01:43:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43e89b26773bbb5521b34a448b068b54c329657c493af271c198f3e0b477d0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43e89b26773bbb5521b34a448b068b54c329657c493af271c198f3e0b477d0/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43e89b26773bbb5521b34a448b068b54c329657c493af271c198f3e0b477d0/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:43:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.
Dec 03 01:43:54 compute-0 podman[367387]: 2025-12-03 01:43:54.679456616 +0000 UTC m=+0.230445196 container init 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal)
Dec 03 01:43:54 compute-0 podman[367387]: 2025-12-03 01:43:54.724303197 +0000 UTC m=+0.275291747 container start 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *bridge.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *coverage.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *datapath.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *iface.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *memory.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *ovnnorthd.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *ovn.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *ovsdbserver.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *pmd_perf.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *pmd_rxq.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: INFO    01:43:54 main.go:48: registering *vswitch.Collector
Dec 03 01:43:54 compute-0 openstack_network_exporter[367414]: NOTICE  01:43:54 main.go:76: listening on https://:9105/metrics
Dec 03 01:43:54 compute-0 podman[367393]: openstack_network_exporter
Dec 03 01:43:54 compute-0 python3[367277]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start openstack_network_exporter
Dec 03 01:43:54 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 03 01:43:54 compute-0 podman[367422]: 2025-12-03 01:43:54.860028766 +0000 UTC m=+0.121831376 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
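With the exporter reporting "listening on https://:9105/metrics" and the healthcheck returning health_status=healthy, the endpoint can be scraped directly. A small sketch; the hostname and the disabled certificate verification are illustration-only assumptions (the real TLS material is mounted from /var/lib/openstack/certs/telemetry/default):

    import ssl
    import urllib.request

    # The exporter serves TLS on :9105 (see "listening on https://:9105/metrics").
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # sketch only; trust the mounted CA in real use

    with urllib.request.urlopen("https://compute-0:9105/metrics", context=ctx) as resp:
        for line in resp.read().decode().splitlines():
            if not line.startswith("#"):  # skip HELP/TYPE comment lines
                print(line)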
Dec 03 01:43:54 compute-0 ceph-mon[192821]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:55 compute-0 sudo[367275]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:55 compute-0 sudo[367617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcktkkfkaxzfndhardzbjydgyrdwkcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726235.3422995-709-186404900961869/AnsiballZ_stat.py'
Dec 03 01:43:55 compute-0 sudo[367617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:56 compute-0 python3.9[367619]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
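The stat call above probes /etc/sysconfig/podman_drop_in with get_checksum=True and checksum_algorithm=sha1. Roughly the check it performs, sketched with the standard library:

    import hashlib
    import os

    path = "/etc/sysconfig/podman_drop_in"
    if os.path.isfile(path):
        with open(path, "rb") as f:
            # Same digest ansible-stat reports for checksum_algorithm=sha1.
            print(hashlib.sha1(f.read()).hexdigest())
    else:
        print("path does not exist")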
Dec 03 01:43:56 compute-0 sudo[367617]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:56 compute-0 ceph-mon[192821]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:57 compute-0 sudo[367771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrerihszpeghruiuecrttcfejnkozko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726236.6213212-718-72667418431681/AnsiballZ_file.py'
Dec 03 01:43:57 compute-0 sudo[367771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:57 compute-0 python3.9[367773]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:57 compute-0 sudo[367771]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:43:58 compute-0 sudo[367924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovqpmvxfqvdujlosdjlnntalyvtjqofo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726237.5395143-718-181028273526183/AnsiballZ_copy.py'
Dec 03 01:43:58 compute-0 sudo[367924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:43:58 compute-0 python3.9[367926]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726237.5395143-718-181028273526183/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:43:58 compute-0 sudo[367924]: pam_unix(sudo:session): session closed for user root
Dec 03 01:43:58 compute-0 ceph-mon[192821]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.943796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726238943841, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 3420809, "memory_usage": 3489664, "flush_reason": "Manual Compaction"}
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 03 01:43:58 compute-0 podman[367952]: 2025-12-03 01:43:58.950698562 +0000 UTC m=+0.197520212 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726238980650, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3355758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16303, "largest_seqno": 18338, "table_properties": {"data_size": 3346585, "index_size": 5795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18074, "raw_average_key_size": 19, "raw_value_size": 3328314, "raw_average_value_size": 3633, "num_data_blocks": 263, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726008, "oldest_key_time": 1764726008, "file_creation_time": 1764726238, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 36974 microseconds, and 15005 cpu microseconds.
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.980748) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3355758 bytes OK
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.980794) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.983844) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.983868) EVENT_LOG_v1 {"time_micros": 1764726238983860, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.983890) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3412330, prev total WAL file size 3412330, number of live WAL files 2.
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.986090) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3277KB)], [38(7585KB)]
Dec 03 01:43:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726238986164, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11123278, "oldest_snapshot_seqno": -1}
Dec 03 01:43:59 compute-0 sudo[368028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrujhofaappmlsnnavhxseudbwkzodgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726237.5395143-718-181028273526183/AnsiballZ_systemd.py'
Dec 03 01:43:59 compute-0 sudo[368028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4374 keys, 9327210 bytes, temperature: kUnknown
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726239070268, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9327210, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9294255, "index_size": 20941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 105722, "raw_average_key_size": 24, "raw_value_size": 9211477, "raw_average_value_size": 2105, "num_data_blocks": 891, "num_entries": 4374, "num_filter_entries": 4374, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726238, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.070602) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9327210 bytes
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.072828) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.1 rd, 110.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4888, records dropped: 514 output_compression: NoCompression
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.072856) EVENT_LOG_v1 {"time_micros": 1764726239072842, "job": 18, "event": "compaction_finished", "compaction_time_micros": 84182, "compaction_time_cpu_micros": 26879, "output_level": 6, "num_output_files": 1, "total_output_size": 9327210, "num_input_records": 4888, "num_output_records": 4374, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726239073892, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726239076431, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:58.985860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.076756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.076766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.076771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.076775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:43:59 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:43:59.076779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
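The rocksdb EVENT_LOG_v1 lines that ceph-mon emits carry a JSON payload after the "EVENT_LOG_v1 " marker, so flush and compaction activity can be extracted from the journal mechanically. A sketch that reads journal text on stdin in the format shown above:

    import json
    import sys

    MARK = "EVENT_LOG_v1 "

    # e.g.: journalctl | python3 rocksdb_events.py
    for line in sys.stdin:
        i = line.find(MARK)
        if i == -1:
            continue
        event = json.loads(line[i + len(MARK):])
        # Events seen above: flush_started, flush_finished, table_file_creation,
        # compaction_started, compaction_finished, table_file_deletion.
        summary = {k: event[k] for k in ("job", "total_data_size",
                                         "compaction_time_micros") if k in event}
        print(event["event"], summary)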
Dec 03 01:43:59 compute-0 python3.9[368030]: ansible-systemd Invoked with state=started name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:43:59 compute-0 sudo[368028]: pam_unix(sudo:session): session closed for user root
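The ansible-systemd task just above (state=started, enabled=True) is the step that activates the unit file copied into /etc/systemd/system earlier. Its effect, sketched as the equivalent systemctl calls (assuming root and that any needed daemon-reload has already happened):

    import subprocess

    UNIT = "edpm_openstack_network_exporter.service"
    subprocess.run(["systemctl", "enable", UNIT], check=True)
    subprocess.run(["systemctl", "start", UNIT], check=True)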
Dec 03 01:43:59 compute-0 sshd-session[367927]: Received disconnect from 173.249.50.59 port 43696:11: Bye Bye [preauth]
Dec 03 01:43:59 compute-0 sshd-session[367927]: Disconnected from authenticating user root 173.249.50.59 port 43696 [preauth]
Dec 03 01:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:43:59.603 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:43:59.603 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:43:59.604 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
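Those three ovn_metadata_agent lines are oslo.concurrency's lock tracing: one line when the lock is requested, one when it is acquired (with the wait time), and one when it is released (with the hold time), all around ProcessMonitor._check_child_processes. The pattern producing them, sketched with oslo.concurrency (assuming the library is installed and debug logging is enabled):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Critical section; lockutils logs the acquire/acquired/released
        # trio seen above around this call.
        pass

    check_child_processes()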
Dec 03 01:43:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:43:59 compute-0 podman[158098]: time="2025-12-03T01:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:43:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42582 "" "Go-http-client/1.1"
Dec 03 01:43:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8094 "" "Go-http-client/1.1"
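The podman[158098] lines are the podman system service answering libpod REST calls, here a client listing all containers and then pulling their stats. The same query can be issued over the API socket with only the standard library; the socket path /run/podman/podman.sock is an assumption, since the log does not name it:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")  # path as logged
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])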
Dec 03 01:43:59 compute-0 podman[368081]: 2025-12-03 01:43:59.880421276 +0000 UTC m=+0.115413103 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:43:59 compute-0 podman[368080]: 2025-12-03 01:43:59.882988549 +0000 UTC m=+0.127588449 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3)
Dec 03 01:44:00 compute-0 sudo[368221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cankfrsutbzppabfjtflezsryrhsdvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726239.7297006-738-74433247763191/AnsiballZ_systemd.py'
Dec 03 01:44:00 compute-0 sudo[368221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:00 compute-0 python3.9[368223]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:44:00 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Dec 03 01:44:00 compute-0 systemd[1]: libpod-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec 03 01:44:00 compute-0 podman[368227]: 2025-12-03 01:44:00.757750924 +0000 UTC m=+0.101574701 container died 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:44:00 compute-0 systemd[1]: 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b-61a6636f72b1447c.timer: Deactivated successfully.
Dec 03 01:44:00 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.
Dec 03 01:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b-userdata-shm.mount: Deactivated successfully.
Dec 03 01:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb43e89b26773bbb5521b34a448b068b54c329657c493af271c198f3e0b477d0-merged.mount: Deactivated successfully.
Dec 03 01:44:00 compute-0 podman[368227]: 2025-12-03 01:44:00.840766428 +0000 UTC m=+0.184590215 container cleanup 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:44:00 compute-0 podman[368227]: openstack_network_exporter
Dec 03 01:44:00 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 03 01:44:00 compute-0 systemd[1]: libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec 03 01:44:00 compute-0 podman[368251]: openstack_network_exporter
Dec 03 01:44:00 compute-0 ceph-mon[192821]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:00 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 03 01:44:00 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 03 01:44:00 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 03 01:44:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43e89b26773bbb5521b34a448b068b54c329657c493af271c198f3e0b477d0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43e89b26773bbb5521b34a448b068b54c329657c493af271c198f3e0b477d0/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43e89b26773bbb5521b34a448b068b54c329657c493af271c198f3e0b477d0/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.
Dec 03 01:44:01 compute-0 podman[368264]: 2025-12-03 01:44:01.212940702 +0000 UTC m=+0.208730630 container init 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *bridge.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *coverage.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *datapath.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *iface.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *memory.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *ovnnorthd.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *ovn.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *ovsdbserver.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *pmd_perf.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *pmd_rxq.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: INFO    01:44:01 main.go:48: registering *vswitch.Collector
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: NOTICE  01:44:01 main.go:76: listening on https://:9105/metrics
Dec 03 01:44:01 compute-0 podman[368264]: 2025-12-03 01:44:01.25730866 +0000 UTC m=+0.253098578 container start 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41)
Dec 03 01:44:01 compute-0 podman[368264]: openstack_network_exporter
Dec 03 01:44:01 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 03 01:44:01 compute-0 sudo[368221]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:01 compute-0 podman[368288]: 2025-12-03 01:44:01.380629316 +0000 UTC m=+0.112288904 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, config_id=edpm, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: ERROR   01:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:44:01 compute-0 openstack_network_exporter[368278]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
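The ERROR lines above all come from the exporter's appctl helper: it looks for <daemon>.<pid>.ctl control sockets in the run directories mapped into the container, and the pmd_perf/pmd_rxq collectors additionally need a userspace (netdev) datapath to query. On a compute node much of this is likely expected noise, since ovn-northd is a control-plane daemon that does not run here and no netdev datapath is configured. A host-side sketch of the same socket probe, assuming the usual <daemon>.<pid>.ctl naming:

# Host-side equivalents of the container paths /run/openvswitch and /run/ovn,
# taken from the 'volumes' list in the container start entry above (sketch).
import glob

patterns = (
    "/var/run/openvswitch/ovsdb-server.*.ctl",
    "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
)
for pattern in patterns:
    hits = glob.glob(pattern)
    print(pattern, "->", hits if hits else "no control socket files found")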
Dec 03 01:44:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:02 compute-0 sudo[368458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejkemtkxikrswbfzakjlayfgdncfkuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726241.6423643-746-183857442326560/AnsiballZ_find.py'
Dec 03 01:44:02 compute-0 sudo[368458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:02 compute-0 python3.9[368460]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:44:02 compute-0 sudo[368458]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.762702) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726242762750, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 276, "num_deletes": 250, "total_data_size": 68030, "memory_usage": 74368, "flush_reason": "Manual Compaction"}
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726242766937, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 67623, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18339, "largest_seqno": 18614, "table_properties": {"data_size": 65697, "index_size": 154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5090, "raw_average_key_size": 19, "raw_value_size": 62013, "raw_average_value_size": 234, "num_data_blocks": 7, "num_entries": 265, "num_filter_entries": 265, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726239, "oldest_key_time": 1764726239, "file_creation_time": 1764726242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4310 microseconds, and 1721 cpu microseconds.
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.767013) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 67623 bytes OK
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.767033) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.769743) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.769765) EVENT_LOG_v1 {"time_micros": 1764726242769758, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.769784) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 65944, prev total WAL file size 65944, number of live WAL files 2.
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.770726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(66KB)], [41(9108KB)]
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726242770777, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9394833, "oldest_snapshot_seqno": -1}
Dec 03 01:44:02 compute-0 sshd-session[367781]: Received disconnect from 45.78.219.140 port 43372:11: Bye Bye [preauth]
Dec 03 01:44:02 compute-0 sshd-session[367781]: Disconnected from authenticating user root 45.78.219.140 port 43372 [preauth]
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4132 keys, 6107721 bytes, temperature: kUnknown
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726242848513, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6107721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6080985, "index_size": 15299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 101127, "raw_average_key_size": 24, "raw_value_size": 6006932, "raw_average_value_size": 1453, "num_data_blocks": 646, "num_entries": 4132, "num_filter_entries": 4132, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.849322) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6107721 bytes
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.852603) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.9 rd, 78.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(229.2) write-amplify(90.3) OK, records in: 4639, records dropped: 507 output_compression: NoCompression
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.852632) EVENT_LOG_v1 {"time_micros": 1764726242852619, "job": 20, "event": "compaction_finished", "compaction_time_micros": 78342, "compaction_time_cpu_micros": 39729, "output_level": 6, "num_output_files": 1, "total_output_size": 6107721, "num_input_records": 4639, "num_output_records": 4132, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726242854784, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726242859201, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.770412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.860036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.860042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.860045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.860047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:44:02 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:44:02.860050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
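The amplification figures in the JOB 20 summary above follow directly from the logged byte counts: table #43 (67623 bytes) was the Level-0 input, "input_data_size" was 9394833 bytes (the L0 file plus the 9108KB L6 file #41), and the compacted output table #44 came to 6107721 bytes. A quick arithmetic check:

# Re-derive the JOB 20 amplification figures from the byte counts logged above.
l0_input = 67623        # table #43, the Level-0 flush output
total_read = 9394833    # "input_data_size": L0 file plus the L6 file (#41)
written = 6107721       # table #44, the compacted L6 output

write_amp = written / l0_input                  # bytes written per L0 input byte
rw_amp = (total_read + written) / l0_input      # bytes read+written per L0 input byte
print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
# -> write-amplify 90.3, read-write-amplify 229.2, matching the compaction summary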
Dec 03 01:44:02 compute-0 ceph-mon[192821]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:03 compute-0 sudo[368537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:03 compute-0 sudo[368537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:03 compute-0 sudo[368537]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:03 compute-0 sudo[368585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:44:03 compute-0 sudo[368585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:03 compute-0 sudo[368585]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:03 compute-0 podman[368633]: 2025-12-03 01:44:03.712302114 +0000 UTC m=+0.105592385 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:44:03 compute-0 sudo[368698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wchbauzpaevgzxweitqtdhsxgnflymkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726243.0226543-756-240127139189095/AnsiballZ_podman_container_info.py'
Dec 03 01:44:03 compute-0 sudo[368698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:03 compute-0 sudo[368646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:03 compute-0 sudo[368646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:03 compute-0 sudo[368646]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:03 compute-0 sudo[368710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:44:03 compute-0 sudo[368710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:03 compute-0 python3.9[368708]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 03 01:44:04 compute-0 sudo[368698]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:04 compute-0 podman[368892]: 2025-12-03 01:44:04.683304889 +0000 UTC m=+0.119298494 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:44:04 compute-0 podman[368892]: 2025-12-03 01:44:04.807502541 +0000 UTC m=+0.243496076 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:44:04 compute-0 ceph-mon[192821]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:05 compute-0 sudo[369035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqzlovslptotsjkzcwogsehibifpiqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726244.4151714-764-280414689522307/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:05 compute-0 sudo[369035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:05 compute-0 python3.9[369044]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:05 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:44:05 compute-0 podman[369078]: 2025-12-03 01:44:05.589056832 +0000 UTC m=+0.143212972 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:44:05 compute-0 podman[369078]: 2025-12-03 01:44:05.601411572 +0000 UTC m=+0.155567702 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:44:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:05 compute-0 sudo[369035]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:05 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:44:05 compute-0 sudo[368710]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:44:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:44:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:06 compute-0 sudo[369175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:06 compute-0 sudo[369175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:06 compute-0 sudo[369175]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:06 compute-0 sudo[369233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:44:06 compute-0 sudo[369233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:06 compute-0 sudo[369233]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:06 compute-0 sshd-session[369213]: Invalid user usuario2 from 34.66.72.251 port 50128
Dec 03 01:44:06 compute-0 sshd-session[369213]: Received disconnect from 34.66.72.251 port 50128:11: Bye Bye [preauth]
Dec 03 01:44:06 compute-0 sshd-session[369213]: Disconnected from invalid user usuario2 34.66.72.251 port 50128 [preauth]
Dec 03 01:44:06 compute-0 sudo[369285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:06 compute-0 sudo[369285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:06 compute-0 sudo[369285]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:06 compute-0 sudo[369339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:44:06 compute-0 sudo[369339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:06 compute-0 sudo[369397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktaltiyjkjfnydauzpmuuiqzcrlfgcse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726245.9690912-772-122184169256572/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:06 compute-0 sudo[369397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:06 compute-0 podman[369399]: 2025-12-03 01:44:06.663686535 +0000 UTC m=+0.111323638 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 01:44:06 compute-0 python3.9[369400]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:06 compute-0 ceph-mon[192821]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:06 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:44:06 compute-0 podman[369433]: 2025-12-03 01:44:06.919128188 +0000 UTC m=+0.160718398 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:44:06 compute-0 podman[369433]: 2025-12-03 01:44:06.956090816 +0000 UTC m=+0.197681026 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:44:07 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:44:07 compute-0 sudo[369397]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:07 compute-0 sudo[369339]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:44:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:44:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:44:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1955fe29-9618-4acd-97e7-276d0d3b1356 does not exist
Dec 03 01:44:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 091d948e-4502-404b-88c4-77bb9f1cb566 does not exist
Dec 03 01:44:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 41d0086f-1ace-461d-8571-0879e5402d63 does not exist
Dec 03 01:44:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:44:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:44:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:44:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:44:07 compute-0 sudo[369502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:07 compute-0 sudo[369502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:07 compute-0 sudo[369502]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:07 compute-0 sudo[369551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:44:07 compute-0 sudo[369551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:07 compute-0 sudo[369551]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:07 compute-0 sudo[369605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:07 compute-0 sudo[369605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:07 compute-0 sudo[369605]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:07 compute-0 sudo[369653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:44:07 compute-0 sudo[369653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:07 compute-0 sudo[369727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmiemwagzexcbjxdeqtmbgwjqmhxzsxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726247.3364553-780-102459113924743/AnsiballZ_file.py'
Dec 03 01:44:07 compute-0 sudo[369727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:44:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:44:08 compute-0 python3.9[369729]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:08 compute-0 sudo[369727]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:08 compute-0 podman[369793]: 2025-12-03 01:44:08.363034193 +0000 UTC m=+0.093897514 container create cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:44:08 compute-0 podman[369793]: 2025-12-03 01:44:08.328991867 +0000 UTC m=+0.059855248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:44:08 compute-0 systemd[1]: Started libpod-conmon-cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817.scope.
Dec 03 01:44:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:44:08 compute-0 podman[369793]: 2025-12-03 01:44:08.519355955 +0000 UTC m=+0.250219336 container init cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:44:08 compute-0 podman[369793]: 2025-12-03 01:44:08.537922321 +0000 UTC m=+0.268785632 container start cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec 03 01:44:08 compute-0 podman[369793]: 2025-12-03 01:44:08.54460694 +0000 UTC m=+0.275470281 container attach cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:44:08 compute-0 busy_moore[369833]: 167 167
Dec 03 01:44:08 compute-0 systemd[1]: libpod-cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817.scope: Deactivated successfully.
Dec 03 01:44:08 compute-0 podman[369793]: 2025-12-03 01:44:08.550490707 +0000 UTC m=+0.281354018 container died cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5bed39167720363ab50b3f88f2d11bb7fdfa801df1f27066033aae0f668d6c2-merged.mount: Deactivated successfully.
Dec 03 01:44:08 compute-0 podman[369793]: 2025-12-03 01:44:08.630398443 +0000 UTC m=+0.361261764 container remove cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:44:08 compute-0 systemd[1]: libpod-conmon-cd6f2abc4e31d2d3366ed0068b965c115ae4b50cfc8c757ee6700ddb91b3a817.scope: Deactivated successfully.
Dec 03 01:44:08 compute-0 podman[369927]: 2025-12-03 01:44:08.906781221 +0000 UTC m=+0.078867918 container create bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:44:08 compute-0 podman[369927]: 2025-12-03 01:44:08.872277192 +0000 UTC m=+0.044363979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:44:08 compute-0 systemd[1]: Started libpod-conmon-bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b.scope.
Dec 03 01:44:09 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723d8a8bd7424a92aa111255db75cee043069f3fe59988df1038b44813163a06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723d8a8bd7424a92aa111255db75cee043069f3fe59988df1038b44813163a06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723d8a8bd7424a92aa111255db75cee043069f3fe59988df1038b44813163a06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723d8a8bd7424a92aa111255db75cee043069f3fe59988df1038b44813163a06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723d8a8bd7424a92aa111255db75cee043069f3fe59988df1038b44813163a06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
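Each of these remount warnings flags the same 32-bit timestamp ceiling: 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit, which XFS filesystems formatted without the bigtime feature still carry. One line confirms the date in the warning:

# 0x7fffffff is the largest 32-bit signed time_t; confirm the date the kernel means.
from datetime import datetime, timezone

print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00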
Dec 03 01:44:09 compute-0 sudo[369974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbomxzffottweevaxxljueuwgbvenpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726248.4189005-789-1398930386334/AnsiballZ_podman_container_info.py'
Dec 03 01:44:09 compute-0 sudo[369974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:09 compute-0 podman[369927]: 2025-12-03 01:44:09.099157136 +0000 UTC m=+0.271243883 container init bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:44:09 compute-0 podman[369927]: 2025-12-03 01:44:09.126632805 +0000 UTC m=+0.298719522 container start bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:44:09 compute-0 podman[369927]: 2025-12-03 01:44:09.132260504 +0000 UTC m=+0.304347221 container attach bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:44:09 compute-0 ceph-mon[192821]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:09 compute-0 python3.9[369977]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 03 01:44:09 compute-0 sudo[369974]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:10 compute-0 sudo[370162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeopzvggdoycvlxdxxngsmjsfywznczc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726249.7793796-797-253925284801315/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:10 compute-0 sudo[370162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:10 compute-0 jovial_sutherland[369971]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:44:10 compute-0 jovial_sutherland[369971]: --> relative data size: 1.0
Dec 03 01:44:10 compute-0 jovial_sutherland[369971]: --> All data devices are unavailable
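This jovial_sutherland output is the report from the cephadm-driven "ceph-volume lvm batch" command issued a few entries earlier: three pre-created LVs were passed in and all were judged unavailable, which typically means ceph-volume considers them already prepared or otherwise rejected by its filters. A tiny parser for these report lines, as a sketch (a hypothetical helper; ceph-volume can also emit this report in JSON form):

# Parse the ceph-volume "lvm batch" report lines seen above (hypothetical helper).
import re

report = """\
--> passed data devices: 0 physical, 3 LVM
--> relative data size: 1.0
--> All data devices are unavailable
"""

m = re.search(r"passed data devices: (\d+) physical, (\d+) LVM", report)
physical, lvm = map(int, m.groups())
unavailable = "All data devices are unavailable" in report
print(f"{physical} physical + {lvm} LVM devices; all unavailable: {unavailable}")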
Dec 03 01:44:10 compute-0 systemd[1]: libpod-bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b.scope: Deactivated successfully.
Dec 03 01:44:10 compute-0 systemd[1]: libpod-bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b.scope: Consumed 1.207s CPU time.
Dec 03 01:44:10 compute-0 conmon[369971]: conmon bec5d4cc68b10c53ae1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b.scope/container/memory.events
Dec 03 01:44:10 compute-0 podman[369927]: 2025-12-03 01:44:10.398346286 +0000 UTC m=+1.570433073 container died bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-723d8a8bd7424a92aa111255db75cee043069f3fe59988df1038b44813163a06-merged.mount: Deactivated successfully.
Dec 03 01:44:10 compute-0 podman[369927]: 2025-12-03 01:44:10.507248394 +0000 UTC m=+1.679335091 container remove bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:44:10 compute-0 systemd[1]: libpod-conmon-bec5d4cc68b10c53ae1b522caca46b430d4a7f80303d86fa38c8606e8985321b.scope: Deactivated successfully.
Dec 03 01:44:10 compute-0 sudo[369653]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:10 compute-0 python3.9[370166]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:10 compute-0 sudo[370183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:10 compute-0 sudo[370183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:10 compute-0 sudo[370183]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:10 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:44:10 compute-0 podman[370189]: 2025-12-03 01:44:10.791232197 +0000 UTC m=+0.155232853 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible)
Dec 03 01:44:10 compute-0 podman[370189]: 2025-12-03 01:44:10.824151201 +0000 UTC m=+0.188151837 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
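The exec/exec_died pair above corresponds to the ansible-containers.podman.podman_container_exec task logged at 01:44:10, which runs `id -u` inside ceilometer_agent_compute; a matching `id -g` probe follows at 01:44:12. Outside of Ansible the same probe is a plain podman exec; a minimal sketch, with only the container name taken from the log and everything else an assumption:

    import subprocess

    def container_uid_gid(name: str = "ceilometer_agent_compute"):
        # Equivalent of the two podman_container_exec tasks seen in this log:
        # run `id -u` and `id -g` inside the named container.
        uid = subprocess.run(["podman", "exec", name, "id", "-u"],
                             capture_output=True, check=True, text=True).stdout.strip()
        gid = subprocess.run(["podman", "exec", name, "id", "-g"],
                             capture_output=True, check=True, text=True).stdout.strip()
        return uid, gid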
Dec 03 01:44:10 compute-0 sudo[370220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:44:10 compute-0 sudo[370220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:10 compute-0 sudo[370220]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:10 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:44:10 compute-0 sudo[370162]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:10 compute-0 sudo[370262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:10 compute-0 sudo[370262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:10 compute-0 sudo[370262]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:11 compute-0 sudo[370307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:44:11 compute-0 sudo[370307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:11 compute-0 ceph-mon[192821]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:11 compute-0 podman[370425]: 2025-12-03 01:44:11.586227671 +0000 UTC m=+0.072386254 container create f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:44:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:11 compute-0 systemd[1]: Started libpod-conmon-f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167.scope.
Dec 03 01:44:11 compute-0 podman[370425]: 2025-12-03 01:44:11.56045799 +0000 UTC m=+0.046616593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:44:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:44:11 compute-0 podman[370425]: 2025-12-03 01:44:11.719738507 +0000 UTC m=+0.205897140 container init f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:44:11 compute-0 podman[370425]: 2025-12-03 01:44:11.738500699 +0000 UTC m=+0.224659262 container start f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:44:11 compute-0 podman[370425]: 2025-12-03 01:44:11.745219279 +0000 UTC m=+0.231377912 container attach f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:44:11 compute-0 dazzling_sutherland[370441]: 167 167
Dec 03 01:44:11 compute-0 systemd[1]: libpod-f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167.scope: Deactivated successfully.
Dec 03 01:44:11 compute-0 podman[370425]: 2025-12-03 01:44:11.751038094 +0000 UTC m=+0.237196677 container died f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a53ee9204d0036e87c5d2fbc9c92f0adde77a0d5637a51d835405b1e6cd33a8b-merged.mount: Deactivated successfully.
Dec 03 01:44:11 compute-0 podman[370425]: 2025-12-03 01:44:11.828292275 +0000 UTC m=+0.314450868 container remove f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:44:11 compute-0 systemd[1]: libpod-conmon-f5a4ec5ec4b1550e5d4f4c961bff51eb02f0120e53833decb1366efaf49f4167.scope: Deactivated successfully.
Dec 03 01:44:12 compute-0 podman[370507]: 2025-12-03 01:44:12.123765473 +0000 UTC m=+0.083802847 container create 05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 01:44:12 compute-0 podman[370507]: 2025-12-03 01:44:12.098205859 +0000 UTC m=+0.058243273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:44:12 compute-0 systemd[1]: Started libpod-conmon-05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0.scope.
Dec 03 01:44:12 compute-0 sudo[370550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysbwmmrpbikxvjyllnlamffrffiewhyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726251.1555579-805-194980581343415/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:12 compute-0 sudo[370550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:12 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04abb10e3f242561109c13a099a23bfae0c17b74a5292ff184ced085f257a3a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04abb10e3f242561109c13a099a23bfae0c17b74a5292ff184ced085f257a3a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04abb10e3f242561109c13a099a23bfae0c17b74a5292ff184ced085f257a3a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04abb10e3f242561109c13a099a23bfae0c17b74a5292ff184ced085f257a3a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:12 compute-0 podman[370507]: 2025-12-03 01:44:12.280646461 +0000 UTC m=+0.240683845 container init 05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:44:12 compute-0 podman[370507]: 2025-12-03 01:44:12.310164448 +0000 UTC m=+0.270201812 container start 05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:44:12 compute-0 podman[370507]: 2025-12-03 01:44:12.316254031 +0000 UTC m=+0.276291395 container attach 05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:44:12 compute-0 python3.9[370557]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:12 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:44:12 compute-0 podman[370560]: 2025-12-03 01:44:12.67202748 +0000 UTC m=+0.162732456 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Dec 03 01:44:12 compute-0 podman[370560]: 2025-12-03 01:44:12.707826235 +0000 UTC m=+0.198531181 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 03 01:44:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:12 compute-0 sudo[370550]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:12 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]: {
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:     "0": [
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:         {
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "devices": [
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "/dev/loop3"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             ],
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_name": "ceph_lv0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_size": "21470642176",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "name": "ceph_lv0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "tags": {
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cluster_name": "ceph",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.crush_device_class": "",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.encrypted": "0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osd_id": "0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.type": "block",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.vdo": "0"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             },
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "type": "block",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "vg_name": "ceph_vg0"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:         }
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:     ],
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:     "1": [
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:         {
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "devices": [
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "/dev/loop4"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             ],
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_name": "ceph_lv1",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_size": "21470642176",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "name": "ceph_lv1",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "tags": {
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cluster_name": "ceph",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.crush_device_class": "",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.encrypted": "0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osd_id": "1",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.type": "block",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.vdo": "0"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             },
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "type": "block",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "vg_name": "ceph_vg1"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:         }
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:     ],
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:     "2": [
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:         {
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "devices": [
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "/dev/loop5"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             ],
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_name": "ceph_lv2",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_size": "21470642176",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "name": "ceph_lv2",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "tags": {
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.cluster_name": "ceph",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.crush_device_class": "",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.encrypted": "0",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osd_id": "2",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.type": "block",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:                 "ceph.vdo": "0"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             },
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "type": "block",
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:             "vg_name": "ceph_vg2"
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:         }
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]:     ]
Dec 03 01:44:13 compute-0 upbeat_shaw[370554]: }
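The JSON emitted by upbeat_shaw above is the result of the `ceph-volume ... lvm list --format json` call logged at 01:44:11: a map from OSD id to a list of logical-volume records, with the authoritative metadata duplicated in each record's "tags" object. A minimal sketch, assuming the payload has been captured to a string exactly as printed, that reduces it to an osd_id / device / fsid table:

    import json

    def summarize_lvm_list(lvm_list_json: str):
        # lvm list is keyed by OSD id ("0", "1", "2" above); each value is a
        # list of LV records whose tags carry the OSD metadata.
        data = json.loads(lvm_list_json)
        rows = []
        for osd_id, lvs in sorted(data.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                rows.append((int(osd_id), lv["lv_path"], lv["tags"]["ceph.osd_fsid"]))
        return rows

    # For the payload above this yields:
    #   (0, "/dev/ceph_vg0/ceph_lv0", "551e0f4a-0b7e-47cf-9522-b82f94d4038c")
    #   (1, "/dev/ceph_vg1/ceph_lv1", "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18")
    #   (2, "/dev/ceph_vg2/ceph_lv2", "2ebf7eac-7883-4286-84a2-653e10a1ae8a")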
Dec 03 01:44:13 compute-0 systemd[1]: libpod-05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0.scope: Deactivated successfully.
Dec 03 01:44:13 compute-0 podman[370507]: 2025-12-03 01:44:13.179094988 +0000 UTC m=+1.139132422 container died 05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:44:13 compute-0 ceph-mon[192821]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-04abb10e3f242561109c13a099a23bfae0c17b74a5292ff184ced085f257a3a0-merged.mount: Deactivated successfully.
Dec 03 01:44:13 compute-0 podman[370507]: 2025-12-03 01:44:13.276886551 +0000 UTC m=+1.236923955 container remove 05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:44:13 compute-0 systemd[1]: libpod-conmon-05accec3bec05c5b7d4a822634684159d62688e67adf05abea7d3bae9ede84c0.scope: Deactivated successfully.
Dec 03 01:44:13 compute-0 sudo[370307]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:13 compute-0 sudo[370680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:13 compute-0 sudo[370680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:13 compute-0 sudo[370680]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:13 compute-0 sudo[370705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:44:13 compute-0 sudo[370705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:13 compute-0 sudo[370705]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:13 compute-0 sudo[370730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:13 compute-0 sudo[370730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:13 compute-0 sudo[370730]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:13 compute-0 sudo[370755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:44:13 compute-0 sudo[370755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:14 compute-0 sudo[370888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvfcbejpdkfzhanjrwyltinimpetqqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726253.10581-813-96148257193495/AnsiballZ_file.py'
Dec 03 01:44:14 compute-0 sudo[370888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:14 compute-0 podman[370895]: 2025-12-03 01:44:14.524906551 +0000 UTC m=+0.083674904 container create 97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:44:14 compute-0 podman[370895]: 2025-12-03 01:44:14.48994232 +0000 UTC m=+0.048710723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:44:14 compute-0 systemd[1]: Started libpod-conmon-97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d.scope.
Dec 03 01:44:14 compute-0 python3.9[370894]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:44:14 compute-0 sudo[370888]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:14 compute-0 podman[370895]: 2025-12-03 01:44:14.686316478 +0000 UTC m=+0.245084831 container init 97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:44:14 compute-0 podman[370895]: 2025-12-03 01:44:14.712004727 +0000 UTC m=+0.270773080 container start 97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mestorf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:44:14 compute-0 podman[370895]: 2025-12-03 01:44:14.719501819 +0000 UTC m=+0.278270172 container attach 97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:44:14 compute-0 goofy_mestorf[370911]: 167 167
Dec 03 01:44:14 compute-0 systemd[1]: libpod-97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d.scope: Deactivated successfully.
Dec 03 01:44:14 compute-0 podman[370895]: 2025-12-03 01:44:14.732022294 +0000 UTC m=+0.290790677 container died 97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mestorf, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-17831e9d78e3e4ce42d67db221897d3e9f051e64778faf794051c22e5cc3d22a-merged.mount: Deactivated successfully.
Dec 03 01:44:14 compute-0 podman[370895]: 2025-12-03 01:44:14.803292125 +0000 UTC m=+0.362060448 container remove 97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mestorf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:44:14 compute-0 systemd[1]: libpod-conmon-97bc7a1c90532c367bf77015aee1b84ad87d0a5ccb4d4d7b1c297c8e182de76d.scope: Deactivated successfully.
Dec 03 01:44:15 compute-0 podman[370958]: 2025-12-03 01:44:15.060732135 +0000 UTC m=+0.090691252 container create f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:44:15 compute-0 podman[370958]: 2025-12-03 01:44:15.033426791 +0000 UTC m=+0.063385948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:44:15 compute-0 systemd[1]: Started libpod-conmon-f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c.scope.
Dec 03 01:44:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2410d9d8b10fc6fa27dfe6cdcf84033aa8ddf2578f97a5756d63397ba4e61fa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2410d9d8b10fc6fa27dfe6cdcf84033aa8ddf2578f97a5756d63397ba4e61fa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2410d9d8b10fc6fa27dfe6cdcf84033aa8ddf2578f97a5756d63397ba4e61fa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2410d9d8b10fc6fa27dfe6cdcf84033aa8ddf2578f97a5756d63397ba4e61fa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:44:15 compute-0 podman[370958]: 2025-12-03 01:44:15.216376059 +0000 UTC m=+0.246335206 container init f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:44:15 compute-0 ceph-mon[192821]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:15 compute-0 podman[370958]: 2025-12-03 01:44:15.24709283 +0000 UTC m=+0.277051967 container start f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:44:15 compute-0 podman[370958]: 2025-12-03 01:44:15.254307294 +0000 UTC m=+0.284266431 container attach f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:44:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:15 compute-0 sudo[371104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etdcismwcnzcrkzbnpiynwwwyhtgtdbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726255.0798025-822-80840672922710/AnsiballZ_podman_container_info.py'
Dec 03 01:44:15 compute-0 sudo[371104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:15 compute-0 python3.9[371106]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 03 01:44:16 compute-0 sudo[371104]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]: {
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "osd_id": 2,
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "type": "bluestore"
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:     },
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "osd_id": 1,
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "type": "bluestore"
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:     },
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "osd_id": 0,
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:         "type": "bluestore"
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]:     }
Dec 03 01:44:16 compute-0 dreamy_noyce[371005]: }
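dreamy_noyce prints the companion `raw list --format json` view (the command logged at 01:44:13), keyed by osd_uuid rather than osd_id. The two listings describe the same three bluestore OSDs, so each "osd_uuid" here should match a "ceph.osd_fsid" tag in the lvm list above. A small consistency-check sketch, assuming both JSON payloads are available as strings:

    import json

    def check_consistency(lvm_list_json: str, raw_list_json: str) -> bool:
        lvm = json.loads(lvm_list_json)   # keyed by osd_id
        raw = json.loads(raw_list_json)   # keyed by osd_uuid
        lvm_pairs = {
            (int(osd_id), lv["tags"]["ceph.osd_fsid"])
            for osd_id, lvs in lvm.items() for lv in lvs
        }
        raw_pairs = {(entry["osd_id"], uuid) for uuid, entry in raw.items()}
        return lvm_pairs == raw_pairs

    # With the two payloads in this log the check holds: OSDs 0-2 appear in
    # both views with matching fsids, so the LVM and raw inventories agree.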
Dec 03 01:44:16 compute-0 systemd[1]: libpod-f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c.scope: Deactivated successfully.
Dec 03 01:44:16 compute-0 podman[370958]: 2025-12-03 01:44:16.503447995 +0000 UTC m=+1.533407142 container died f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:44:16 compute-0 systemd[1]: libpod-f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c.scope: Consumed 1.251s CPU time.
Dec 03 01:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2410d9d8b10fc6fa27dfe6cdcf84033aa8ddf2578f97a5756d63397ba4e61fa1-merged.mount: Deactivated successfully.
Dec 03 01:44:16 compute-0 podman[370958]: 2025-12-03 01:44:16.610682906 +0000 UTC m=+1.640642013 container remove f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 03 01:44:16 compute-0 systemd[1]: libpod-conmon-f2eb09388ad0beabba41ff04c82b670b277e74b8ebefb3176a29d03a3900223c.scope: Deactivated successfully.
Dec 03 01:44:16 compute-0 sudo[370755]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:44:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:44:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
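Both mon_commands persist cephadm's refreshed inventory for this host in the monitor's config-key store. A hedged sketch of reading one key back ("ceph config-key get" is a standard mon command; assumes an admin keyring on the host, and that the stored value is JSON in cephadm's own schema):

    import json
    import subprocess

    # Key name copied verbatim from the audit line above.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        check=True, capture_output=True, text=True,
    ).stdout
    devices = json.loads(out)  # assumption: cephadm stores JSON here
    print(len(out), "bytes of cached device inventory")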
Dec 03 01:44:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 65078e8a-7fde-4276-b19e-57a84a856680 does not exist
Dec 03 01:44:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4789baa9-84ea-4105-9d56-a5810ddc14c2 does not exist
Dec 03 01:44:16 compute-0 sudo[371260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:44:16 compute-0 sudo[371260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:16 compute-0 sudo[371260]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:16 compute-0 sudo[371342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllcbpuxwdnlkemehvkehruqzmsvwkss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726256.3769026-830-3340459345648/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:16 compute-0 sudo[371342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:16 compute-0 sudo[371319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:44:16 compute-0 sudo[371319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:44:16 compute-0 sudo[371319]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:17 compute-0 python3.9[371355]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:17 compute-0 ceph-mon[192821]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:44:17 compute-0 systemd[1]: Started libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope.
Dec 03 01:44:17 compute-0 podman[371358]: 2025-12-03 01:44:17.362911737 +0000 UTC m=+0.173588744 container exec 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:44:17 compute-0 podman[371358]: 2025-12-03 01:44:17.398127415 +0000 UTC m=+0.208804412 container exec_died 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:44:17 compute-0 sudo[371342]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:17 compute-0 systemd[1]: libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec 03 01:44:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:18 compute-0 podman[371510]: 2025-12-03 01:44:18.362874252 +0000 UTC m=+0.119526420 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
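health_status=healthy events such as this one come from podman's timer-driven healthchecks executing each container's configured 'test' command (visible in the healthcheck entries of config_data). The same check can be run on demand; a small sketch using the standard podman CLI:

    import subprocess

    # podman healthcheck run returns 0 when the configured check passes.
    rc = subprocess.run(["podman", "healthcheck", "run", "podman_exporter"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")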
Dec 03 01:44:18 compute-0 sudo[371551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lujfabclvhaqazlnpoyuiynwwvbqieqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726257.777114-838-165860419836049/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:18 compute-0 sudo[371551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:18 compute-0 python3.9[371560]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:18 compute-0 systemd[1]: Started libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope.
Dec 03 01:44:18 compute-0 podman[371561]: 2025-12-03 01:44:18.78529176 +0000 UTC m=+0.170294719 container exec 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:44:18 compute-0 podman[371561]: 2025-12-03 01:44:18.820961302 +0000 UTC m=+0.205964261 container exec_died 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:44:18 compute-0 systemd[1]: libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec 03 01:44:18 compute-0 sudo[371551]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:19 compute-0 podman[371591]: 2025-12-03 01:44:19.05152012 +0000 UTC m=+0.106292285 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 03 01:44:19 compute-0 podman[371590]: 2025-12-03 01:44:19.077151377 +0000 UTC m=+0.134494185 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:44:19 compute-0 ceph-mon[192821]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:19 compute-0 sudo[371776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psewdyhobfiyiasajpcfruiylrqtjmio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726259.1950672-846-223200447309783/AnsiballZ_file.py'
Dec 03 01:44:19 compute-0 sudo[371776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:20 compute-0 python3.9[371778]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:20 compute-0 sudo[371776]: pam_unix(sudo:session): session closed for user root
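The preceding steps show a pattern this job repeats below for podman_exporter and openstack_network_exporter: exec id -u and id -g inside the container, then apply those IDs with mode 0700 to the container's healthcheck directory on the host. A minimal Python sketch of the same sequence, assuming root and a working podman; the helper name is hypothetical:

    import os
    import subprocess

    def sync_healthcheck_dir(container: str, path: str) -> None:
        # Mirror the logged podman_container_exec calls: discover the uid/gid
        # the container runs as (both 0 here, node_exporter runs as root).
        uid = int(subprocess.run(["podman", "exec", container, "id", "-u"],
                                 check=True, capture_output=True, text=True).stdout)
        gid = int(subprocess.run(["podman", "exec", container, "id", "-g"],
                                 check=True, capture_output=True, text=True).stdout)
        # Mirror the logged ansible.builtin.file task: recurse with mode 0700.
        os.makedirs(path, exist_ok=True)
        for dirpath, _dirs, files in os.walk(path):
            for entry in [dirpath] + [os.path.join(dirpath, f) for f in files]:
                os.chmod(entry, 0o700)
                os.chown(entry, uid, gid)

    sync_healthcheck_dir("node_exporter",
                         "/var/lib/openstack/healthchecks/node_exporter")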
Dec 03 01:44:20 compute-0 sudo[371928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnpfwcwxtnixfisjledsocfrzgfmdlfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726260.4070487-855-265693925439799/AnsiballZ_podman_container_info.py'
Dec 03 01:44:20 compute-0 sudo[371928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:21 compute-0 python3.9[371930]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 03 01:44:21 compute-0 sudo[371928]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:21 compute-0 ceph-mon[192821]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:22 compute-0 sudo[372091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krqoooutybnyhwsinrfdkcnkeusxmyeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726261.5667117-863-17228620138843/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:22 compute-0 sudo[372091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:22 compute-0 python3.9[372093]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:22 compute-0 systemd[1]: Started libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope.
Dec 03 01:44:22 compute-0 podman[372094]: 2025-12-03 01:44:22.477017755 +0000 UTC m=+0.155932023 container exec 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:44:22 compute-0 podman[372094]: 2025-12-03 01:44:22.513955682 +0000 UTC m=+0.192869900 container exec_died 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:44:22 compute-0 sudo[372091]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:22 compute-0 systemd[1]: libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec 03 01:44:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:23 compute-0 ceph-mon[192821]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:24 compute-0 sudo[372274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdfnpbmsmyahzprsrculbrgldkdqjfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726263.493303-871-126907120943952/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:24 compute-0 sudo[372274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:24 compute-0 python3.9[372276]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:24 compute-0 systemd[1]: Started libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope.
Dec 03 01:44:24 compute-0 podman[372277]: 2025-12-03 01:44:24.494201904 +0000 UTC m=+0.148545613 container exec 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:44:24 compute-0 podman[372277]: 2025-12-03 01:44:24.530771921 +0000 UTC m=+0.185115670 container exec_died 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:44:24 compute-0 systemd[1]: libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec 03 01:44:24 compute-0 sudo[372274]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:25 compute-0 ceph-mon[192821]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:25 compute-0 sudo[372454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgfhwcppuatmvmzeavopzyzyczerxhrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726264.8908408-879-28472392230509/AnsiballZ_file.py'
Dec 03 01:44:25 compute-0 sudo[372454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:26 compute-0 python3.9[372456]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:26 compute-0 sudo[372454]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:27 compute-0 ceph-mon[192821]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:27 compute-0 sudo[372606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnkwmdupjaqbqypopqpvesxqrageydfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726267.354474-888-224320846816949/AnsiballZ_podman_container_info.py'
Dec 03 01:44:27 compute-0 sudo[372606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:28 compute-0 python3.9[372608]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 03 01:44:28 compute-0 sudo[372606]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:44:28
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'volumes', 'vms', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log']
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
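"prepared 0/10 changes" means the upmap balancer examined the candidate pools and created no pg-upmap-items within its per-round change budget (10 here): with all 321 PGs active+clean on this small cluster there is nothing to rebalance. A hedged way to confirm from the CLI (standard commands, assumes admin access):

    import subprocess

    # ceph balancer status reports the active mode and plans; pg_upmap_items,
    # if any were applied, show up in the JSON osd dump.
    for cmd in (["ceph", "balancer", "status"],
                ["ceph", "osd", "dump", "--format", "json"]):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout[:200])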
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:44:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
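The rbd_support handlers above (MirrorSnapshotScheduleHandler, TrashPurgeScheduleHandler) periodically reload per-pool schedules; each load_schedules line names a pool being scanned (vms, volumes, backups, images). A sketch of the equivalent CLI views (standard rbd subcommands, assumes admin access; empty output simply means no schedules are configured):

    import subprocess

    for sub in ("mirror snapshot", "trash purge"):
        cmd = ["rbd"] + sub.split() + ["schedule", "ls", "--recursive"]
        out = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
        print(" ".join(cmd), "->", out or "(none)")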
Dec 03 01:44:29 compute-0 sudo[372783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymcfyrjynxtisdjfgnsdcxkvlycppuye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726268.5326443-896-173620848268001/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:29 compute-0 sudo[372783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:29 compute-0 podman[372743]: 2025-12-03 01:44:29.232734703 +0000 UTC m=+0.208294408 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 03 01:44:29 compute-0 python3.9[372789]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:29 compute-0 ceph-mon[192821]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:29 compute-0 systemd[1]: Started libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope.
Dec 03 01:44:29 compute-0 podman[372799]: 2025-12-03 01:44:29.551710888 +0000 UTC m=+0.142663436 container exec 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 03 01:44:29 compute-0 podman[372799]: 2025-12-03 01:44:29.56589084 +0000 UTC m=+0.156843358 container exec_died 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal)
Dec 03 01:44:29 compute-0 sudo[372783]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:29 compute-0 systemd[1]: libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec 03 01:44:29 compute-0 podman[158098]: time="2025-12-03T01:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:44:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:44:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8086 "" "Go-http-client/1.1"
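pid 158098 is the podman system service answering libpod REST calls on the socket that podman_exporter is pointed at (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data); the two GETs are its container-list and stats scrapes. A minimal stdlib sketch of the same query, run as root, with the API version path copied from the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Just enough HTTP-over-AF_UNIX for the libpod API.
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read(120))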
Dec 03 01:44:30 compute-0 sudo[373008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdcejnhjehirmhyfvxbqgswgzxbhyknm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726269.9440126-904-278092332747034/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:30 compute-0 sudo[373008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:30 compute-0 podman[372953]: 2025-12-03 01:44:30.591936324 +0000 UTC m=+0.141517774 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 01:44:30 compute-0 podman[372952]: 2025-12-03 01:44:30.601811094 +0000 UTC m=+0.153445672 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec 03 01:44:30 compute-0 python3.9[373018]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:30 compute-0 systemd[1]: Started libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope.
Dec 03 01:44:30 compute-0 podman[373020]: 2025-12-03 01:44:30.98676622 +0000 UTC m=+0.151076165 container exec 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Dec 03 01:44:31 compute-0 podman[373020]: 2025-12-03 01:44:31.022092622 +0000 UTC m=+0.186402497 container exec_died 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Dec 03 01:44:31 compute-0 sudo[373008]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:31 compute-0 systemd[1]: libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec 03 01:44:31 compute-0 openstack_network_exporter[368278]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:44:31 compute-0 openstack_network_exporter[368278]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:44:31 compute-0 openstack_network_exporter[368278]: ERROR   01:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:44:31 compute-0 openstack_network_exporter[368278]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:44:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:44:31 compute-0 ceph-mon[192821]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:31 compute-0 openstack_network_exporter[368278]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:44:31 compute-0 openstack_network_exporter[368278]: 
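These exporter errors are expected on a compute node: ovn-northd runs on the control plane, not here, and with the kernel OVS datapath there is no userspace (netdev) datapath for the dpif-netdev/pmd-* queries to report on. The exporter finds daemons through their control sockets in the run directories mounted into it; a hedged sketch of that discovery, assuming the usual <daemon>.<pid>.ctl socket naming convention:

    import glob

    # Run directories match the exporter's volume mounts logged above.
    for rundir, daemon in (("/run/ovn", "ovn-northd"),
                           ("/run/openvswitch", "ovsdb-server")):
        hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", hits or "no control socket files found")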
Dec 03 01:44:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:31 compute-0 podman[373167]: 2025-12-03 01:44:31.921522956 +0000 UTC m=+0.167298585 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 03 01:44:31 compute-0 sudo[373218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nysbjytpysuftpwlcvfteowebdsjuron ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726271.4020593-912-167580740038772/AnsiballZ_file.py'
Dec 03 01:44:31 compute-0 sudo[373218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:32 compute-0 python3.9[373220]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:32 compute-0 sudo[373218]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:33 compute-0 sudo[373370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihhgluezrvlspabchvmqsnkenouwyga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726272.5346458-921-185076376347827/AnsiballZ_podman_container_info.py'
Dec 03 01:44:33 compute-0 sudo[373370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:33 compute-0 python3.9[373372]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 03 01:44:33 compute-0 ceph-mon[192821]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:33 compute-0 sudo[373370]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:34 compute-0 sudo[373545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsnhgkiiiqtaeufspunlqgrazltjctpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726273.8858666-929-207269125069782/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:34 compute-0 sudo[373545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:34 compute-0 podman[373509]: 2025-12-03 01:44:34.511660993 +0000 UTC m=+0.144148709 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:44:34 compute-0 python3.9[373555]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:34 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec 03 01:44:34 compute-0 podman[373562]: 2025-12-03 01:44:34.916947466 +0000 UTC m=+0.153436773 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 03 01:44:34 compute-0 podman[373562]: 2025-12-03 01:44:34.957716022 +0000 UTC m=+0.194205259 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 01:44:35 compute-0 sudo[373545]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:35 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:44:35 compute-0 ceph-mon[192821]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:35 compute-0 sudo[373739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilvihelgxhfasevinrpgoilezofpcwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726275.338255-937-197455428112955/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:35 compute-0 sudo[373739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:36 compute-0 python3.9[373741]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:36 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec 03 01:44:36 compute-0 podman[373742]: 2025-12-03 01:44:36.332600579 +0000 UTC m=+0.161936053 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 01:44:36 compute-0 podman[373742]: 2025-12-03 01:44:36.367767046 +0000 UTC m=+0.197102460 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 03 01:44:36 compute-0 sudo[373739]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:36 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:44:36 compute-0 podman[373808]: 2025-12-03 01:44:36.894258005 +0000 UTC m=+0.144733865 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.openshift.expose-services=)
Dec 03 01:44:37 compute-0 sudo[373940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wojsnoqvmgawglfvkqzczlxhbbefgeny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726276.7657204-945-18867168655439/AnsiballZ_file.py'
Dec 03 01:44:37 compute-0 sudo[373940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:37 compute-0 ceph-mon[192821]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:37 compute-0 python3.9[373942]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:37 compute-0 sudo[373940]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:44:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:44:38 compute-0 sudo[374094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzdbqvczqufpltlxniwcwdyxfhtsubyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726278.0316732-954-1723371725712/AnsiballZ_podman_container_info.py'
Dec 03 01:44:38 compute-0 sudo[374094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:38 compute-0 python3.9[374096]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 03 01:44:38 compute-0 sshd-session[374019]: Invalid user zhangsan from 80.253.31.232 port 50958
Dec 03 01:44:38 compute-0 sudo[374094]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:39 compute-0 sshd-session[374019]: Received disconnect from 80.253.31.232 port 50958:11: Bye Bye [preauth]
Dec 03 01:44:39 compute-0 sshd-session[374019]: Disconnected from invalid user zhangsan 80.253.31.232 port 50958 [preauth]
Dec 03 01:44:39 compute-0 ceph-mon[192821]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:39 compute-0 sudo[374259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvpoifsbjldkyltpcdaartwiuaumkpnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726279.3269565-962-246573685458581/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:39 compute-0 sudo[374259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:40 compute-0 python3.9[374261]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:40 compute-0 systemd[1]: Started libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope.
Dec 03 01:44:40 compute-0 podman[374262]: 2025-12-03 01:44:40.412278843 +0000 UTC m=+0.167823189 container exec 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public)
Dec 03 01:44:40 compute-0 podman[374262]: 2025-12-03 01:44:40.448652575 +0000 UTC m=+0.204196881 container exec_died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc.)
Dec 03 01:44:40 compute-0 systemd[1]: libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec 03 01:44:40 compute-0 sudo[374259]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:41 compute-0 sudo[374441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbajcndtvtcaolbequwdodjkvswkddbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726280.8512135-970-223027594351137/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:41 compute-0 sudo[374441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:41 compute-0 ceph-mon[192821]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:41 compute-0 python3.9[374443]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:41 compute-0 systemd[1]: Started libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope.
Dec 03 01:44:41 compute-0 podman[374444]: 2025-12-03 01:44:41.810029128 +0000 UTC m=+0.143052198 container exec 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible)
Dec 03 01:44:41 compute-0 podman[374444]: 2025-12-03 01:44:41.845985067 +0000 UTC m=+0.179008137 container exec_died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container)
Dec 03 01:44:41 compute-0 systemd[1]: libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec 03 01:44:41 compute-0 sudo[374441]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:42 compute-0 sudo[374622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nutxtjitwivtcacpfipqsmzxtfosffwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726282.2316134-978-213471277073601/AnsiballZ_file.py'
Dec 03 01:44:42 compute-0 sudo[374622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:42 compute-0 python3.9[374624]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:43 compute-0 sudo[374622]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:43 compute-0 ceph-mon[192821]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:43 compute-0 sudo[374774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpgplwlkhjmwcbzbevhawtkmtpsvxlqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726283.338664-987-150167203731449/AnsiballZ_podman_container_info.py'
Dec 03 01:44:43 compute-0 sudo[374774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:44 compute-0 python3.9[374776]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 03 01:44:44 compute-0 sudo[374774]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:45 compute-0 sudo[374937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxhzgsuvchlcsavttkaxfcddbtjdcwmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726284.6973627-995-167645321072979/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:45 compute-0 sudo[374937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:45 compute-0 python3.9[374939]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:45 compute-0 ceph-mon[192821]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:45 compute-0 nova_compute[351485]: 2025-12-03 01:44:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:45 compute-0 nova_compute[351485]: 2025-12-03 01:44:45.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:44:45 compute-0 nova_compute[351485]: 2025-12-03 01:44:45.620 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:44:45 compute-0 nova_compute[351485]: 2025-12-03 01:44:45.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:44:45 compute-0 nova_compute[351485]: 2025-12-03 01:44:45.621 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:44:45 compute-0 nova_compute[351485]: 2025-12-03 01:44:45.622 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:44:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:45 compute-0 systemd[1]: Started libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope.
Dec 03 01:44:45 compute-0 podman[374940]: 2025-12-03 01:44:45.696267407 +0000 UTC m=+0.137371116 container exec 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:44:45 compute-0 podman[374940]: 2025-12-03 01:44:45.716605344 +0000 UTC m=+0.157709093 container exec_died 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 03 01:44:45 compute-0 sudo[374937]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:45 compute-0 systemd[1]: libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope: Deactivated successfully.
Dec 03 01:44:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:44:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529351921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.096 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:44:46 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/529351921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.650 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.652 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4589MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.652 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.653 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.769 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.769 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:44:46 compute-0 nova_compute[351485]: 2025-12-03 01:44:46.786 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:44:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:44:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590681817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:44:47 compute-0 nova_compute[351485]: 2025-12-03 01:44:47.300 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:44:47 compute-0 nova_compute[351485]: 2025-12-03 01:44:47.312 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:44:47 compute-0 nova_compute[351485]: 2025-12-03 01:44:47.333 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:44:47 compute-0 nova_compute[351485]: 2025-12-03 01:44:47.335 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:44:47 compute-0 nova_compute[351485]: 2025-12-03 01:44:47.335 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:44:47 compute-0 sudo[375161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkzsyexiuaylofqtesaoripirtwpqdnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726286.0668664-1003-38880116247599/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:47 compute-0 sudo[375161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:47 compute-0 ceph-mon[192821]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2590681817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:44:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:47 compute-0 python3.9[375163]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:47 compute-0 systemd[1]: Started libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope.
Dec 03 01:44:47 compute-0 podman[375164]: 2025-12-03 01:44:47.952166956 +0000 UTC m=+0.162206589 container exec 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:44:47 compute-0 podman[375164]: 2025-12-03 01:44:47.98790395 +0000 UTC m=+0.197943583 container exec_died 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:44:48 compute-0 sudo[375161]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:48 compute-0 systemd[1]: libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope: Deactivated successfully.
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.336 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.336 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.337 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.360 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.361 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.362 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.362 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:48 compute-0 nova_compute[351485]: 2025-12-03 01:44:48.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:44:48 compute-0 podman[375294]: 2025-12-03 01:44:48.874216903 +0000 UTC m=+0.123234136 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:44:48 compute-0 sudo[375365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxcxjtrxtlqharnaqiqzbodvnrdwlfvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726288.3989255-1011-244987003745763/AnsiballZ_file.py'
Dec 03 01:44:48 compute-0 sudo[375365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:49 compute-0 python3.9[375367]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:49 compute-0 sudo[375365]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:49 compute-0 ceph-mon[192821]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:49 compute-0 nova_compute[351485]: 2025-12-03 01:44:49.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:44:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:49 compute-0 podman[375445]: 2025-12-03 01:44:49.890839271 +0000 UTC m=+0.125191951 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 03 01:44:49 compute-0 podman[375449]: 2025-12-03 01:44:49.892167949 +0000 UTC m=+0.134556297 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec 03 01:44:50 compute-0 sudo[375553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozvsowijnonygcybokdngerqxjaagryc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726289.5465734-1020-236490106866109/AnsiballZ_podman_container_info.py'
Dec 03 01:44:50 compute-0 sudo[375553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:50 compute-0 python3.9[375555]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 03 01:44:50 compute-0 sudo[375553]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:51 compute-0 sudo[375717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yshouubatjibvjdpmmyurakqsxfrrojp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726290.751192-1028-212090941485848/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:51 compute-0 sudo[375717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:51 compute-0 python3.9[375719]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:51 compute-0 ceph-mon[192821]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:51 compute-0 systemd[1]: Started libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope.
Dec 03 01:44:51 compute-0 podman[375722]: 2025-12-03 01:44:51.739262025 +0000 UTC m=+0.145174828 container exec df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:44:51 compute-0 podman[375722]: 2025-12-03 01:44:51.775646847 +0000 UTC m=+0.181559680 container exec_died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 01:44:51 compute-0 systemd[1]: libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec 03 01:44:51 compute-0 sudo[375717]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:52 compute-0 ceph-mon[192821]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:52 compute-0 sudo[375902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uelxoesleijvzvbpxkkwclvfgfksohbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726292.181341-1036-152173984182358/AnsiballZ_podman_container_exec.py'
Dec 03 01:44:52 compute-0 sudo[375902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:53 compute-0 python3.9[375904]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:44:53 compute-0 sshd-session[375720]: Received disconnect from 103.146.202.174 port 39218:11: Bye Bye [preauth]
Dec 03 01:44:53 compute-0 sshd-session[375720]: Disconnected from authenticating user root 103.146.202.174 port 39218 [preauth]
Dec 03 01:44:53 compute-0 systemd[1]: Started libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope.
Dec 03 01:44:53 compute-0 podman[375905]: 2025-12-03 01:44:53.21915703 +0000 UTC m=+0.157503977 container exec df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec 03 01:44:53 compute-0 podman[375905]: 2025-12-03 01:44:53.255002527 +0000 UTC m=+0.193349474 container exec_died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:44:53 compute-0 systemd[1]: libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec 03 01:44:53 compute-0 sudo[375902]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:54 compute-0 sudo[376085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtackbuoinrddadebzqclfkxhclizia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726293.6149135-1044-239538285111301/AnsiballZ_file.py'
Dec 03 01:44:54 compute-0 sudo[376085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:54 compute-0 python3.9[376087]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:54 compute-0 sudo[376085]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:54 compute-0 ceph-mon[192821]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:55 compute-0 sudo[376237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efbzthbbrhnhvaybipuwsadanonevkqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726294.7882032-1053-47885009344399/AnsiballZ_file.py'
Dec 03 01:44:55 compute-0 sudo[376237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:55 compute-0 python3.9[376239]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:55 compute-0 sudo[376237]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:56 compute-0 sudo[376389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrtivqxfesslrnxcxoefuhvpmlsfikde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726295.8887105-1061-269290833564001/AnsiballZ_stat.py'
Dec 03 01:44:56 compute-0 sudo[376389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:56 compute-0 python3.9[376391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:44:56 compute-0 sudo[376389]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:56 compute-0 ceph-mon[192821]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:57 compute-0 sudo[376467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usgqlrwrbftjmyvfncgonwxpeskricab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726295.8887105-1061-269290833564001/AnsiballZ_file.py'
Dec 03 01:44:57 compute-0 sudo[376467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:57 compute-0 python3.9[376469]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/telemetry.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/telemetry.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:57 compute-0 sudo[376467]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:44:58 compute-0 ceph-mon[192821]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:59 compute-0 sudo[376619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcasbftuehojoitwlnnltbmshjmnqezw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726298.5907474-1074-15354034355122/AnsiballZ_file.py'
Dec 03 01:44:59 compute-0 sudo[376619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:44:59 compute-0 podman[376621]: 2025-12-03 01:44:59.525142236 +0000 UTC m=+0.197135161 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:44:59 compute-0 python3.9[376622]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:44:59 compute-0 sudo[376619]: pam_unix(sudo:session): session closed for user root
Dec 03 01:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:44:59.604 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:44:59.605 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:44:59.605 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:44:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:44:59 compute-0 podman[158098]: time="2025-12-03T01:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:44:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:44:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8104 "" "Go-http-client/1.1"
Dec 03 01:45:00 compute-0 ceph-mon[192821]: pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:00 compute-0 sudo[376827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-polmfxiqyyqifjofntehsxfznnxhpsif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726300.2847183-1082-72558084835459/AnsiballZ_stat.py'
Dec 03 01:45:00 compute-0 sudo[376827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:00 compute-0 podman[376771]: 2025-12-03 01:45:00.86886839 +0000 UTC m=+0.107835579 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:45:00 compute-0 podman[376770]: 2025-12-03 01:45:00.888673421 +0000 UTC m=+0.137247763 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:45:01 compute-0 python3.9[376835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:01 compute-0 sudo[376827]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:01 compute-0 openstack_network_exporter[368278]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:45:01 compute-0 openstack_network_exporter[368278]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:45:01 compute-0 openstack_network_exporter[368278]: ERROR   01:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:45:01 compute-0 openstack_network_exporter[368278]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:45:01 compute-0 openstack_network_exporter[368278]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:45:01 compute-0 sudo[376911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezcmpkmmqmxoilqoppbrmophfdmneuhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726300.2847183-1082-72558084835459/AnsiballZ_file.py'
Dec 03 01:45:01 compute-0 sudo[376911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:01 compute-0 python3.9[376913]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:01 compute-0 sudo[376911]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:02 compute-0 sudo[377077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afpymziqheizuqiqwdndmupwsmgdlrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726302.0156035-1094-259437206966260/AnsiballZ_stat.py'
Dec 03 01:45:02 compute-0 sudo[377077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:02 compute-0 podman[377037]: 2025-12-03 01:45:02.636224035 +0000 UTC m=+0.127387493 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 03 01:45:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:02 compute-0 ceph-mon[192821]: pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:02 compute-0 python3.9[377083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:02 compute-0 sudo[377077]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:03 compute-0 sudo[377159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqbsyaaktobjkazcjhxthpvruwyskgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726302.0156035-1094-259437206966260/AnsiballZ_file.py'
Dec 03 01:45:03 compute-0 sudo[377159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:03 compute-0 python3.9[377161]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yvot_7au recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:03 compute-0 sudo[377159]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:04 compute-0 sudo[377311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdmjgwallaboxvwyvhikvroivxehdwym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726303.793315-1106-158084232772581/AnsiballZ_stat.py'
Dec 03 01:45:04 compute-0 sudo[377311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:04 compute-0 python3.9[377313]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:04 compute-0 sudo[377311]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:04 compute-0 podman[377316]: 2025-12-03 01:45:04.82495541 +0000 UTC m=+0.118932433 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:45:04 compute-0 ceph-mon[192821]: pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:05 compute-0 sudo[377415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhlfdayyytsivigiqmgdilggcqsofolz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726303.793315-1106-158084232772581/AnsiballZ_file.py'
Dec 03 01:45:05 compute-0 sudo[377415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:05 compute-0 python3.9[377417]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:05 compute-0 sudo[377415]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:06 compute-0 sudo[377567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvpbegczkgzxnvaccohrputyfwdqvmxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726305.6400592-1119-141075880198562/AnsiballZ_command.py'
Dec 03 01:45:06 compute-0 sudo[377567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:06 compute-0 python3.9[377569]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:45:06 compute-0 sudo[377567]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:06 compute-0 ceph-mon[192821]: pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:07 compute-0 sudo[377735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkihjxokywmvnjqdbebtsihhcfotbxkw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726306.818952-1127-240202843056557/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:45:07 compute-0 sshd-session[377599]: Invalid user testuser from 173.249.50.59 port 41956
Dec 03 01:45:07 compute-0 sudo[377735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:07 compute-0 podman[377696]: 2025-12-03 01:45:07.778351747 +0000 UTC m=+0.140115754 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler)
Dec 03 01:45:07 compute-0 sshd-session[377599]: Received disconnect from 173.249.50.59 port 41956:11: Bye Bye [preauth]
Dec 03 01:45:07 compute-0 sshd-session[377599]: Disconnected from invalid user testuser 173.249.50.59 port 41956 [preauth]
Dec 03 01:45:08 compute-0 python3[377742]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:45:08 compute-0 sudo[377735]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:08 compute-0 ceph-mon[192821]: pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:09 compute-0 sudo[377893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdudwfwbcvgnbqvcccfxfsoculauumwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726308.408955-1135-72437487512415/AnsiballZ_stat.py'
Dec 03 01:45:09 compute-0 sudo[377893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:09 compute-0 python3.9[377895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:09 compute-0 sudo[377893]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:09 compute-0 sudo[377973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crjovzilqyvkqjjtonzudueuprkzalzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726308.408955-1135-72437487512415/AnsiballZ_file.py'
Dec 03 01:45:09 compute-0 sudo[377973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:09 compute-0 sshd-session[377951]: Invalid user myuser from 34.66.72.251 port 54180
Dec 03 01:45:09 compute-0 python3.9[377975]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:09 compute-0 sudo[377973]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:09 compute-0 sshd-session[377951]: Received disconnect from 34.66.72.251 port 54180:11: Bye Bye [preauth]
Dec 03 01:45:09 compute-0 sshd-session[377951]: Disconnected from invalid user myuser 34.66.72.251 port 54180 [preauth]
Dec 03 01:45:10 compute-0 ceph-mon[192821]: pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:10 compute-0 sudo[378125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbzbsdepwpfujzybbxkyvwczmkxpttra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726310.245661-1147-154670123994941/AnsiballZ_stat.py'
Dec 03 01:45:10 compute-0 sudo[378125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:11 compute-0 python3.9[378127]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:11 compute-0 sudo[378125]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:11 compute-0 sudo[378203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyssknwvurkzwqzvufxohvawkgmjiwts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726310.245661-1147-154670123994941/AnsiballZ_file.py'
Dec 03 01:45:11 compute-0 sudo[378203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:11 compute-0 python3.9[378205]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:11 compute-0 sudo[378203]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:12 compute-0 sudo[378355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdzdnparjbrbeameqeltrkdyuoxnikja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726312.1456206-1159-171191084516009/AnsiballZ_stat.py'
Dec 03 01:45:12 compute-0 sudo[378355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:12 compute-0 ceph-mon[192821]: pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:13 compute-0 python3.9[378357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:13 compute-0 sudo[378355]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:13 compute-0 sudo[378433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtebplyfyjhwgaodldhdxjmqxkjcasor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726312.1456206-1159-171191084516009/AnsiballZ_file.py'
Dec 03 01:45:13 compute-0 sudo[378433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:13 compute-0 python3.9[378435]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:13 compute-0 sudo[378433]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:14 compute-0 sudo[378585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nujrcdsyyjcswqvjpthbayhnstqduzgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726314.085547-1171-114665580315999/AnsiballZ_stat.py'
Dec 03 01:45:14 compute-0 sudo[378585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:14 compute-0 ceph-mon[192821]: pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:15 compute-0 python3.9[378587]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:15 compute-0 sudo[378585]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:15 compute-0 sudo[378663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szzufpojvxvfjkfaaqvrpybbvfphqosu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726314.085547-1171-114665580315999/AnsiballZ_file.py'
Dec 03 01:45:15 compute-0 sudo[378663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:15 compute-0 python3.9[378665]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:15 compute-0 sudo[378663]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:16 compute-0 sudo[378815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcpgjthkirgtsamybguwpagsozlsxjiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726316.2230463-1183-260700277414015/AnsiballZ_stat.py'
Dec 03 01:45:16 compute-0 sudo[378815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:16 compute-0 ceph-mon[192821]: pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:17 compute-0 python3.9[378817]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:17 compute-0 sudo[378818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:17 compute-0 sudo[378818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:17 compute-0 sudo[378818]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:17 compute-0 sudo[378815]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:17 compute-0 sudo[378845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:45:17 compute-0 sudo[378845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:17 compute-0 sudo[378845]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:17 compute-0 sudo[378893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:17 compute-0 sudo[378893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:17 compute-0 sudo[378893]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:17 compute-0 sudo[378939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:45:17 compute-0 sudo[378939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:17 compute-0 sudo[378993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygjxcglamlkobmfnrhsyfbvpvrxkukzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726316.2230463-1183-260700277414015/AnsiballZ_file.py'
Dec 03 01:45:17 compute-0 sudo[378993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.786258) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726317786342, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 846, "num_deletes": 255, "total_data_size": 1142092, "memory_usage": 1166320, "flush_reason": "Manual Compaction"}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726317798351, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1121217, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18615, "largest_seqno": 19460, "table_properties": {"data_size": 1116979, "index_size": 1954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8853, "raw_average_key_size": 18, "raw_value_size": 1108461, "raw_average_value_size": 2294, "num_data_blocks": 89, "num_entries": 483, "num_filter_entries": 483, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726242, "oldest_key_time": 1764726242, "file_creation_time": 1764726317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 12145 microseconds, and 3452 cpu microseconds.
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.798398) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1121217 bytes OK
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.798425) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.800237) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.800248) EVENT_LOG_v1 {"time_micros": 1764726317800245, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.800265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1137907, prev total WAL file size 1137907, number of live WAL files 2.
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.800958) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1094KB)], [44(5964KB)]
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726317801017, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7228938, "oldest_snapshot_seqno": -1}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4093 keys, 7097995 bytes, temperature: kUnknown
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726317845359, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7097995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7069957, "index_size": 16728, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 101380, "raw_average_key_size": 24, "raw_value_size": 6995073, "raw_average_value_size": 1709, "num_data_blocks": 704, "num_entries": 4093, "num_filter_entries": 4093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.845843) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7097995 bytes
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.848713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.1 rd, 159.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 5.8 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(12.8) write-amplify(6.3) OK, records in: 4615, records dropped: 522 output_compression: NoCompression
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.848744) EVENT_LOG_v1 {"time_micros": 1764726317848730, "job": 22, "event": "compaction_finished", "compaction_time_micros": 44598, "compaction_time_cpu_micros": 21938, "output_level": 6, "num_output_files": 1, "total_output_size": 7097995, "num_input_records": 4615, "num_output_records": 4093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726317849295, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726317851856, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.800846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.852235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.852243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.852247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.852250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:45:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:45:17.852253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:45:17 compute-0 python3.9[378995]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:17 compute-0 sudo[378993]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:18 compute-0 sudo[378939]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:45:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 99541a0c-afd8-4dbb-8822-09312a11d5b1 does not exist
Dec 03 01:45:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5ed236d9-acd3-4d48-8c57-ae2bb76686e8 does not exist
Dec 03 01:45:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 44205ee6-8fe8-4b91-9d0f-00f7da196991 does not exist
Dec 03 01:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:45:18 compute-0 sudo[379053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:18 compute-0 sudo[379053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:18 compute-0 sudo[379053]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:18 compute-0 sudo[379110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:45:18 compute-0 sudo[379110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:18 compute-0 sudo[379110]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:18 compute-0 sudo[379164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:18 compute-0 sudo[379164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:18 compute-0 sudo[379164]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:18 compute-0 ceph-mon[192821]: pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:45:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:45:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:45:18 compute-0 sudo[379208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:45:18 compute-0 sudo[379208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:19 compute-0 podman[379248]: 2025-12-03 01:45:19.116030483 +0000 UTC m=+0.127787475 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.500 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.501 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.501 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.502 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.505 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.516 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.516 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56178f0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.526 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:45:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
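
[Annotation] The cycle above is the polling agent's two-phase pattern: each pollster first runs its [local_instances] discovery, pollsters whose discovery returns nothing are skipped, and every pollster that did run gets a "Finished processing" line. A minimal sketch that tallies those outcomes from a saved copy of this journal (the filename journal.txt is hypothetical):

    import re
    from collections import Counter

    # Patterns match the ceilometer.polling.manager lines shown above.
    SKIP = re.compile(r"Skip pollster (\S+),")
    DONE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    counts = Counter()
    with open("journal.txt") as fh:   # hypothetical capture of this journal
        for line in fh:
            if m := SKIP.search(line):
                counts["skipped " + m.group(1)] += 1
            elif m := DONE.search(line):
                counts["finished " + m.group(1)] += 1

    for key, n in sorted(counts.items()):
        print(f"{n:4d}  {key}")
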
Dec 03 01:45:19 compute-0 podman[379312]: 2025-12-03 01:45:19.565593351 +0000 UTC m=+0.083450187 container create 67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:45:19 compute-0 podman[379312]: 2025-12-03 01:45:19.52781113 +0000 UTC m=+0.045667956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:45:19 compute-0 systemd[1]: Started libpod-conmon-67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194.scope.
Dec 03 01:45:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:45:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:19 compute-0 podman[379312]: 2025-12-03 01:45:19.708492323 +0000 UTC m=+0.226349169 container init 67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:45:19 compute-0 podman[379312]: 2025-12-03 01:45:19.721165723 +0000 UTC m=+0.239022519 container start 67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hodgkin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:45:19 compute-0 sleepy_hodgkin[379330]: 167 167
Dec 03 01:45:19 compute-0 podman[379312]: 2025-12-03 01:45:19.727213934 +0000 UTC m=+0.245070770 container attach 67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hodgkin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:45:19 compute-0 systemd[1]: libpod-67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194.scope: Deactivated successfully.
Dec 03 01:45:19 compute-0 podman[379312]: 2025-12-03 01:45:19.734101849 +0000 UTC m=+0.251958645 container died 67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a8c818a00bfa3eb0f19ecaa042535f84165ffa710badc488c58ba378b0717bb-merged.mount: Deactivated successfully.
Dec 03 01:45:19 compute-0 podman[379312]: 2025-12-03 01:45:19.79478839 +0000 UTC m=+0.312645196 container remove 67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:45:19 compute-0 systemd[1]: libpod-conmon-67481ab1cd5cdc459295f68431744ea0210612a125aabe071982ebd0abab6194.scope: Deactivated successfully.
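
[Annotation] The sleepy_hodgkin sequence above (conmon scope start, container create/init/start/attach, then died/remove within about 0.2 s) is cephadm probing the Ceph image with a throwaway container whose only output was "167 167", the ceph uid and gid baked into the image. A hedged reproduction via Python's subprocess; the stat of /var/lib/ceph is an assumption, since the journal records only the output, not the probe command:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm yields the same create/start/died/remove lifecycle seen above.
    # stat -c '%u %g' /var/lib/ceph is assumed; the log shows only "167 167".
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected: 167 167
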
Dec 03 01:45:20 compute-0 podman[379369]: 2025-12-03 01:45:20.110696358 +0000 UTC m=+0.086226156 container create d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:45:20 compute-0 podman[379349]: 2025-12-03 01:45:20.114182427 +0000 UTC m=+0.127951889 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 03 01:45:20 compute-0 sudo[379423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzhudctsldscbmudmspzsjhysyjsevva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726318.4754767-1196-248349538129995/AnsiballZ_command.py'
Dec 03 01:45:20 compute-0 sudo[379423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:20 compute-0 podman[379347]: 2025-12-03 01:45:20.128333087 +0000 UTC m=+0.145609819 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
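
[Annotation] The two health_status=healthy events above are emitted by podman's periodic healthcheck timers. The same check the timer runs (the 'test' command in each container's config_data) can be triggered by hand; a minimal sketch:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test and
    # exits 0 when the verdict is healthy.
    for name in ("ceilometer_agent_compute", "ovn_metadata_agent"):
        res = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if res.returncode == 0 else "unhealthy")
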
Dec 03 01:45:20 compute-0 systemd[1]: Started libpod-conmon-d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108.scope.
Dec 03 01:45:20 compute-0 podman[379369]: 2025-12-03 01:45:20.080227284 +0000 UTC m=+0.055757092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:45:20 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd264b1eb5309835904fe1d5bde5928a0d0648b725a5424101312118a14b94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd264b1eb5309835904fe1d5bde5928a0d0648b725a5424101312118a14b94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd264b1eb5309835904fe1d5bde5928a0d0648b725a5424101312118a14b94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd264b1eb5309835904fe1d5bde5928a0d0648b725a5424101312118a14b94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd264b1eb5309835904fe1d5bde5928a0d0648b725a5424101312118a14b94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:20 compute-0 podman[379369]: 2025-12-03 01:45:20.261980007 +0000 UTC m=+0.237509825 container init d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:45:20 compute-0 podman[379369]: 2025-12-03 01:45:20.278204687 +0000 UTC m=+0.253734475 container start d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:45:20 compute-0 podman[379369]: 2025-12-03 01:45:20.283094296 +0000 UTC m=+0.258624154 container attach d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:45:20 compute-0 python3.9[379427]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:45:20 compute-0 sudo[379423]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:20 compute-0 ceph-mon[192821]: pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:21 compute-0 sudo[379606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guudzgphqptnljgunhlkvgcafhittwnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726320.65788-1204-171994298267311/AnsiballZ_blockinfile.py'
Dec 03 01:45:21 compute-0 sudo[379606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:21 compute-0 eloquent_hoover[379430]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:45:21 compute-0 eloquent_hoover[379430]: --> relative data size: 1.0
Dec 03 01:45:21 compute-0 eloquent_hoover[379430]: --> All data devices are unavailable
Dec 03 01:45:21 compute-0 systemd[1]: libpod-d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108.scope: Deactivated successfully.
Dec 03 01:45:21 compute-0 systemd[1]: libpod-d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108.scope: Consumed 1.239s CPU time.
Dec 03 01:45:21 compute-0 podman[379369]: 2025-12-03 01:45:21.588192334 +0000 UTC m=+1.563722162 container died d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-00cd264b1eb5309835904fe1d5bde5928a0d0648b725a5424101312118a14b94-merged.mount: Deactivated successfully.
Dec 03 01:45:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:21 compute-0 podman[379369]: 2025-12-03 01:45:21.709839954 +0000 UTC m=+1.685369742 container remove d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
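
[Annotation] eloquent_hoover was a ceph-volume pass that saw "0 physical, 3 LVM" data devices and reported them all unavailable: the three LVs are already claimed by OSDs, as the lvm list dump further down confirms. One way to ask the same question directly is ceph-volume's inventory report; a sketch assuming cephadm's ceph-volume wrapper (as invoked at 01:45:22 above) is reused with the inventory subcommand:

    import json
    import subprocess

    # 'ceph-volume inventory --format json' reports, per device, whether it
    # is available and the rejection reasons when it is not (e.g. an LV that
    # already backs an OSD).
    raw = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))
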
Dec 03 01:45:21 compute-0 python3.9[379609]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:21 compute-0 sudo[379606]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:21 compute-0 systemd[1]: libpod-conmon-d7a5b6aa087687d040e2c230ca0e4708622c180ca5d895a4f600cfe1e3a80108.scope: Deactivated successfully.
Dec 03 01:45:21 compute-0 sudo[379208]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:21 compute-0 sudo[379627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:21 compute-0 sudo[379627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:21 compute-0 sudo[379627]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:21 compute-0 sudo[379674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:45:21 compute-0 sudo[379674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:21 compute-0 sudo[379674]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:22 compute-0 sudo[379699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:22 compute-0 sudo[379699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:22 compute-0 sudo[379699]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:22 compute-0 sudo[379724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:45:22 compute-0 sudo[379724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:22 compute-0 podman[379843]: 2025-12-03 01:45:22.771349665 +0000 UTC m=+0.079195467 container create 89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:45:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:22 compute-0 podman[379843]: 2025-12-03 01:45:22.736661501 +0000 UTC m=+0.044507363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:45:22 compute-0 systemd[1]: Started libpod-conmon-89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a.scope.
Dec 03 01:45:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:45:22 compute-0 podman[379843]: 2025-12-03 01:45:22.909369278 +0000 UTC m=+0.217215130 container init 89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:45:22 compute-0 podman[379843]: 2025-12-03 01:45:22.92141242 +0000 UTC m=+0.229258192 container start 89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 01:45:22 compute-0 podman[379843]: 2025-12-03 01:45:22.927144022 +0000 UTC m=+0.234989874 container attach 89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:45:22 compute-0 quizzical_shockley[379889]: 167 167
Dec 03 01:45:22 compute-0 systemd[1]: libpod-89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a.scope: Deactivated successfully.
Dec 03 01:45:22 compute-0 conmon[379889]: conmon 89468005f73ca8e6f268 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a.scope/container/memory.events
Dec 03 01:45:22 compute-0 podman[379843]: 2025-12-03 01:45:22.932790433 +0000 UTC m=+0.240636205 container died 89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:45:22 compute-0 ceph-mon[192821]: pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-86145981554f26ea528464f8dd41fcc7b547a0215e26174ec85c0780757533b5-merged.mount: Deactivated successfully.
Dec 03 01:45:22 compute-0 podman[379843]: 2025-12-03 01:45:22.988304177 +0000 UTC m=+0.296149949 container remove 89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:45:23 compute-0 systemd[1]: libpod-conmon-89468005f73ca8e6f2687248708f3d91d3ce7cb88cecc963c63a41640b93e23a.scope: Deactivated successfully.
Dec 03 01:45:23 compute-0 sudo[379948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvsiimvfmkcwsymxajojvpibdcluyje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726322.4897406-1213-197235019190638/AnsiballZ_command.py'
Dec 03 01:45:23 compute-0 sudo[379948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:23 compute-0 python3.9[379950]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
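
[Annotation] Together with the 01:45:20 pipeline (cat of the edpm-*.nft fragments into nft -c -f -) and the 01:45:21 blockinfile edit validated by nft -c -f %s, the command above completes a check-then-apply pattern: every ruleset is dry-run with -c before anything is loaded. The same pattern, compressed into a sketch that assumes the file layout shown in those commands:

    import subprocess
    from pathlib import Path

    FRAGS = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
             "edpm-update-jumps.nft", "edpm-jumps.nft"]

    # 1) dry-run the concatenated fragments; nft -c only checks, never loads
    ruleset = "".join(Path("/etc/nftables", f).read_text() for f in FRAGS)
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True,
                   check=True)

    # 2) apply the chains only after the check passes
    subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)
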
Dec 03 01:45:23 compute-0 podman[379956]: 2025-12-03 01:45:23.255582836 +0000 UTC m=+0.083281593 container create e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:45:23 compute-0 sudo[379948]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:23 compute-0 podman[379956]: 2025-12-03 01:45:23.217834536 +0000 UTC m=+0.045533363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:45:23 compute-0 systemd[1]: Started libpod-conmon-e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61.scope.
Dec 03 01:45:23 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23c09d3e4dbe941759398db5e69771c2fd842974ec2c352a6608af6f1ce0e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23c09d3e4dbe941759398db5e69771c2fd842974ec2c352a6608af6f1ce0e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23c09d3e4dbe941759398db5e69771c2fd842974ec2c352a6608af6f1ce0e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23c09d3e4dbe941759398db5e69771c2fd842974ec2c352a6608af6f1ce0e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:23 compute-0 podman[379956]: 2025-12-03 01:45:23.437791233 +0000 UTC m=+0.265490060 container init e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:45:23 compute-0 podman[379956]: 2025-12-03 01:45:23.455272508 +0000 UTC m=+0.282971245 container start e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:45:23 compute-0 podman[379956]: 2025-12-03 01:45:23.461607738 +0000 UTC m=+0.289306505 container attach e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:45:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:24 compute-0 sudo[380129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfcruttgagixgjpriitigvdgizfvgvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726323.6390529-1221-208915175232152/AnsiballZ_stat.py'
Dec 03 01:45:24 compute-0 sudo[380129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:24 compute-0 cranky_joliot[379987]: {
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:     "0": [
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:         {
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "devices": [
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "/dev/loop3"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             ],
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_name": "ceph_lv0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_size": "21470642176",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "name": "ceph_lv0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "tags": {
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cluster_name": "ceph",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.crush_device_class": "",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.encrypted": "0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osd_id": "0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.type": "block",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.vdo": "0"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             },
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "type": "block",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "vg_name": "ceph_vg0"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:         }
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:     ],
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:     "1": [
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:         {
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "devices": [
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "/dev/loop4"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             ],
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_name": "ceph_lv1",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_size": "21470642176",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "name": "ceph_lv1",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "tags": {
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cluster_name": "ceph",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.crush_device_class": "",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.encrypted": "0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osd_id": "1",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.type": "block",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.vdo": "0"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             },
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "type": "block",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "vg_name": "ceph_vg1"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:         }
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:     ],
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:     "2": [
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:         {
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "devices": [
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "/dev/loop5"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             ],
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_name": "ceph_lv2",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_size": "21470642176",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "name": "ceph_lv2",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "tags": {
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.cluster_name": "ceph",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.crush_device_class": "",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.encrypted": "0",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osd_id": "2",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.type": "block",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:                 "ceph.vdo": "0"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             },
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "type": "block",
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:             "vg_name": "ceph_vg2"
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:         }
Dec 03 01:45:24 compute-0 cranky_joliot[379987]:     ]
Dec 03 01:45:24 compute-0 cranky_joliot[379987]: }
Dec 03 01:45:24 compute-0 systemd[1]: libpod-e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61.scope: Deactivated successfully.
Dec 03 01:45:24 compute-0 python3.9[380131]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:45:24 compute-0 podman[380134]: 2025-12-03 01:45:24.400493751 +0000 UTC m=+0.068196555 container died e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 01:45:24 compute-0 sudo[380129]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a23c09d3e4dbe941759398db5e69771c2fd842974ec2c352a6608af6f1ce0e32-merged.mount: Deactivated successfully.
Dec 03 01:45:24 compute-0 podman[380134]: 2025-12-03 01:45:24.494363132 +0000 UTC m=+0.162065896 container remove e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:45:24 compute-0 systemd[1]: libpod-conmon-e87cbd949a40427a69ad57915299d9994218005b2c89d6348a6925b650957f61.scope: Deactivated successfully.
Dec 03 01:45:24 compute-0 sudo[379724]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:24 compute-0 sudo[380171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:24 compute-0 sudo[380171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:24 compute-0 sudo[380171]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:24 compute-0 sudo[380219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:45:24 compute-0 sudo[380219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:24 compute-0 sudo[380219]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:24 compute-0 sudo[380273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:24 compute-0 sudo[380273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:24 compute-0 sudo[380273]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:24 compute-0 ceph-mon[192821]: pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:25 compute-0 sudo[380321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:45:25 compute-0 sudo[380321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:25 compute-0 sudo[380398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lduqefskwypseydanpztjwieqtjtysov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726324.732373-1230-113743657581298/AnsiballZ_file.py'
Dec 03 01:45:25 compute-0 sudo[380398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:25 compute-0 python3.9[380408]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:25 compute-0 sudo[380398]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:25 compute-0 podman[380447]: 2025-12-03 01:45:25.642173731 +0000 UTC m=+0.079973259 container create f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:45:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:25 compute-0 systemd[1]: Started libpod-conmon-f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76.scope.
Dec 03 01:45:25 compute-0 podman[380447]: 2025-12-03 01:45:25.615883675 +0000 UTC m=+0.053683213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:45:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:45:25 compute-0 podman[380447]: 2025-12-03 01:45:25.785521715 +0000 UTC m=+0.223321283 container init f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 03 01:45:25 compute-0 podman[380447]: 2025-12-03 01:45:25.801755556 +0000 UTC m=+0.239555074 container start f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:45:25 compute-0 podman[380447]: 2025-12-03 01:45:25.807829158 +0000 UTC m=+0.245628686 container attach f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:45:25 compute-0 zen_dewdney[380479]: 167 167
Dec 03 01:45:25 compute-0 systemd[1]: libpod-f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76.scope: Deactivated successfully.
Dec 03 01:45:25 compute-0 podman[380447]: 2025-12-03 01:45:25.813128908 +0000 UTC m=+0.250928436 container died f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-180baef346c4e30aaaa365c5aaf2fe051b004b8e31560bd0549dd24cf899c3d2-merged.mount: Deactivated successfully.
Dec 03 01:45:25 compute-0 podman[380447]: 2025-12-03 01:45:25.88583201 +0000 UTC m=+0.323631508 container remove f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:45:25 compute-0 systemd[1]: libpod-conmon-f9b949b9a32270cf7c06e2c0a6f71def65eefc96e90d3f9531928235289b6a76.scope: Deactivated successfully.
Dec 03 01:45:25 compute-0 sshd-session[352553]: Connection closed by 192.168.122.30 port 46552
Dec 03 01:45:25 compute-0 sshd-session[352540]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:45:25 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Dec 03 01:45:25 compute-0 systemd[1]: session-57.scope: Consumed 2min 48.334s CPU time.
Dec 03 01:45:25 compute-0 systemd-logind[800]: Session 57 logged out. Waiting for processes to exit.
Dec 03 01:45:25 compute-0 systemd-logind[800]: Removed session 57.
Dec 03 01:45:26 compute-0 podman[380503]: 2025-12-03 01:45:26.151880674 +0000 UTC m=+0.045693347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:45:26 compute-0 podman[380503]: 2025-12-03 01:45:26.358464802 +0000 UTC m=+0.252277455 container create b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:45:26 compute-0 systemd[1]: Started libpod-conmon-b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad.scope.
Dec 03 01:45:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba146e3823af8e477ef2dbcf1d0dcbb8753b1630fc3ee7f46fce86e1c351d51a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba146e3823af8e477ef2dbcf1d0dcbb8753b1630fc3ee7f46fce86e1c351d51a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba146e3823af8e477ef2dbcf1d0dcbb8753b1630fc3ee7f46fce86e1c351d51a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba146e3823af8e477ef2dbcf1d0dcbb8753b1630fc3ee7f46fce86e1c351d51a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:45:26 compute-0 podman[380503]: 2025-12-03 01:45:26.520441895 +0000 UTC m=+0.414254588 container init b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:45:26 compute-0 podman[380503]: 2025-12-03 01:45:26.535825241 +0000 UTC m=+0.429637894 container start b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:45:26 compute-0 podman[380503]: 2025-12-03 01:45:26.542992615 +0000 UTC m=+0.436805338 container attach b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:45:26 compute-0 ceph-mon[192821]: pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]: {
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "osd_id": 2,
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "type": "bluestore"
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:     },
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "osd_id": 1,
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "type": "bluestore"
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:     },
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "osd_id": 0,
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:         "type": "bluestore"
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]:     }
Dec 03 01:45:27 compute-0 upbeat_poincare[380518]: }
Dec 03 01:45:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:27 compute-0 systemd[1]: libpod-b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad.scope: Deactivated successfully.
Dec 03 01:45:27 compute-0 systemd[1]: libpod-b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad.scope: Consumed 1.165s CPU time.
Dec 03 01:45:27 compute-0 podman[380503]: 2025-12-03 01:45:27.692746227 +0000 UTC m=+1.586558890 container died b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:45:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba146e3823af8e477ef2dbcf1d0dcbb8753b1630fc3ee7f46fce86e1c351d51a-merged.mount: Deactivated successfully.
Dec 03 01:45:27 compute-0 podman[380503]: 2025-12-03 01:45:27.764116741 +0000 UTC m=+1.657929364 container remove b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:45:27 compute-0 systemd[1]: libpod-conmon-b0e9fcfaf382643caf2f4bb055009561849e13f3cf71b784ac4b8f46f7cfa7ad.scope: Deactivated successfully.
Dec 03 01:45:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:27 compute-0 sudo[380321]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:45:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:45:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:45:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:45:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3626035e-c4ca-410d-8463-b61b6eff5634 does not exist
Dec 03 01:45:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 72430930-de53-4844-8337-df4891ee4701 does not exist
Dec 03 01:45:27 compute-0 sudo[380561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:45:27 compute-0 sudo[380561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:27 compute-0 sudo[380561]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:28 compute-0 sudo[380586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:45:28 compute-0 sudo[380586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:45:28 compute-0 sudo[380586]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:45:28
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'vms']
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:45:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:45:28 compute-0 ceph-mon[192821]: pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:45:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:45:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:29 compute-0 podman[158098]: time="2025-12-03T01:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:45:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:45:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8110 "" "Go-http-client/1.1"
Dec 03 01:45:29 compute-0 podman[380611]: 2025-12-03 01:45:29.975329363 +0000 UTC m=+0.221737269 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 03 01:45:30 compute-0 ceph-mon[192821]: pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:31 compute-0 openstack_network_exporter[368278]: ERROR   01:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:45:31 compute-0 openstack_network_exporter[368278]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:45:31 compute-0 openstack_network_exporter[368278]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:45:31 compute-0 openstack_network_exporter[368278]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:45:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:45:31 compute-0 openstack_network_exporter[368278]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:45:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:45:31 compute-0 sshd-session[380637]: Accepted publickey for zuul from 192.168.122.30 port 32886 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:45:31 compute-0 systemd-logind[800]: New session 58 of user zuul.
Dec 03 01:45:31 compute-0 systemd[1]: Started Session 58 of User zuul.
Dec 03 01:45:31 compute-0 sshd-session[380637]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:45:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:31 compute-0 podman[380639]: 2025-12-03 01:45:31.748107982 +0000 UTC m=+0.164275429 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:45:31 compute-0 podman[380640]: 2025-12-03 01:45:31.759282829 +0000 UTC m=+0.169214869 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 01:45:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:32 compute-0 ceph-mon[192821]: pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:32 compute-0 podman[380729]: 2025-12-03 01:45:32.900728997 +0000 UTC m=+0.150642733 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git)
Dec 03 01:45:33 compute-0 sudo[380847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otlhgpldcwagwsfvygcfmxyqdmngpduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726331.7931647-24-197922436069203/AnsiballZ_systemd_service.py'
Dec 03 01:45:33 compute-0 sudo[380847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:34 compute-0 python3.9[380849]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:45:34 compute-0 systemd[1]: Reloading.
Dec 03 01:45:34 compute-0 systemd-rc-local-generator[380874]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:45:34 compute-0 systemd-sysv-generator[380878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:45:34 compute-0 sudo[380847]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:34 compute-0 ceph-mon[192821]: pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:35 compute-0 podman[380985]: 2025-12-03 01:45:35.894073577 +0000 UTC m=+0.138103957 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:45:36 compute-0 python3.9[381058]: ansible-ansible.builtin.service_facts Invoked
Dec 03 01:45:36 compute-0 network[381075]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 03 01:45:36 compute-0 network[381076]: 'network-scripts' will be removed from distribution in near future.
Dec 03 01:45:36 compute-0 network[381077]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 03 01:45:36 compute-0 ceph-mon[192821]: pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:38 compute-0 podman[381091]: 2025-12-03 01:45:38.025837796 +0000 UTC m=+0.157929688 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., config_id=edpm, version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:45:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:45:38 compute-0 ceph-mon[192821]: pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:40 compute-0 ceph-mon[192821]: pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:42 compute-0 ceph-mon[192821]: pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:42 compute-0 sudo[381369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skediiyjkxrpxoglyrjpbocqaflqxani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726342.2954252-47-66468276702036/AnsiballZ_systemd_service.py'
Dec 03 01:45:42 compute-0 sudo[381369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:43 compute-0 python3.9[381371]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:45:43 compute-0 sudo[381369]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:44 compute-0 sudo[381522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajmbnfeyrggtaptusefcvixztiaaofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726343.7999637-57-239011583719127/AnsiballZ_file.py'
Dec 03 01:45:44 compute-0 sudo[381522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:44 compute-0 python3.9[381524]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:44 compute-0 sudo[381522]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:44 compute-0 ceph-mon[192821]: pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:46 compute-0 nova_compute[351485]: 2025-12-03 01:45:46.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:46 compute-0 nova_compute[351485]: 2025-12-03 01:45:46.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:45:46 compute-0 nova_compute[351485]: 2025-12-03 01:45:46.613 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:45:46 compute-0 nova_compute[351485]: 2025-12-03 01:45:46.614 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:45:46 compute-0 nova_compute[351485]: 2025-12-03 01:45:46.615 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:45:46 compute-0 nova_compute[351485]: 2025-12-03 01:45:46.617 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:45:46 compute-0 sudo[381675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvogbojwngqryfcxhoeizxopedgqczwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726345.114122-65-204624385260557/AnsiballZ_file.py'
Dec 03 01:45:46 compute-0 sudo[381675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:46 compute-0 ceph-mon[192821]: pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:46 compute-0 python3.9[381687]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:45:47 compute-0 sudo[381675]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:45:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793001908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.184 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:45:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.737 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.740 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4573MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.740 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.741 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:45:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.830 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.831 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:45:47 compute-0 nova_compute[351485]: 2025-12-03 01:45:47.863 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:45:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3793001908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:45:48 compute-0 sudo[381868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjuefmkebgrjjzryqhdeeiylmizffeps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726347.4246228-74-175503541286092/AnsiballZ_command.py'
Dec 03 01:45:48 compute-0 sudo[381868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:45:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4185067906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:45:48 compute-0 nova_compute[351485]: 2025-12-03 01:45:48.377 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:45:48 compute-0 nova_compute[351485]: 2025-12-03 01:45:48.391 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:45:48 compute-0 python3.9[381870]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:45:48 compute-0 sudo[381868]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:48 compute-0 nova_compute[351485]: 2025-12-03 01:45:48.441 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
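Placement treats the inventory reported above as (total - reserved) * allocation_ratio of schedulable capacity. A worked sketch with the logged numbers:

    # Inventory exactly as reported above; placement's capacity rule is
    # usable = (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB ~53.1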
Dec 03 01:45:48 compute-0 nova_compute[351485]: 2025-12-03 01:45:48.444 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:45:48 compute-0 nova_compute[351485]: 2025-12-03 01:45:48.445 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
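The acquire/release pair around _update_available_resource is oslo.concurrency's named-lock idiom; a hedged sketch of the same pattern (the decorated function is a stand-in, not nova's actual method):

    from oslo_concurrency import lockutils

    # Everything that mutates the tracker's resource view runs under the
    # "compute_resources" lock, matching the waited/held lines above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # sample hypervisor stats, push inventory to placement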
Dec 03 01:45:48 compute-0 ceph-mon[192821]: pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4185067906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.440 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.464 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.465 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.465 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.495 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.496 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.497 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.497 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.498 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:49 compute-0 nova_compute[351485]: 2025-12-03 01:45:49.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:49 compute-0 podman[381998]: 2025-12-03 01:45:49.870498996 +0000 UTC m=+0.119720446 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
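Each container health_status line like the one above is podman's scheduled healthcheck firing. The same check can be invoked by hand; a sketch, with the container name and test command taken from the log (exit code 0 means healthy):

    import subprocess

    # Runs the configured test ('/openstack/healthcheck podman_exporter'
    # per the config_data above) inside the container.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "podman_exporter"]
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")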
Dec 03 01:45:50 compute-0 python3.9[382042]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:45:50 compute-0 nova_compute[351485]: 2025-12-03 01:45:50.569 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:50 compute-0 nova_compute[351485]: 2025-12-03 01:45:50.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:45:50 compute-0 nova_compute[351485]: 2025-12-03 01:45:50.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:45:50 compute-0 podman[382129]: 2025-12-03 01:45:50.87076082 +0000 UTC m=+0.119073017 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 03 01:45:50 compute-0 podman[382137]: 2025-12-03 01:45:50.89225917 +0000 UTC m=+0.139242680 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 03 01:45:51 compute-0 ceph-mon[192821]: pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:51 compute-0 sudo[382236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkxqcdfdywipdxawlyrrbggxblmmyzyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726350.494733-92-168975101936814/AnsiballZ_systemd_service.py'
Dec 03 01:45:51 compute-0 sudo[382236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:51 compute-0 python3.9[382238]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 03 01:45:51 compute-0 systemd[1]: Reloading.
Dec 03 01:45:51 compute-0 systemd-rc-local-generator[382260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 03 01:45:51 compute-0 systemd-sysv-generator[382266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 03 01:45:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:51 compute-0 sudo[382236]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:52 compute-0 sudo[382422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmubompnkgouozjxmnosruojwykhwmiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726352.2200632-100-83301008877523/AnsiballZ_command.py'
Dec 03 01:45:52 compute-0 sudo[382422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:52 compute-0 python3.9[382424]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:45:53 compute-0 ceph-mon[192821]: pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:53 compute-0 sudo[382422]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:53 compute-0 sshd-session[382425]: Invalid user redmine from 80.253.31.232 port 50504
Dec 03 01:45:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:53 compute-0 sshd-session[382425]: Received disconnect from 80.253.31.232 port 50504:11: Bye Bye [preauth]
Dec 03 01:45:53 compute-0 sshd-session[382425]: Disconnected from invalid user redmine 80.253.31.232 port 50504 [preauth]
Dec 03 01:45:53 compute-0 sudo[382577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etvlhazpgeaaryiyddlfordmnkdkgwvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726353.3714602-109-189175289200992/AnsiballZ_file.py'
Dec 03 01:45:53 compute-0 sudo[382577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:54 compute-0 python3.9[382579]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:45:54 compute-0 sudo[382577]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:55 compute-0 ceph-mon[192821]: pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:55 compute-0 python3.9[382729]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:45:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:56 compute-0 python3.9[382881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:45:57 compute-0 ceph-mon[192821]: pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:57 compute-0 python3.9[382957]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:45:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:45:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:45:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:45:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:45:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:45:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:45:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:45:58 compute-0 sudo[383107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mniznrtqddkgfqyuwcxpxsylcctetnid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726357.679354-140-114123346920821/AnsiballZ_getent.py'
Dec 03 01:45:58 compute-0 sudo[383107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:45:58 compute-0 python3.9[383109]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
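The getent module call above is a plain NSS passwd lookup for the ceilometer account; the stdlib equivalent, where the KeyError mirrors fail_key=True:

    import pwd

    # Equivalent of "getent passwd ceilometer"; raises KeyError if absent.
    entry = pwd.getpwnam("ceilometer")
    print(entry.pw_uid, entry.pw_gid, entry.pw_dir)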
Dec 03 01:45:58 compute-0 sudo[383107]: pam_unix(sudo:session): session closed for user root
Dec 03 01:45:59 compute-0 ceph-mon[192821]: pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:45:59.605 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:45:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:45:59.607 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:45:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:45:59.607 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:45:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:45:59 compute-0 podman[158098]: time="2025-12-03T01:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:45:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:45:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8102 "" "Go-http-client/1.1"
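The two GET lines above are the podman service answering libpod REST calls over its unix socket. A minimal stdlib sketch of the same query (the socket path is an assumption, matching CONTAINER_HOST in the exporter config earlier in the log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Plain HTTP/1.1 over an AF_UNIX socket instead of TCP.
        def __init__(self, socket_path="/run/podman/podman.sock"):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection()
    # Same endpoint the access log above shows, minus the paging knobs.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")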
Dec 03 01:46:00 compute-0 podman[383211]: 2025-12-03 01:46:00.948438696 +0000 UTC m=+0.191556243 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:46:01 compute-0 ceph-mon[192821]: pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:01 compute-0 python3.9[383286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:01 compute-0 openstack_network_exporter[368278]: ERROR   01:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:46:01 compute-0 openstack_network_exporter[368278]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:46:01 compute-0 openstack_network_exporter[368278]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:46:01 compute-0 openstack_network_exporter[368278]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:46:01 compute-0 openstack_network_exporter[368278]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:46:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:02 compute-0 podman[383337]: 2025-12-03 01:46:02.483077294 +0000 UTC m=+0.126348964 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 03 01:46:02 compute-0 podman[383336]: 2025-12-03 01:46:02.4991475 +0000 UTC m=+0.149385327 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:46:02 compute-0 python3.9[383390]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:03 compute-0 ceph-mon[192821]: pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:03 compute-0 podman[383524]: 2025-12-03 01:46:03.520505002 +0000 UTC m=+0.150120364 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 01:46:03 compute-0 python3.9[383564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:04 compute-0 python3.9[383645]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:05 compute-0 ceph-mon[192821]: pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:05 compute-0 python3.9[383795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:05 compute-0 python3.9[383871]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:06 compute-0 podman[383872]: 2025-12-03 01:46:06.133320694 +0000 UTC m=+0.128855137 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:46:07 compute-0 python3.9[384043]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:46:07 compute-0 ceph-mon[192821]: pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:08 compute-0 python3.9[384195]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:46:08 compute-0 podman[384279]: 2025-12-03 01:46:08.876376312 +0000 UTC m=+0.130323929 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, config_id=edpm, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Dec 03 01:46:09 compute-0 ceph-mon[192821]: pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:09 compute-0 python3.9[384367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:10 compute-0 python3.9[384443]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json _original_basename=ceilometer-agent-ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:11 compute-0 python3.9[384593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:11 compute-0 ceph-mon[192821]: pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:11 compute-0 rsyslogd[188612]: imjournal: 2988 messages lost due to rate-limiting (20000 allowed within 600 seconds)
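The imjournal drop above (2988 messages lost) means this host out-ran rsyslog's journal rate limit of 20000 messages per 600 seconds. The limit is tunable; a sketch of the relevant /etc/rsyslog.conf stanza, with an illustrative higher burst:

    # imjournal rate limiting: the message above reports the active limits
    # (20000 messages per 600 s); raising Ratelimit.Burst trades memory
    # for completeness on chatty hosts.
    module(load="imjournal"
           StateFile="imjournal.state"
           Ratelimit.Interval="600"
           Ratelimit.Burst="50000")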
Dec 03 01:46:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:11 compute-0 python3.9[384669]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:13 compute-0 python3.9[384819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:13 compute-0 ceph-mon[192821]: pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:13 compute-0 sshd-session[384845]: Invalid user redmine from 34.66.72.251 port 55730
Dec 03 01:46:13 compute-0 sshd-session[384845]: Received disconnect from 34.66.72.251 port 55730:11: Bye Bye [preauth]
Dec 03 01:46:13 compute-0 sshd-session[384845]: Disconnected from invalid user redmine 34.66.72.251 port 55730 [preauth]
Dec 03 01:46:14 compute-0 python3.9[384899]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json _original_basename=ceilometer_agent_ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:15 compute-0 ceph-mon[192821]: pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:15 compute-0 sshd-session[384847]: Invalid user localhost from 103.146.202.174 port 38894
Dec 03 01:46:15 compute-0 python3.9[385049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:15 compute-0 sshd-session[384847]: Received disconnect from 103.146.202.174 port 38894:11: Bye Bye [preauth]
Dec 03 01:46:15 compute-0 sshd-session[384847]: Disconnected from invalid user localhost 103.146.202.174 port 38894 [preauth]
Dec 03 01:46:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:17 compute-0 python3.9[385125]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:17 compute-0 ceph-mon[192821]: pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:18 compute-0 sshd-session[385173]: Invalid user openbravo from 173.249.50.59 port 40202
Dec 03 01:46:18 compute-0 python3.9[385277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:18 compute-0 sshd-session[385173]: Received disconnect from 173.249.50.59 port 40202:11: Bye Bye [preauth]
Dec 03 01:46:18 compute-0 sshd-session[385173]: Disconnected from invalid user openbravo 173.249.50.59 port 40202 [preauth]
Dec 03 01:46:18 compute-0 python3.9[385353]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:19 compute-0 ceph-mon[192821]: pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:19 compute-0 python3.9[385504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:20 compute-0 podman[385554]: 2025-12-03 01:46:20.36087578 +0000 UTC m=+0.122712432 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:46:20 compute-0 python3.9[385597]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json _original_basename=kepler.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:21 compute-0 ceph-mon[192821]: pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:21 compute-0 podman[385728]: 2025-12-03 01:46:21.556454513 +0000 UTC m=+0.105302456 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 01:46:21 compute-0 podman[385729]: 2025-12-03 01:46:21.602435925 +0000 UTC m=+0.144223376 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:46:21 compute-0 python3.9[385787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:22 compute-0 python3.9[385866]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
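[annotation] The `ansible.legacy.stat` / `ansible.legacy.file` pair above is Ansible's two-step idempotent deploy: first checksum the destination (note `get_checksum=True`, `checksum_algorithm=sha1`, `follow=False`), then enforce mode and ownership only if needed. A self-contained sketch of what that stat step computes, using the path from the log:

```python
#!/usr/bin/env python3
# Sketch of the ansible-ansible.legacy.stat call above: a sha1 checksum
# plus basic attributes of the deployed file.
import hashlib
import os

PATH = "/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml"

st = os.stat(PATH, follow_symlinks=False)  # follow=False in the log
digest = hashlib.sha1()
with open(PATH, "rb") as fh:
    for chunk in iter(lambda: fh.read(65536), b""):
        digest.update(chunk)
print(f"mode={oct(st.st_mode & 0o7777)} size={st.st_size} sha1={digest.hexdigest()}")
```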
Dec 03 01:46:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:23 compute-0 ceph-mon[192821]: pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:23 compute-0 sudo[386016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgeaadaxshxpunshtbcqmlgmwjofyfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726382.7397594-298-159403192924924/AnsiballZ_file.py'
Dec 03 01:46:23 compute-0 sudo[386016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:23 compute-0 python3.9[386018]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:23 compute-0 sudo[386016]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:24 compute-0 sudo[386168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwitjiavxiozffjwyulrfwpovplgjnuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726383.9237316-306-39616744115236/AnsiballZ_file.py'
Dec 03 01:46:24 compute-0 sudo[386168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:24 compute-0 python3.9[386170]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:24 compute-0 sudo[386168]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:25 compute-0 ceph-mon[192821]: pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:25 compute-0 sudo[386320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnzxqwlyjxyniceytgmwomxptpllvkth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726384.99838-314-252443188109205/AnsiballZ_file.py'
Dec 03 01:46:25 compute-0 sudo[386320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:25 compute-0 python3.9[386322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:46:25 compute-0 sudo[386320]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:27 compute-0 sudo[386472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwdopudalornchxfeftsmjblanxmrcln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726386.0511012-322-204845378471214/AnsiballZ_stat.py'
Dec 03 01:46:27 compute-0 sudo[386472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:27 compute-0 python3.9[386474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:27 compute-0 sudo[386472]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:27 compute-0 ceph-mon[192821]: pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:27 compute-0 sudo[386550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prdpxfsjmrjtpbnzbuihlrobtdbtufod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726386.0511012-322-204845378471214/AnsiballZ_file.py'
Dec 03 01:46:27 compute-0 sudo[386550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:28 compute-0 python3.9[386552]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:46:28 compute-0 sudo[386550]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:28 compute-0 sudo[386553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:28 compute-0 sudo[386553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:28 compute-0 sudo[386553]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:28 compute-0 sudo[386601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:46:28 compute-0 sudo[386601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:28 compute-0 sudo[386601]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:46:28
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'images']
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
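[annotation] The balancer block above shows one optimizer pass: mode `upmap`, misplaced-object ceiling 0.05, and "prepared 0/10 changes", i.e. the upmap optimizer found nothing worth moving across the listed pools. A sketch for confirming this from the CLI, assuming `ceph` and an admin keyring are available on the node (the JSON field names are the usual ones, not taken from this log):

```python
#!/usr/bin/env python3
# Sketch: query the balancer state reported in the mgr log above.
import json
import subprocess

status = json.loads(
    subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                   capture_output=True, text=True, check=True).stdout
)
print("active:", status.get("active"), "mode:", status.get("mode"))
```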
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:46:28 compute-0 sudo[386640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:28 compute-0 sudo[386640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:28 compute-0 sudo[386640]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:28 compute-0 sudo[386684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:46:28 compute-0 sudo[386684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:28 compute-0 sudo[386724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhijmywiudfithldteufpqmjptpootw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726386.0511012-322-204845378471214/AnsiballZ_stat.py'
Dec 03 01:46:28 compute-0 sudo[386724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:28 compute-0 python3.9[386728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
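[annotation] The `rbd_support` lines above are the mgr module reloading its trash-purge and mirror-snapshot schedules for each RBD pool (`vms`, `volumes`, `backups`, `images`); an empty `start_after=` just means it reloads from the beginning. A sketch listing those schedules by hand with the standard `rbd` subcommands:

```python
#!/usr/bin/env python3
# Sketch: list the schedules the rbd_support handlers above are loading.
import subprocess

for pool in ["vms", "volumes", "backups", "images"]:
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                    "--pool", pool, "--recursive"], check=False)
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                    "--pool", pool, "--recursive"], check=False)
```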
Dec 03 01:46:28 compute-0 sudo[386724]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:29 compute-0 sudo[386684]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:46:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7001d72-4bab-46e5-822a-8b7ef92081f5 does not exist
Dec 03 01:46:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6faa3b9d-a720-47f3-88ae-b338e4e56fa2 does not exist
Dec 03 01:46:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b03ba713-46ed-42df-aca2-1f36ef899721 does not exist
Dec 03 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:46:29 compute-0 sudo[386762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:29 compute-0 sudo[386762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:29 compute-0 sudo[386762]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:29 compute-0 ceph-mon[192821]: pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
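[annotation] The repeated dispatches above (`config generate-minimal-conf`, `auth get client.admin` / `client.bootstrap-osd`) appear to be cephadm's reconciliation loop: the mgr regenerates a minimal ceph.conf and fetches keyrings to distribute to managed hosts. The same mon commands can be issued by hand; both are standard `ceph` subcommands and need an admin keyring on this host:

```python
#!/usr/bin/env python3
# Sketch: issue the same mon commands the mgr dispatches above.
import subprocess

print(subprocess.run(["ceph", "config", "generate-minimal-conf"],
                     capture_output=True, text=True, check=True).stdout)
print(subprocess.run(["ceph", "auth", "get", "client.bootstrap-osd"],
                     capture_output=True, text=True, check=True).stdout)
```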
Dec 03 01:46:29 compute-0 sudo[386787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:46:29 compute-0 sudo[386787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:29 compute-0 sudo[386787]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:29 compute-0 sudo[386824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:29 compute-0 sudo[386824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:29 compute-0 sudo[386824]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:29 compute-0 podman[158098]: time="2025-12-03T01:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:46:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:46:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
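[annotation] The `@ - - [...] "GET /v4.9.3/libpod/..."` lines above are access logs from podman's REST service; some local client (here, a Go program) is listing containers and fetching stats over the libpod API. A sketch of the same request over the Unix socket, where the socket path is an assumption (the rootful default) and the endpoint is copied from the log:

```python
#!/usr/bin/env python3
# Sketch: call the libpod containers/json endpoint logged above.
import http.client
import json
import socket

SOCKET = "/run/podman/podman.sock"  # assumed rootful service socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials a Unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path
    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection(SOCKET)
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```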
Dec 03 01:46:29 compute-0 sudo[386867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:46:29 compute-0 sudo[386867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:29 compute-0 sudo[386935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snxdziqhnnsudxcndgesbmulbjdkmjwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726386.0511012-322-204845378471214/AnsiballZ_file.py'
Dec 03 01:46:29 compute-0 sudo[386935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:30 compute-0 python3.9[386937]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:46:30 compute-0 sudo[386935]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.392167072 +0000 UTC m=+0.071776869 container create 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.361137677 +0000 UTC m=+0.040747454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:46:30 compute-0 systemd[1]: Started libpod-conmon-7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e.scope.
Dec 03 01:46:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.539227879 +0000 UTC m=+0.218837666 container init 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.558404986 +0000 UTC m=+0.238014783 container start 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.565473047 +0000 UTC m=+0.245082834 container attach 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:46:30 compute-0 flamboyant_heyrovsky[387040]: 167 167
Dec 03 01:46:30 compute-0 systemd[1]: libpod-7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e.scope: Deactivated successfully.
Dec 03 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.571934052 +0000 UTC m=+0.251543839 container died 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 01:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea0f5c0d52f8b167ce254ce3711824991be8b47b7c668006cbf4fbc1268cfe9-merged.mount: Deactivated successfully.
Dec 03 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.662517766 +0000 UTC m=+0.342127533 container remove 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:46:30 compute-0 systemd[1]: libpod-conmon-7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e.scope: Deactivated successfully.
Dec 03 01:46:30 compute-0 podman[387135]: 2025-12-03 01:46:30.969754363 +0000 UTC m=+0.109170906 container create 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:30.915609338 +0000 UTC m=+0.055025951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:46:31 compute-0 systemd[1]: Started libpod-conmon-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope.
Dec 03 01:46:31 compute-0 sudo[387181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpugjikuhuvrhcnhhjnnpxmzsebtdxob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726390.4521558-322-26506954221937/AnsiballZ_stat.py'
Dec 03 01:46:31 compute-0 sudo[387181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
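[annotation] The kernel lines above warn that these xfs filesystems (mounted here for the container's bind mounts) store inode timestamps as 32-bit seconds, so they are good only until the y2038 limit, 0x7fffffff seconds after the epoch. A one-liner worked example of that limit:

```python
#!/usr/bin/env python3
# 0x7fffffff = 2**31 - 1 seconds past the Unix epoch: the y2038 limit.
from datetime import datetime, timezone

print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```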
Dec 03 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:31.152333733 +0000 UTC m=+0.291750376 container init 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:31.165972402 +0000 UTC m=+0.305388945 container start 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:31.171001295 +0000 UTC m=+0.310417848 container attach 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:46:31 compute-0 podman[387173]: 2025-12-03 01:46:31.238178182 +0000 UTC m=+0.198527656 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 01:46:31 compute-0 python3.9[387194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:46:31 compute-0 sudo[387181]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
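[annotation] The exporter errors above recur on every scrape: it looks for ovs-appctl control sockets for ovsdb-server and ovn-northd, but on a compute node ovn-northd runs elsewhere, so "no control socket files found" is expected, and the `dpif-netdev/*` calls fail because no userspace (netdev) datapath exists here. A sketch that checks for the sockets directly; the paths are the usual defaults (the OVN rundir matches the `/var/lib/openvswitch/ovn:/run/ovn` mount seen earlier in this log), not something the exporter prints:

```python
#!/usr/bin/env python3
# Sketch: look for the daemon control sockets the exporter cannot find.
import glob

for pattern in ["/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"]:
    print(pattern, "->", glob.glob(pattern) or "no control sockets")
```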
Dec 03 01:46:31 compute-0 ceph-mon[192821]: pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:31 compute-0 sudo[387286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyedhakukocgcflqdmnuvkjjgfnacxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726390.4521558-322-26506954221937/AnsiballZ_file.py'
Dec 03 01:46:31 compute-0 sudo[387286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:32 compute-0 python3.9[387288]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/kepler/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/kepler/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 03 01:46:32 compute-0 sudo[387286]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:32 compute-0 upbeat_poincare[387187]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:46:32 compute-0 upbeat_poincare[387187]: --> relative data size: 1.0
Dec 03 01:46:32 compute-0 upbeat_poincare[387187]: --> All data devices are unavailable
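[annotation] The `upbeat_poincare` output above is cephadm's `ceph-volume lvm batch` run against the three LVs passed on line L68081: it counted "0 physical, 3 LVM" data devices and then rejected all of them ("All data devices are unavailable"), which typically means the LVs already carry OSDs from an earlier run, so there is nothing new to create. `--report` previews that decision without changing anything:

```python
#!/usr/bin/env python3
# Sketch: dry-run the same lvm batch and let ceph-volume explain
# why each device is rejected, in JSON, without touching anything.
import subprocess

subprocess.run(
    ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
     "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"],
    check=False,
)
```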
Dec 03 01:46:32 compute-0 systemd[1]: libpod-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope: Deactivated successfully.
Dec 03 01:46:32 compute-0 systemd[1]: libpod-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope: Consumed 1.287s CPU time.
Dec 03 01:46:32 compute-0 podman[387350]: 2025-12-03 01:46:32.585783152 +0000 UTC m=+0.049863033 container died 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec-merged.mount: Deactivated successfully.
Dec 03 01:46:32 compute-0 podman[387365]: 2025-12-03 01:46:32.672676652 +0000 UTC m=+0.095151906 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:46:32 compute-0 podman[387350]: 2025-12-03 01:46:32.685058365 +0000 UTC m=+0.149138186 container remove 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:46:32 compute-0 systemd[1]: libpod-conmon-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope: Deactivated successfully.
Dec 03 01:46:32 compute-0 podman[387351]: 2025-12-03 01:46:32.703066439 +0000 UTC m=+0.128051435 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 01:46:32 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:46:32 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Failed with result 'exit-code'.
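[annotation] The failed transient unit above, `<container-id>-<token>.service`, is how podman runs a single healthcheck under systemd; its exit status 1 is one failed probe, consistent with the `health_failing_streak=1` reported for ceilometer_agent_ipmi two lines earlier (the container is still `healthy` until the configured retries are exhausted). The check can be re-run by hand:

```python
#!/usr/bin/env python3
# Sketch: re-run the failing check. `podman healthcheck run` executes the
# container's configured test and exits non-zero on failure, exactly what
# the transient .service unit above just reported.
import subprocess

rc = subprocess.run(["podman", "healthcheck", "run",
                     "ceilometer_agent_ipmi"]).returncode
print("healthcheck exit status:", rc)
```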
Dec 03 01:46:32 compute-0 sudo[386867]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:32 compute-0 sudo[387438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:32 compute-0 sudo[387438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:32 compute-0 sudo[387438]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:32 compute-0 sudo[387467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:46:32 compute-0 sudo[387467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:32 compute-0 sudo[387467]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:33 compute-0 sudo[387512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:33 compute-0 sudo[387512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:33 compute-0 sudo[387512]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:33 compute-0 sudo[387607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cesriucllkuzzsgjqdudxkbblsnmoaws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726392.585813-355-9793233605212/AnsiballZ_container_config_data.py'
Dec 03 01:46:33 compute-0 sudo[387607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:33 compute-0 sudo[387567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:46:33 compute-0 sudo[387567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:33 compute-0 python3.9[387611]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec 03 01:46:33 compute-0 sudo[387607]: pam_unix(sudo:session): session closed for user root
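[annotation] The `ansible-container_config_data` invocation above (an edpm-ansible module) gathers container definitions matching `config_pattern` under `config_path` and applies `config_overrides`. A rough sketch of that behavior under those assumptions; the real module may differ in details:

```python
#!/usr/bin/env python3
# Sketch: load config files matching the pattern and merge overrides,
# mirroring the parameters logged above (config_overrides is empty there).
import glob
import json
import os

config_path = "/var/lib/openstack/config/telemetry-power-monitoring"
config_pattern = "ceilometer_agent_ipmi.json"
config_overrides = {}

configs = {}
for path in glob.glob(os.path.join(config_path, config_pattern)):
    with open(path) as fh:
        data = json.load(fh)
    data.update(config_overrides)  # overrides win, as the name suggests
    configs[path] = data
print(json.dumps(configs, indent=2))
```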
Dec 03 01:46:33 compute-0 ceph-mon[192821]: pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:33 compute-0 podman[387675]: 2025-12-03 01:46:33.86736076 +0000 UTC m=+0.117291728 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal)
Dec 03 01:46:33 compute-0 podman[387683]: 2025-12-03 01:46:33.883663165 +0000 UTC m=+0.091184143 container create dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:46:33 compute-0 podman[387683]: 2025-12-03 01:46:33.844973771 +0000 UTC m=+0.052494829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:46:33 compute-0 systemd[1]: Started libpod-conmon-dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2.scope.
Dec 03 01:46:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.010068262 +0000 UTC m=+0.217589320 container init dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.029799955 +0000 UTC m=+0.237320953 container start dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.038353699 +0000 UTC m=+0.245874747 container attach dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:46:34 compute-0 wonderful_haslett[387751]: 167 167
Dec 03 01:46:34 compute-0 systemd[1]: libpod-dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2.scope: Deactivated successfully.
Dec 03 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.041593191 +0000 UTC m=+0.249114179 container died dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5676bcf9d04427aa69109cac317c3eecc5f2bd1d35f1798fd0b921df1dcbff1f-merged.mount: Deactivated successfully.
Dec 03 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.1277547 +0000 UTC m=+0.335275698 container remove dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:46:34 compute-0 systemd[1]: libpod-conmon-dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2.scope: Deactivated successfully.
Dec 03 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.418497856 +0000 UTC m=+0.088205918 container create be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.38501197 +0000 UTC m=+0.054720072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:46:34 compute-0 systemd[1]: Started libpod-conmon-be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6.scope.
Dec 03 01:46:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.613495629 +0000 UTC m=+0.283203731 container init be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.644072632 +0000 UTC m=+0.313780684 container start be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.651767382 +0000 UTC m=+0.321475474 container attach be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:46:34 compute-0 sudo[387879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwlwecgqakueadtptlbrjvhgixxlrxyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726393.8736246-364-140678650269553/AnsiballZ_container_config_hash.py'
Dec 03 01:46:34 compute-0 sudo[387879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:35 compute-0 python3.9[387881]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:46:35 compute-0 sudo[387879]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:35 compute-0 focused_kilby[387824]: {
Dec 03 01:46:35 compute-0 focused_kilby[387824]:     "0": [
Dec 03 01:46:35 compute-0 focused_kilby[387824]:         {
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "devices": [
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "/dev/loop3"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             ],
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_name": "ceph_lv0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_size": "21470642176",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "name": "ceph_lv0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "tags": {
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cluster_name": "ceph",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.crush_device_class": "",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.encrypted": "0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osd_id": "0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.type": "block",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.vdo": "0"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             },
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "type": "block",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "vg_name": "ceph_vg0"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:         }
Dec 03 01:46:35 compute-0 focused_kilby[387824]:     ],
Dec 03 01:46:35 compute-0 focused_kilby[387824]:     "1": [
Dec 03 01:46:35 compute-0 focused_kilby[387824]:         {
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "devices": [
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "/dev/loop4"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             ],
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_name": "ceph_lv1",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_size": "21470642176",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "name": "ceph_lv1",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "tags": {
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cluster_name": "ceph",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.crush_device_class": "",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.encrypted": "0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osd_id": "1",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.type": "block",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.vdo": "0"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             },
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "type": "block",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "vg_name": "ceph_vg1"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:         }
Dec 03 01:46:35 compute-0 focused_kilby[387824]:     ],
Dec 03 01:46:35 compute-0 focused_kilby[387824]:     "2": [
Dec 03 01:46:35 compute-0 focused_kilby[387824]:         {
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "devices": [
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "/dev/loop5"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             ],
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_name": "ceph_lv2",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_size": "21470642176",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "name": "ceph_lv2",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "tags": {
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.cluster_name": "ceph",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.crush_device_class": "",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.encrypted": "0",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osd_id": "2",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.type": "block",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:                 "ceph.vdo": "0"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             },
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "type": "block",
Dec 03 01:46:35 compute-0 focused_kilby[387824]:             "vg_name": "ceph_vg2"
Dec 03 01:46:35 compute-0 focused_kilby[387824]:         }
Dec 03 01:46:35 compute-0 focused_kilby[387824]:     ]
Dec 03 01:46:35 compute-0 focused_kilby[387824]: }
Dec 03 01:46:35 compute-0 systemd[1]: libpod-be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6.scope: Deactivated successfully.
Dec 03 01:46:35 compute-0 podman[387786]: 2025-12-03 01:46:35.512405347 +0000 UTC m=+1.182113409 container died be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb-merged.mount: Deactivated successfully.
Dec 03 01:46:35 compute-0 ceph-mon[192821]: pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:35 compute-0 podman[387786]: 2025-12-03 01:46:35.622348694 +0000 UTC m=+1.292056726 container remove be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:46:35 compute-0 systemd[1]: libpod-conmon-be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6.scope: Deactivated successfully.
Dec 03 01:46:35 compute-0 sudo[387567]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:35 compute-0 sudo[387969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:35 compute-0 sudo[387969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:35 compute-0 sudo[387969]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:35 compute-0 sudo[387997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:46:35 compute-0 sudo[387997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:35 compute-0 sudo[387997]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:36 compute-0 sudo[388023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:36 compute-0 sudo[388023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:36 compute-0 sudo[388023]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:36 compute-0 sudo[388071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:46:36 compute-0 sudo[388071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:36 compute-0 sudo[388159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwsipmsmofwiltvzsmyxbqzlofobgos ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726395.5800917-374-153215631262448/AnsiballZ_edpm_container_manage.py'
Dec 03 01:46:36 compute-0 sudo[388159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:36 compute-0 podman[388118]: 2025-12-03 01:46:36.449484695 +0000 UTC m=+0.131823712 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:46:36 compute-0 python3[388170]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:46:36 compute-0 podman[388222]: 2025-12-03 01:46:36.913945248 +0000 UTC m=+0.087848248 container create aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:46:36 compute-0 podman[388222]: 2025-12-03 01:46:36.886942447 +0000 UTC m=+0.060845457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:46:36 compute-0 systemd[1]: Started libpod-conmon-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope.
Dec 03 01:46:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.072125381 +0000 UTC m=+0.246028441 container init aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:46:37 compute-0 python3[388170]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc",
                                                     "Digest": "sha256:b2785dbc3ceaa930dff8068bbb8654af2e0b40a9c2632300641cb8348e9cf43d",
                                                     "RepoTags": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:b2785dbc3ceaa930dff8068bbb8654af2e0b40a9c2632300641cb8348e9cf43d"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-12-01T06:21:56.309143559Z",
                                                     "Config": {
                                                          "User": "ceilometer",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.3",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251125",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 506187128,
                                                     "VirtualSize": 506187128,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/821b44142d5812fced017a49e9cde2155fbb57b89e20e5e28a492c08b7bcc279/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/821b44142d5812fced017a49e9cde2155fbb57b89e20e5e28a492c08b7bcc279/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                               "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                               "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                               "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",
                                                               "sha256:fac9f22f4739f84f681c87b7458e8da1dae9a71bb9d7e632a7076d50c98f8070"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.3",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251125",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "ceilometer",
                                                     "History": [
                                                          {
                                                               "created": "2025-11-25T04:02:36.223494528Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:36.223562059Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:39.054452717Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025707917Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream9",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025744608Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025767729Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025791379Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.02581523Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025867611Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.469442331Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:02.029095017Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:05.672474685Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.113425253Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.532320725Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.370061347Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.805172373Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.259306372Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.625948784Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.028304824Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.423316076Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.801219631Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.239187116Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.70996597Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.147342611Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.5739488Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.006975065Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.421255505Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.066694755Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.475695836Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.8971372Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:18.542651107Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622503041Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622561802Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622578342Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622594423Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:22.080892529Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:15.092312074Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:53.218820537Z",
                                                               "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:56.858075591Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:14:56.244673147Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:14:56.960273159Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:15:37.588899909Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:15:41.197123864Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:19.693367404Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:56.306692765Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-ipmi && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:56.306749376Z",
                                                               "created_by": "/bin/sh -c #(nop) USER ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:59.082745267Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 03 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.084823434 +0000 UTC m=+0.258726404 container start aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.090066023 +0000 UTC m=+0.263969003 container attach aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:46:37 compute-0 wizardly_carson[388258]: 167 167
Dec 03 01:46:37 compute-0 systemd[1]: libpod-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope: Deactivated successfully.
Dec 03 01:46:37 compute-0 conmon[388258]: conmon aa31783a576c1c3e7cd1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope/container/memory.events
Dec 03 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.10186674 +0000 UTC m=+0.275769740 container died aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b214b83176b03f109152f28d42a492ff550fd3df4774690adc21e8d1bec6b528-merged.mount: Deactivated successfully.
Dec 03 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.174795831 +0000 UTC m=+0.348698791 container remove aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:46:37 compute-0 systemd[1]: libpod-conmon-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope: Deactivated successfully.
Dec 03 01:46:37 compute-0 sudo[388159]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.418064332 +0000 UTC m=+0.069889005 container create 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.382720593 +0000 UTC m=+0.034545306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:46:37 compute-0 systemd[1]: Started libpod-conmon-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope.
Dec 03 01:46:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.586794876 +0000 UTC m=+0.238619589 container init 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:46:37 compute-0 ceph-mon[192821]: pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.615328371 +0000 UTC m=+0.267153004 container start 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.619482679 +0000 UTC m=+0.271307422 container attach 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 01:46:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:38 compute-0 sudo[388471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyxtiriuxslpimwulvonmcjrofmmpfcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726397.5039012-382-236383960275475/AnsiballZ_stat.py'
Dec 03 01:46:38 compute-0 sudo[388471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:46:38 compute-0 python3.9[388473]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:46:38 compute-0 sudo[388471]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:38 compute-0 crazy_elion[388364]: {
Dec 03 01:46:38 compute-0 crazy_elion[388364]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "osd_id": 2,
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "type": "bluestore"
Dec 03 01:46:38 compute-0 crazy_elion[388364]:     },
Dec 03 01:46:38 compute-0 crazy_elion[388364]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "osd_id": 1,
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "type": "bluestore"
Dec 03 01:46:38 compute-0 crazy_elion[388364]:     },
Dec 03 01:46:38 compute-0 crazy_elion[388364]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "osd_id": 0,
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:46:38 compute-0 crazy_elion[388364]:         "type": "bluestore"
Dec 03 01:46:38 compute-0 crazy_elion[388364]:     }
Dec 03 01:46:38 compute-0 crazy_elion[388364]: }
Dec 03 01:46:38 compute-0 systemd[1]: libpod-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope: Deactivated successfully.
Dec 03 01:46:38 compute-0 systemd[1]: libpod-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope: Consumed 1.254s CPU time.
Dec 03 01:46:38 compute-0 podman[388580]: 2025-12-03 01:46:38.948239792 +0000 UTC m=+0.052868230 container died 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2-merged.mount: Deactivated successfully.
Dec 03 01:46:39 compute-0 podman[388580]: 2025-12-03 01:46:39.082811932 +0000 UTC m=+0.187440350 container remove 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:46:39 compute-0 systemd[1]: libpod-conmon-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope: Deactivated successfully.
Dec 03 01:46:39 compute-0 sudo[388071]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:46:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:46:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:46:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:46:39 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d7d63e2a-4d69-4afc-855c-4e90914f17c6 does not exist
Dec 03 01:46:39 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d41b1b59-cbeb-4946-bb46-88a1774e101e does not exist
Dec 03 01:46:39 compute-0 podman[388615]: 2025-12-03 01:46:39.175608419 +0000 UTC m=+0.120522240 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, version=9.4, config_id=edpm, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Dec 03 01:46:39 compute-0 sudo[388660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:46:39 compute-0 sudo[388660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:39 compute-0 sudo[388660]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:39 compute-0 sudo[388710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalhbklolmdhyjnawpjomhtpvwtwjhgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726398.744134-391-273758257270087/AnsiballZ_file.py'
Dec 03 01:46:39 compute-0 sudo[388710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:39 compute-0 sudo[388713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:46:39 compute-0 sudo[388713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:46:39 compute-0 sudo[388713]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:39 compute-0 python3.9[388714]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:39 compute-0 sudo[388710]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:39 compute-0 ceph-mon[192821]: pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:46:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:46:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:41 compute-0 ceph-mon[192821]: pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:41 compute-0 sudo[388887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vztvseohlvyekfnwbwlpfvdpersiimij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726399.5975723-391-198645089079497/AnsiballZ_copy.py'
Dec 03 01:46:41 compute-0 sudo[388887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:41 compute-0 python3.9[388889]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726399.5975723-391-198645089079497/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:41 compute-0 sudo[388887]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:43 compute-0 sudo[388963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnjbxbpzsvnzgwjhdleorlzdxcyectgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726399.5975723-391-198645089079497/AnsiballZ_systemd.py'
Dec 03 01:46:43 compute-0 sudo[388963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:43 compute-0 ceph-mon[192821]: pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:43 compute-0 python3.9[388965]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:46:43 compute-0 sudo[388963]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:44 compute-0 ceph-mon[192821]: pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:44 compute-0 sudo[389117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgkixxielypzjxjcaosynuxeayezkves ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726404.335562-413-137525260631606/AnsiballZ_container_config_data.py'
Dec 03 01:46:44 compute-0 sudo[389117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:45 compute-0 python3.9[389119]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec 03 01:46:45 compute-0 sudo[389117]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.601 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.603 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.626 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:46 compute-0 sudo[389269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlctzftrrqiizfbjusjcyyeebkgbvazl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726405.5227969-422-255313749193741/AnsiballZ_container_config_hash.py'
Dec 03 01:46:46 compute-0 sudo[389269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:46 compute-0 python3.9[389271]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 03 01:46:46 compute-0 sudo[389269]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:46 compute-0 ceph-mon[192821]: pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:47 compute-0 sudo[389421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqsrlkmjxgrnjqecyhmapgkthhfavju ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726406.941084-432-88989992228891/AnsiballZ_edpm_container_manage.py'
Dec 03 01:46:47 compute-0 sudo[389421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.645 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.646 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.679 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.680 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.680 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.681 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.681 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:46:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:47 compute-0 python3[389423]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec 03 01:46:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:46:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354050632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.165 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7",
                                                     "Digest": "sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086",
                                                     "RepoTags": [
                                                          "quay.io/sustainable_computing_io/kepler:release-0.7.12"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd",
                                                          "quay.io/sustainable_computing_io/kepler@sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2024-10-15T06:30:56.315982344Z",
                                                     "Config": {
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "container=oci",
                                                               "NVIDIA_VISIBLE_DEVICES=all",
                                                               "NVIDIA_DRIVER_CAPABILITIES=utility",
                                                               "NVIDIA_MIG_MONITOR_DEVICES=all",
                                                               "NVIDIA_MIG_CONFIG_DEVICES=all"
                                                          ],
                                                          "Entrypoint": [
                                                               "/usr/bin/kepler"
                                                          ],
                                                          "Labels": {
                                                               "architecture": "x86_64",
                                                               "build-date": "2024-09-18T21:23:30",
                                                               "com.redhat.component": "ubi9-container",
                                                               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                               "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "distribution-scope": "public",
                                                               "io.buildah.version": "1.29.0",
                                                               "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "io.k8s.display-name": "Red Hat Universal Base Image 9",
                                                               "io.openshift.expose-services": "",
                                                               "io.openshift.tags": "base rhel9",
                                                               "maintainer": "Red Hat, Inc.",
                                                               "name": "ubi9",
                                                               "release": "1214.1726694543",
                                                               "release-0.7.12": "",
                                                               "summary": "Provides the latest release of Red Hat Universal Base Image 9.",
                                                               "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",
                                                               "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",
                                                               "vcs-type": "git",
                                                               "vendor": "Red Hat, Inc.",
                                                               "version": "9.4"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 331545571,
                                                     "VirtualSize": 331545571,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/de1557109facda5eb038045e25371b06ad2baf5cf32c60a7fe84a603bee1e079/diff:/var/lib/containers/storage/overlay/725f7e4e3b8edde36f0bdcd313bbaf872dbe55b162264f8008ee3c09a0b89b66/diff:/var/lib/containers/storage/overlay/573769ea2305456dffa2f0674424aa020c1494387d36bcccb339788fd220d39b/diff:/var/lib/containers/storage/overlay/56a7d751d1997fb4e9fb31bd07356a0c9a7699a9bb524feeb3c7fe2b433b8223/diff:/var/lib/containers/storage/overlay/0560e6233aa93f1e1ac7bed53255811f32dc680869ef7f31dd630efc1203b853/diff:/var/lib/containers/storage/overlay/8d984035cdde48f32944ddaa464ac42d376faabc98415168800b2b8c9aec0930/diff:/var/lib/containers/storage/overlay/e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75",
                                                               "sha256:f947b23b2d0723eac9b608b79e6d48e59d90f74958e05f2762295489e0088e86",
                                                               "sha256:3bf6ab40cc16a103a087232c2c6a1a093dcb6141e70397de57907f5d00741429",
                                                               "sha256:2f5269f1ade14b3b0806305a0b2d3efffe65a187b302789a50ac00bcb815b960",
                                                               "sha256:413f5abb84bd1c03bdfd9c1e0dec8f4be92159c9c6116c4e44247efcdcc6b518",
                                                               "sha256:60c06a2423851502fc43aec0680b91181b0d62b52812c019d3fc66f1546c4529",
                                                               "sha256:323ce4bcad35618db6032dd5bfbd6c8ebb0cde882f730b19296d0ceaf5e39427",
                                                               "sha256:270b3386a8e4a2127a32b007abfea7cb394ae1dee577ee7fefdbb79cd2bea856"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "architecture": "x86_64",
                                                          "build-date": "2024-09-18T21:23:30",
                                                          "com.redhat.component": "ubi9-container",
                                                          "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                          "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "distribution-scope": "public",
                                                          "io.buildah.version": "1.29.0",
                                                          "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "io.k8s.display-name": "Red Hat Universal Base Image 9",
                                                          "io.openshift.expose-services": "",
                                                          "io.openshift.tags": "base rhel9",
                                                          "maintainer": "Red Hat, Inc.",
                                                          "name": "ubi9",
                                                          "release": "1214.1726694543",
                                                          "release-0.7.12": "",
                                                          "summary": "Provides the latest release of Red Hat Universal Base Image 9.",
                                                          "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",
                                                          "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",
                                                          "vcs-type": "git",
                                                          "vendor": "Red Hat, Inc.",
                                                          "version": "9.4"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.oci.image.manifest.v1+json",
                                                     "User": "",
                                                     "History": [
                                                          {
                                                               "created": "2024-09-18T21:36:31.099323493Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:0067eb9f2ee25ab2d666a7639a85fe707b582902a09242761abf30c53664069b in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.031010231Z",
                                                               "created_by": "/bin/sh -c mv -f /etc/yum.repos.d/ubi.repo /tmp || :",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.418413433Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:5b1f650e1376d79fa3a65df4a154ea5166def95154b52c1c1097dfd8fc7d58eb in /tmp/tls-ca-bundle.pem ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.91238548Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD multi:7a67822d03b1a3ddb205cc3fcf7acd9d3180aef5988a5d25887bc0753a7a493b in /etc/yum.repos.d/ ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912448474Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912573716Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-container\"       name=\"ubi9\"       version=\"9.4\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912652474Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912740628Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of Red Hat Universal Base Image 9.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912866673Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL description=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912921304Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.display-name=\"Red Hat Universal Base Image 9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912962586Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.expose-services=\"\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913001888Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.tags=\"base rhel9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913021599Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container oci",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913081151Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913091001Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:33.824802353Z",
                                                               "created_by": "/bin/sh -c rm -rf /var/log/*",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:34.766737128Z",
                                                               "created_by": "/bin/sh -c mkdir -p /var/log/rhsm",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.121320055Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:ed34e436a5c2cc729eecd8b15b94c75028aea1cb18b739cafbb293b5e4ad5dae in /root/buildinfo/content_manifests/ubi9-container-9.4-1214.1726694543.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.525712655Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:d56bb1961538221b52d7e292418978f186bf67b9906771f38530fc3996a9d0d4 in /root/buildinfo/Dockerfile-ubi9-9.4-1214.1726694543 ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.526152969Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"release\"=\"1214.1726694543\" \"distribution-scope\"=\"public\" \"vendor\"=\"Red Hat, Inc.\" \"build-date\"=\"2024-09-18T21:23:30\" \"architecture\"=\"x86_64\" \"vcs-type\"=\"git\" \"vcs-ref\"=\"e309397d02fc53f7fa99db1371b8700eb49f268f\" \"io.k8s.description\"=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\" \"url\"=\"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:36.481014095Z",
                                                               "created_by": "/bin/sh -c rm -f '/etc/yum.repos.d/odcs-3496925-3b364.repo' '/etc/yum.repos.d/rhel-9.4-compose-34ae9.repo'",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:37.364179091Z",
                                                               "created_by": "/bin/sh -c rm -f /tmp/tls-ca-bundle.pem",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:41.423178117Z",
                                                               "created_by": "/bin/sh -c mv -fZ /tmp/ubi.repo /etc/yum.repos.d/ubi.repo || :"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "SHELL [/bin/bash -c]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG INSTALL_DCGM=false",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG INSTALL_HABANA=false",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG TARGETARCH=amd64",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_VISIBLE_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_DRIVER_CAPABILITIES=utility",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_MIG_MONITOR_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_MIG_CONFIG_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c yum -y update-minimal --security --sec-severity=Important --sec-severity=Critical && yum clean all # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:38.991358946Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c set -e -x ;\t\tINSTALL_PKGS=\" \t\t\tlibbpf  \t\t\" ;\t\tyum install -y $INSTALL_PKGS ;\t\t\t\tif [[ \"$TARGETARCH\" == \"amd64\" ]]; then \t\t\tyum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm; \t\t\tyum install -y cpuid; \t\t\tif [[ \"$INSTALL_DCGM\" == \"true\" ]]; then \t\t\t\tdnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo; \t\t\t\tyum install -y datacenter-gpu-manager libnvidia-ml; \t\t\tfi; \t\t\tif [[ \"$INSTALL_HABANA\" == \"true\" ]]; then \t\t\t\trpm -Uvh https://vault.habana.ai/artifactory/rhel/9/9.2/habanalabs-firmware-tools-1.15.1-15.el9.x86_64.rpm --nodeps; \t\t\t\techo /usr/lib/habanalabs > /etc/ld.so.conf.d/habanalabs.conf; \t\t\t\tldconfig; \t\t\tfi; \t\tfi;\t\tyum clean all # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.146511902Z",
                                                               "created_by": "COPY /workspace/_output/bin/kepler /usr/bin/kepler # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.168608119Z",
                                                               "created_by": "COPY /libbpf-source/linux-5.14.0-424.el9/tools/bpf/bpftool/bpftool /usr/bin/bpftool # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.24706386Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c mkdir -p /var/lib/kepler/data # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.299132132Z",
                                                               "created_by": "COPY /workspace/data/cpus.yaml /var/lib/kepler/data/cpus.yaml # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.315982344Z",
                                                               "created_by": "COPY /workspace/data/model_weight /var/lib/kepler/data/model_weight # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.315982344Z",
                                                               "created_by": "ENTRYPOINT [\"/usr/bin/kepler\"]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/sustainable_computing_io/kepler:release-0.7.12"
                                                     ]
                                                }
                                           ]
                                           : quay.io/sustainable_computing_io/kepler:release-0.7.12
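
The JSON above is the tail of the image-history record that edpm_ansible dumps for quay.io/sustainable_computing_io/kepler:release-0.7.12 before managing the container. A minimal sketch of retrieving the same history locally, assuming podman is on PATH and supports the json template function (Python is used for all sketches in this annotated log):

    import json
    import subprocess

    def image_history(image):
        # Dump the layer history that produced the JSON above; assumes the
        # image is already present in local storage.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .History}}", image],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    for entry in image_history("quay.io/sustainable_computing_io/kepler:release-0.7.12"):
        print(entry.get("created"), entry.get("created_by", "")[:80])
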
Dec 03 01:46:48 compute-0 kepler[177915]: I1203 01:46:48.424412       1 exporter.go:218] Received shutdown signal
Dec 03 01:46:48 compute-0 kepler[177915]: I1203 01:46:48.424801       1 exporter.go:226] Exiting...
Dec 03 01:46:48 compute-0 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec 03 01:46:48 compute-0 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Consumed 41.410s CPU time.
Dec 03 01:46:48 compute-0 podman[389495]: 2025-12-03 01:46:48.625761741 +0000 UTC m=+0.287122813 container died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Dec 03 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.timer: Deactivated successfully.
Dec 03 01:46:48 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec 03 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Failed to open /run/systemd/transient/96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: No such file or directory
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.647 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.651 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.652 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.653 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
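
The pair of lockutils lines above is oslo.concurrency serializing the resource tracker: every updater enters through the same named lock. A minimal sketch of the pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Every caller queues on the shared "compute_resources" lock; oslo emits
    # the Acquiring/acquired/released DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # recompute the hypervisor resource view under the lock
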
Dec 03 01:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-userdata-shm.mount: Deactivated successfully.
Dec 03 01:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-56bb532fcb66b2740ea57176a30adf601274f50f260afcb2d3f32777dc3ac537-merged.mount: Deactivated successfully.
Dec 03 01:46:48 compute-0 podman[389495]: 2025-12-03 01:46:48.685910057 +0000 UTC m=+0.347271149 container cleanup 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., name=ubi9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-type=git)
Dec 03 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop kepler
Dec 03 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.timer: Failed to open /run/systemd/transient/96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.timer: No such file or directory
Dec 03 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Failed to open /run/systemd/transient/96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: No such file or directory
Dec 03 01:46:48 compute-0 podman[389520]: 2025-12-03 01:46:48.774017961 +0000 UTC m=+0.067501427 container remove 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container)
Dec 03 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force kepler
Dec 03 01:46:48 compute-0 podman[389523]: Error: no container with name or ID "kepler" found: no such container
Dec 03 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
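
The race above is benign but noisy: the stop path already removed the container, so the follow-up podman rm --force kepler finds nothing and the unit's control process exits 125. A sketch of an idempotent removal that tolerates the same race (podman rm also offers an --ignore flag for this):

    import subprocess

    def remove_container_if_present(name):
        # podman rm --force exits non-zero when the container is already
        # gone (here the unit then reports status=125); treat that case
        # as success instead of failing the service.
        res = subprocess.run(["podman", "rm", "--force", name],
                             capture_output=True, text=True)
        if res.returncode != 0 and "no such container" not in res.stderr.lower():
            raise RuntimeError(res.stderr.strip())

    remove_container_if_present("kepler")
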
Dec 03 01:46:48 compute-0 ceph-mon[192821]: pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2354050632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:46:48 compute-0 podman[389544]: Error: no container with name or ID "kepler" found: no such container
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.841 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.842 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Dec 03 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Failed with result 'exit-code'.
Dec 03 01:46:48 compute-0 podman[389543]: 2025-12-03 01:46:48.864942686 +0000 UTC m=+0.067371744 container create c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 01:46:48 compute-0 podman[389543]: 2025-12-03 01:46:48.829306709 +0000 UTC m=+0.031735857 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 03 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
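
The PODMAN-CONTAINER-DEBUG line above logs the entire podman create invocation on one line. Restated as an argument list for readability (values are copied from that line, not invented; the --label arguments are elided here):

    create_cmd = [
        "podman", "create", "--name", "kepler",
        "--conmon-pidfile", "/run/kepler.pid",
        "--env", "ENABLE_GPU=true",
        "--env", "EXPOSE_CONTAINER_METRICS=true",
        "--env", "ENABLE_PROCESS_METRICS=true",
        "--env", "EXPOSE_VM_METRICS=true",
        "--env", "EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false",
        "--env", "LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1",
        "--healthcheck-command", "/openstack/healthcheck kepler",
        "--log-driver", "journald", "--log-level", "info",
        "--network", "host", "--privileged=True",
        "--publish", "8888:8888",
        "--volume", "/lib/modules:/lib/modules:ro",
        "--volume", "/run/libvirt:/run/libvirt:shared,ro",
        "--volume", "/sys:/sys",
        "--volume", "/proc:/proc",
        "--volume", "/var/lib/openstack/healthchecks/kepler:/openstack:ro,z",
        "quay.io/sustainable_computing_io/kepler:release-0.7.12", "-v=2",
    ]
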
Dec 03 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.941 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Scheduled restart job, restart counter is at 1.
Dec 03 01:46:48 compute-0 systemd[1]: Stopped kepler container.
Dec 03 01:46:48 compute-0 systemd[1]: Starting kepler container...
Dec 03 01:46:48 compute-0 systemd[1]: Started libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope.
Dec 03 01:46:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:49 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.
Dec 03 01:46:49 compute-0 podman[389567]: 2025-12-03 01:46:49.080161916 +0000 UTC m=+0.183017152 container init c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9)
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.092 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.092 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.112 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
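
The inventory dict logged above fixes the node's schedulable capacity. Placement treats usable capacity as (total - reserved) * allocation_ratio, so for this node that works out as below (a worked illustration, not nova's exact code path):

    # Usable capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1
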
Dec 03 01:46:49 compute-0 kepler[389583]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:46:49 compute-0 podman[389567]: 2025-12-03 01:46:49.120879388 +0000 UTC m=+0.223734654 container start c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, managed_by=edpm_ansible, name=ubi9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9)
Dec 03 01:46:49 compute-0 podman[389580]: kepler
Dec 03 01:46:49 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start kepler
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.138398       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.138829       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.138999       1 config.go:295] kernel version: 5.14
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.140593       1 power.go:78] Unable to obtain power, use estimate method
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.140646       1 redfish.go:169] failed to get redfish credential file path
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.141438       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.141483       1 power.go:79] using none to obtain power
Dec 03 01:46:49 compute-0 kepler[389583]: E1203 01:46:49.141517       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 03 01:46:49 compute-0 kepler[389583]: E1203 01:46:49.141609       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 03 01:46:49 compute-0 kepler[389583]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.145904       1 exporter.go:84] Number of CPUs: 8
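
The repeated WARNING about /sys/devices/system/cpu/cpu0/online is expected: the boot CPU is typically not hot-pluggable, so its online file does not exist, and the probe has to fall back to treating the CPU as online. A sketch of that fallback, which reproduces the count of 8 reported above:

    import os

    def cpu_online(cpu):
        # cpuN/online is absent for CPUs that cannot be hot-unplugged
        # (typically cpu0), which is what triggers the WARNING above;
        # a missing file means the CPU is online.
        try:
            with open(f"/sys/devices/system/cpu/cpu{cpu}/online") as f:
                return f.read().strip() == "1"
        except FileNotFoundError:
            return True

    print(sum(cpu_online(c) for c in range(os.cpu_count())))  # 8 on this node
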
Dec 03 01:46:49 compute-0 systemd[1]: Started kepler container.
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.146 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.171 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:46:49 compute-0 podman[389602]: 2025-12-03 01:46:49.253312358 +0000 UTC m=+0.111583376 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible)
Dec 03 01:46:49 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-2387456e96475ea9.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:46:49 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-2387456e96475ea9.service: Failed with result 'exit-code'.
Dec 03 01:46:49 compute-0 sudo[389421]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:46:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561605973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.661 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
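
Nova's storage probe above runs ceph df --format=json through oslo_concurrency.processutils (0.490s here). A minimal sketch of the same call, assuming the total_avail_bytes key in ceph df's JSON output:

    import json
    from oslo_concurrency import processutils

    # processutils.execute returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit; nova logs the elapsed time.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print(json.loads(out)["stats"]["total_avail_bytes"])
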
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.671 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.693 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.696 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.696 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:46:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.806735       1 watcher.go:83] Using in cluster k8s config
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.806801       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 03 01:46:49 compute-0 kepler[389583]: E1203 01:46:49.806913       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
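
The watcher lines above show kepler probing for an in-cluster Kubernetes API and disabling the watcher when the service-account environment variables are absent, as they are inside a plain podman container. The same check in Python with the kubernetes client (a sketch; kepler itself is Go):

    import os
    from kubernetes import config

    # In-cluster config requires the service-account env vars, which a
    # podman container on an EDPM node does not have.
    if os.getenv("KUBERNETES_SERVICE_HOST") and os.getenv("KUBERNETES_SERVICE_PORT"):
        config.load_incluster_config()
    else:
        print("k8s APIserver watcher not enabled")  # cf. the manager.go line above
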
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.813794       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.813868       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.820995       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.821058       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 03 01:46:49 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2561605973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.841789       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.841856       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.841880       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857078       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857150       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857160       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857168       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857179       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857201       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857395       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857441       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857476       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857502       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857793       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 03 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.858684       1 exporter.go:208] Started Kepler in 720.841417ms
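
With the exporter listening on 0.0.0.0:8888, the node's power metrics are scrapeable over HTTP. A quick probe, assuming the conventional Prometheus /metrics path (the path itself is not stated in the log):

    import urllib.request

    # Scrape the exporter the log says is listening on 0.0.0.0:8888.
    with urllib.request.urlopen("http://127.0.0.1:8888/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("kepler_node_"):
                print(line)
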
Dec 03 01:46:50 compute-0 sudo[389830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhjglymbsslfujnuqnatrndhptjtzjuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726409.6318445-440-190157633896067/AnsiballZ_stat.py'
Dec 03 01:46:50 compute-0 sudo[389830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:50 compute-0 python3.9[389832]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:46:50 compute-0 sudo[389830]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.628 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.629 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.630 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.808 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.809 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.809 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.810 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
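
The run_periodic_tasks lines come from oslo.service's periodic-task framework, which drives methods such as _poll_volume_usage on a timer. A stripped-down sketch of how such a task is declared (simplified relative to nova's ComputeManager):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_periodic_tasks() invokes each decorated method on its
        # interval, producing the "Running periodic task ..." DEBUG
        # lines seen above.
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass
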
Dec 03 01:46:50 compute-0 ceph-mon[192821]: pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:50 compute-0 podman[389859]: 2025-12-03 01:46:50.867425523 +0000 UTC m=+0.120641433 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:46:51 compute-0 sudo[390007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplngrdplanwkgbmvkajettwzbgxpmto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726410.9498036-449-26157984553117/AnsiballZ_file.py'
Dec 03 01:46:51 compute-0 sudo[390007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:46:51 compute-0 python3.9[390009]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:51 compute-0 sudo[390007]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:51 compute-0 podman[390010]: 2025-12-03 01:46:51.898342819 +0000 UTC m=+0.142202938 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:46:51 compute-0 podman[390011]: 2025-12-03 01:46:51.920142851 +0000 UTC m=+0.154241572 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
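
The health_status fields above (healthy, failing streak 0) can also be queried on demand instead of waiting for the timer-driven healthcheck events. A sketch, assuming a podman version that exposes .State.Health (older releases used .State.Healthcheck):

    import subprocess

    # On-demand view of the health state podman logged above.
    print(subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", "kepler"],
        check=True, capture_output=True, text=True).stdout.strip())
    # "starting" right after start, then "healthy"
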
Dec 03 01:46:52 compute-0 sudo[390195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwowfqhrsvsbcnxefafvyjrjjyyifkpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726411.8950243-449-264263953144331/AnsiballZ_copy.py'
Dec 03 01:46:52 compute-0 sudo[390195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:52 compute-0 ceph-mon[192821]: pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:52 compute-0 python3.9[390197]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726411.8950243-449-264263953144331/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:46:52 compute-0 sudo[390195]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:53 compute-0 sudo[390271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbaphlzkeaomrwhpeudoslyalunssnzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726411.8950243-449-264263953144331/AnsiballZ_systemd.py'
Dec 03 01:46:53 compute-0 sudo[390271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:53 compute-0 python3.9[390273]: ansible-systemd Invoked with state=started name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 03 01:46:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:53 compute-0 sudo[390271]: pam_unix(sudo:session): session closed for user root
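[annotation] The ansible-copy and ansible-systemd invocations above install /etc/systemd/system/edpm_kepler.service and then enable and start it. Reconstructed as playbook tasks this is roughly the following; the module arguments are the ones recorded in the log, while the task names, the local src filename, and become: true are assumptions:

    - name: Install edpm_kepler unit file
      ansible.builtin.copy:
        src: edpm_kepler.service          # hypothetical source file name
        dest: /etc/systemd/system/edpm_kepler.service
        owner: root
        group: root
        mode: "0644"
      become: true

    - name: Enable and start edpm_kepler
      ansible.builtin.systemd:
        name: edpm_kepler.service
        state: started
        enabled: true
      become: true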
Dec 03 01:46:54 compute-0 sudo[390425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtcmrwukecjxmljdkbvwmzvfkkyuhyyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726414.257012-469-92658866411058/AnsiballZ_systemd.py'
Dec 03 01:46:54 compute-0 sudo[390425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:54 compute-0 ceph-mon[192821]: pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:55 compute-0 python3.9[390427]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:46:55 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 03 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.298 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 03 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.401 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 03 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.401 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 03 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.402 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 03 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.413 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
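[annotation] The cotyledon lines above are its standard graceful-shutdown path: the master process catches SIGTERM, forwards SIGTERM to each service worker (here AgentManager worker 0, pid 12), waits for them to exit, and logs "Shutdown finish". A minimal sketch of that pattern using only the public cotyledon API; AgentManager here is a stand-in, not ceilometer's actual class:

    import time

    import cotyledon

    class AgentManager(cotyledon.Service):
        # Stand-in for ceilometer's polling AgentManager.
        def run(self):
            while True:        # poll loop; SIGTERM interrupts it
                time.sleep(1)

        def terminate(self):
            # Called on SIGTERM; corresponds to the "graceful exiting
            # of service AgentManager(0)" line in the log above.
            pass

    sm = cotyledon.ServiceManager()
    sm.add(AgentManager)       # one worker, like AgentManager(0)
    sm.run()                   # blocks; SIGTERM triggers the shutdown sequence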
Dec 03 01:46:55 compute-0 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:46:55 compute-0 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Consumed 3.715s CPU time.
Dec 03 01:46:55 compute-0 podman[390431]: 2025-12-03 01:46:55.585027881 +0000 UTC m=+0.362922876 container died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:46:55 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.timer: Deactivated successfully.
Dec 03 01:46:55 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec 03 01:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-userdata-shm.mount: Deactivated successfully.
Dec 03 01:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889-merged.mount: Deactivated successfully.
Dec 03 01:46:55 compute-0 podman[390431]: 2025-12-03 01:46:55.655582464 +0000 UTC m=+0.433477419 container cleanup ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:46:55 compute-0 podman[390431]: ceilometer_agent_ipmi
Dec 03 01:46:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:55 compute-0 podman[390459]: ceilometer_agent_ipmi
Dec 03 01:46:55 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 03 01:46:55 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 03 01:46:55 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 03 01:46:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 03 01:46:56 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec 03 01:46:56 compute-0 podman[390471]: 2025-12-03 01:46:56.09913381 +0000 UTC m=+0.272771524 container init ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + sudo -E kolla_set_configs
Dec 03 01:46:56 compute-0 podman[390471]: 2025-12-03 01:46:56.146280076 +0000 UTC m=+0.319917810 container start ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:46:56 compute-0 podman[390471]: ceilometer_agent_ipmi
Dec 03 01:46:56 compute-0 sudo[390492]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 03 01:46:56 compute-0 sudo[390492]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:46:56 compute-0 sudo[390492]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:46:56 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 03 01:46:56 compute-0 sudo[390425]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Validating config file
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying service configuration files
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Writing out command to execute
Dec 03 01:46:56 compute-0 sudo[390492]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: ++ cat /run_command
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + ARGS=
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + sudo kolla_copy_cacerts
Dec 03 01:46:56 compute-0 sudo[390514]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 03 01:46:56 compute-0 sudo[390514]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:46:56 compute-0 sudo[390514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:46:56 compute-0 sudo[390514]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + [[ ! -n '' ]]
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + . kolla_extend_start
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + umask 0022
Dec 03 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
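[annotation] The kolla_set_configs output above is driven by the config.json bind-mounted into the container (ceilometer-agent-ipmi.json in the volume list): with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS it deletes and re-copies each listed file, then writes the final command to /run_command for kolla_start to exec. A sketch of that file's shape, reconstructed from the copy operations and command in the log; the owner and perm values are assumptions, the paths and command are the ones logged:

    {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/custom.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
             "owner": "ceilometer", "perm": "0600"}
        ]
    }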
Dec 03 01:46:56 compute-0 podman[390493]: 2025-12-03 01:46:56.278473397 +0000 UTC m=+0.113255482 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:46:56 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:46:56 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Failed with result 'exit-code'.
Dec 03 01:46:56 compute-0 ceph-mon[192821]: pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
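[annotation] The DEBUG block below is oslo.config dumping every registered option at startup, which it does here because debug is True (note the debug and log_options values it prints). A minimal sketch of the mechanism, assuming only the public oslo.config API; the single option shown is an example, not ceilometer's real option set:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.IntOpt("batch_size", default=50)])
    conf(args=[], project="demo")

    # Emits the "****" banner, "Configuration options gathered from:",
    # and one "name = value" line per option, as in the block below.
    conf.log_opt_values(LOG, logging.DEBUG)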
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.190 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.193 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.194 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.220 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpp0in8a1p/privsep.sock']
Dec 03 01:46:57 compute-0 sudo[390644]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpp0in8a1p/privsep.sock
Dec 03 01:46:57 compute-0 sudo[390644]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 03 01:46:57 compute-0 sudo[390644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 03 01:46:57 compute-0 sudo[390673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnzywfmsbtbdoqdjbmsvgawlleezrvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726416.7613325-477-263724776405823/AnsiballZ_systemd.py'
Dec 03 01:46:57 compute-0 sudo[390673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:57 compute-0 python3.9[390675]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 03 01:46:57 compute-0 systemd[1]: Stopping kepler container...
Dec 03 01:46:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:46:57 compute-0 kepler[389583]: I1203 01:46:57.849904       1 exporter.go:218] Received shutdown signal
Dec 03 01:46:57 compute-0 kepler[389583]: I1203 01:46:57.851757       1 exporter.go:226] Exiting...
Dec 03 01:46:57 compute-0 sudo[390644]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.902 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.903 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp0in8a1p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.789 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.796 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.800 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 03 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.801 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.045 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.045 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.047 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.050 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.050 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.059 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.059 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.059 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.060 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.060 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.060 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 systemd[1]: libpod-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 systemd[1]: libpod-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Consumed 1.060s CPU time.
Dec 03 01:46:58 compute-0 podman[390680]: 2025-12-03 01:46:58.064079635 +0000 UTC m=+0.295050599 container stop c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 conmon[389583]: conmon c095e31a3195bbbfc2a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope/container/memory.events
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.065 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.065 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.065 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.067 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.067 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.068 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.068 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.069 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.070 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 podman[390680]: 2025-12-03 01:46:58.071215339 +0000 UTC m=+0.302186223 container died c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.071 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.072 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.072 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-2387456e96475ea9.timer: Deactivated successfully.
Dec 03 01:46:58 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.085 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.085 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.095 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.095 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.095 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 03 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.099 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 03 01:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-userdata-shm.mount: Deactivated successfully.
Dec 03 01:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5fee0233e554adad84cb1c80469097f5dd66633b17399db4710d1dfcd084939-merged.mount: Deactivated successfully.
Dec 03 01:46:58 compute-0 podman[390680]: 2025-12-03 01:46:58.120993679 +0000 UTC m=+0.351964563 container cleanup c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 03 01:46:58 compute-0 podman[390680]: kepler
Dec 03 01:46:58 compute-0 systemd[1]: libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec 03 01:46:58 compute-0 podman[390712]: kepler
Dec 03 01:46:58 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec 03 01:46:58 compute-0 systemd[1]: Stopped kepler container.
Dec 03 01:46:58 compute-0 systemd[1]: Starting kepler container...
Dec 03 01:46:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:46:58 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.
Dec 03 01:46:58 compute-0 podman[390725]: 2025-12-03 01:46:58.347302057 +0000 UTC m=+0.137193476 container init c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., release-0.7.12=)
Dec 03 01:46:58 compute-0 kepler[390740]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:46:58 compute-0 podman[390725]: 2025-12-03 01:46:58.394598766 +0000 UTC m=+0.184490155 container start c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, architecture=x86_64, io.buildah.version=1.29.0)
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.397210       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.397432       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.397457       1 config.go:295] kernel version: 5.14
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.398303       1 power.go:78] Unable to obtain power, use estimate method
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.398346       1 redfish.go:169] failed to get redfish credential file path
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.399007       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.399044       1 power.go:79] using none to obtain power
Dec 03 01:46:58 compute-0 kepler[390740]: E1203 01:46:58.399068       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 03 01:46:58 compute-0 kepler[390740]: E1203 01:46:58.399100       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 03 01:46:58 compute-0 podman[390725]: kepler
Dec 03 01:46:58 compute-0 kepler[390740]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 03 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.402079       1 exporter.go:84] Number of CPUs: 8
Dec 03 01:46:58 compute-0 systemd[1]: Started kepler container.
Dec 03 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:46:58 compute-0 sudo[390673]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:58 compute-0 podman[390750]: 2025-12-03 01:46:58.48480635 +0000 UTC m=+0.083135893 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, version=9.4, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Dec 03 01:46:58 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-dbf87a852cde7e.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:46:58 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-dbf87a852cde7e.service: Failed with result 'exit-code'.
Dec 03 01:46:58 compute-0 ceph-mon[192821]: pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.925361       1 watcher.go:83] Using in cluster k8s config
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.925394       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 03 01:46:58 compute-0 kepler[390740]: E1203 01:46:58.925437       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.932567       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.932593       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.938896       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.939181       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.951500       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.951637       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.951664       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971481       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971605       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971615       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971625       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971636       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971659       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971808       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971895       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.972178       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.972321       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.972472       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 03 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.974131       1 exporter.go:208] Started Kepler in 577.281941ms
Dec 03 01:46:59 compute-0 sudo[390931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsiscfggspjsimtvrutgfudnaopoknti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726418.6652212-485-264850999458286/AnsiballZ_find.py'
Dec 03 01:46:59 compute-0 sudo[390931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:46:59 compute-0 python3.9[390933]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 03 01:46:59 compute-0 sudo[390931]: pam_unix(sudo:session): session closed for user root
Dec 03 01:46:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:46:59.607 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:46:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:46:59.609 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:46:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:46:59.609 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:46:59 compute-0 podman[158098]: time="2025-12-03T01:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:46:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:46:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42581 "" "Go-http-client/1.1"
Dec 03 01:46:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8108 "" "Go-http-client/1.1"
Dec 03 01:47:00 compute-0 sshd-session[390958]: Invalid user guest from 146.190.144.138 port 37386
Dec 03 01:47:00 compute-0 sshd-session[390958]: Received disconnect from 146.190.144.138 port 37386:11: Bye Bye [preauth]
Dec 03 01:47:00 compute-0 sshd-session[390958]: Disconnected from invalid user guest 146.190.144.138 port 37386 [preauth]
Dec 03 01:47:00 compute-0 sudo[391085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khpmbjipqzvhagskaqgncdakxtpqbuex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726420.052715-495-35678776075821/AnsiballZ_podman_container_info.py'
Dec 03 01:47:00 compute-0 sudo[391085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:00 compute-0 ceph-mon[192821]: pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:01 compute-0 python3.9[391087]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 03 01:47:01 compute-0 sudo[391085]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:47:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:47:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:47:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:01 compute-0 podman[391169]: 2025-12-03 01:47:01.951725232 +0000 UTC m=+0.201384937 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 01:47:02 compute-0 sudo[391272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjipownrdstdrwhlmmxzzdxicpacejf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726421.6215909-503-103608464395670/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:02 compute-0 sudo[391272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:02 compute-0 python3.9[391274]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:02 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:47:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:02 compute-0 podman[391275]: 2025-12-03 01:47:02.808618862 +0000 UTC m=+0.167322065 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 01:47:02 compute-0 podman[391275]: 2025-12-03 01:47:02.846009829 +0000 UTC m=+0.204712982 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 03 01:47:02 compute-0 podman[391287]: 2025-12-03 01:47:02.896187111 +0000 UTC m=+0.148473248 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:47:02 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:47:02 compute-0 sudo[391272]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:02 compute-0 ceph-mon[192821]: pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:03 compute-0 sudo[391473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjreaurbutzqfudogmsrnkpxnhbfsxah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726423.184736-511-154412092878816/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:03 compute-0 sudo[391473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:03 compute-0 python3.9[391475]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:04 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec 03 01:47:04 compute-0 podman[391476]: 2025-12-03 01:47:04.096645683 +0000 UTC m=+0.131601056 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 03 01:47:04 compute-0 podman[391476]: 2025-12-03 01:47:04.108341556 +0000 UTC m=+0.143296999 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:47:04 compute-0 sudo[391473]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:04 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec 03 01:47:04 compute-0 podman[391491]: 2025-12-03 01:47:04.252826409 +0000 UTC m=+0.142378083 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec 03 01:47:04 compute-0 ceph-mon[192821]: pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:05 compute-0 sudo[391675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsdzcmibviqpjxlacravemkbidittqpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726424.475142-519-162814112530235/AnsiballZ_file.py'
Dec 03 01:47:05 compute-0 sudo[391675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:05 compute-0 python3.9[391677]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:05 compute-0 sudo[391675]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:06 compute-0 sudo[391827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzlobxzuqyoskqvzpphorwrrtcamjfxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726425.8601093-528-277508859473662/AnsiballZ_podman_container_info.py'
Dec 03 01:47:06 compute-0 sudo[391827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:06 compute-0 python3.9[391829]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 03 01:47:06 compute-0 sudo[391827]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:06 compute-0 podman[391839]: 2025-12-03 01:47:06.873858616 +0000 UTC m=+0.117385651 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:47:06 compute-0 ceph-mon[192821]: pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:08 compute-0 sudo[392014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wujuaqsafsragsptjkewvmmcjnqqtept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726427.609238-536-153408289244051/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:08 compute-0 sudo[392014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:08 compute-0 python3.9[392016]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:08 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:47:08 compute-0 podman[392017]: 2025-12-03 01:47:08.592096701 +0000 UTC m=+0.163290130 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:47:08 compute-0 podman[392017]: 2025-12-03 01:47:08.628117959 +0000 UTC m=+0.199311328 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:47:08 compute-0 sudo[392014]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:08 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:47:08 compute-0 ceph-mon[192821]: pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:10 compute-0 sudo[392198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naxlenezsrtyvlujeanpvyexzeqfgqlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726429.9982922-544-112836013136700/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:10 compute-0 sudo[392198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:10 compute-0 python3.9[392200]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:11 compute-0 sshd-session[392146]: Invalid user mcserver from 80.253.31.232 port 48014
Dec 03 01:47:11 compute-0 ceph-mon[192821]: pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:11 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec 03 01:47:11 compute-0 podman[392201]: 2025-12-03 01:47:11.090722244 +0000 UTC m=+0.174369636 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:47:11 compute-0 podman[392201]: 2025-12-03 01:47:11.126259838 +0000 UTC m=+0.209907150 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:47:11 compute-0 sshd-session[392146]: Received disconnect from 80.253.31.232 port 48014:11: Bye Bye [preauth]
Dec 03 01:47:11 compute-0 sshd-session[392146]: Disconnected from invalid user mcserver 80.253.31.232 port 48014 [preauth]
Dec 03 01:47:11 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec 03 01:47:11 compute-0 sudo[392198]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:12 compute-0 sudo[392380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbbspcelaixlwnelvgwcnbdebbksxew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726431.4871418-552-39935694156267/AnsiballZ_file.py'
Dec 03 01:47:12 compute-0 sudo[392380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:12 compute-0 python3.9[392382]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:12 compute-0 sudo[392380]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:13 compute-0 ceph-mon[192821]: pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:13 compute-0 sudo[392532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reegclwrlvoshmhczcuvvokfjhxffmbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726432.6972303-561-52332323312173/AnsiballZ_podman_container_info.py'
Dec 03 01:47:13 compute-0 sudo[392532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:13 compute-0 python3.9[392534]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 03 01:47:13 compute-0 sudo[392532]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:14 compute-0 sudo[392695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmmgjcexihmkjiawgwztbhgzngftnaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726433.9610114-569-157572081653579/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:14 compute-0 sudo[392695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:14 compute-0 python3.9[392697]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:14 compute-0 systemd[1]: Started libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope.
Dec 03 01:47:15 compute-0 podman[392698]: 2025-12-03 01:47:15.008251822 +0000 UTC m=+0.158613127 container exec 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:47:15 compute-0 ceph-mon[192821]: pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:15 compute-0 podman[392698]: 2025-12-03 01:47:15.042646404 +0000 UTC m=+0.193007729 container exec_died 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:47:15 compute-0 systemd[1]: libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec 03 01:47:15 compute-0 sudo[392695]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:15 compute-0 sudo[392876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjiyrqskyurupfxuyzhoezcxenoyusd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726435.4396896-577-101939569047067/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:15 compute-0 sudo[392876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:16 compute-0 python3.9[392878]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:16 compute-0 systemd[1]: Started libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope.
Dec 03 01:47:16 compute-0 podman[392879]: 2025-12-03 01:47:16.409021191 +0000 UTC m=+0.161402947 container exec 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:47:16 compute-0 podman[392879]: 2025-12-03 01:47:16.445039188 +0000 UTC m=+0.197420884 container exec_died 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:47:16 compute-0 systemd[1]: libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec 03 01:47:16 compute-0 sudo[392876]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:17 compute-0 ceph-mon[192821]: pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:17 compute-0 sudo[393059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwdlhbvzilrtlrgvkyxfobtsazfpyfmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726436.846854-585-256569749746780/AnsiballZ_file.py'
Dec 03 01:47:17 compute-0 sudo[393059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:17 compute-0 python3.9[393061]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:17 compute-0 sudo[393059]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:18 compute-0 sshd-session[393085]: Invalid user super from 34.66.72.251 port 47700
Dec 03 01:47:18 compute-0 sshd-session[393085]: Received disconnect from 34.66.72.251 port 47700:11: Bye Bye [preauth]
Dec 03 01:47:18 compute-0 sshd-session[393085]: Disconnected from invalid user super 34.66.72.251 port 47700 [preauth]
Dec 03 01:47:18 compute-0 sudo[393213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdhiacccadwrfyhrtueiajbqarwkvezh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726438.075015-594-194702805588558/AnsiballZ_podman_container_info.py'
Dec 03 01:47:18 compute-0 sudo[393213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:18 compute-0 python3.9[393215]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 03 01:47:18 compute-0 sudo[393213]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:19 compute-0 ceph-mon[192821]: pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.500 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.501 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.501 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.502 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:47:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:19 compute-0 sudo[393378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkpocsfuebpfbonamhfcjjjymanhdrrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726439.3162012-602-20671800817032/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:19 compute-0 sudo[393378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:20 compute-0 python3.9[393380]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:20 compute-0 systemd[1]: Started libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope.
Dec 03 01:47:20 compute-0 podman[393381]: 2025-12-03 01:47:20.238635441 +0000 UTC m=+0.129742463 container exec 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:47:20 compute-0 podman[393381]: 2025-12-03 01:47:20.274216406 +0000 UTC m=+0.165323368 container exec_died 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:47:20 compute-0 systemd[1]: libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec 03 01:47:20 compute-0 sudo[393378]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:21 compute-0 ceph-mon[192821]: pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:21 compute-0 podman[393532]: 2025-12-03 01:47:21.882926136 +0000 UTC m=+0.137335388 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:47:22 compute-0 sudo[393580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qctcqiwhwuunngnchnbmrggwkdzxzjyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726441.2908041-610-131249860312995/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:22 compute-0 sudo[393580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:22 compute-0 podman[393583]: 2025-12-03 01:47:22.191847611 +0000 UTC m=+0.118188564 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:47:22 compute-0 podman[393582]: 2025-12-03 01:47:22.203186474 +0000 UTC m=+0.133936872 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec 03 01:47:22 compute-0 python3.9[393584]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:22 compute-0 systemd[1]: Started libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope.
Dec 03 01:47:22 compute-0 podman[393620]: 2025-12-03 01:47:22.456677127 +0000 UTC m=+0.148533719 container exec 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:47:22 compute-0 podman[393620]: 2025-12-03 01:47:22.492268313 +0000 UTC m=+0.184124925 container exec_died 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:47:22 compute-0 systemd[1]: libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec 03 01:47:22 compute-0 sudo[393580]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:23 compute-0 ceph-mon[192821]: pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:24 compute-0 sudo[393798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxjubgfycyudgjcqqannfygpyfqhqlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726443.4940274-618-150952506911152/AnsiballZ_file.py'
Dec 03 01:47:24 compute-0 sudo[393798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:24 compute-0 python3.9[393800]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:24 compute-0 sudo[393798]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:25 compute-0 ceph-mon[192821]: pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:25 compute-0 sudo[393950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpfmwavtmalttydaobumdwoucankteoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726444.785027-627-252628258683924/AnsiballZ_podman_container_info.py'
Dec 03 01:47:25 compute-0 sudo[393950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:26 compute-0 python3.9[393952]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 03 01:47:26 compute-0 sudo[393950]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:26 compute-0 podman[394047]: 2025-12-03 01:47:26.871929437 +0000 UTC m=+0.116097244 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:47:26 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Main process exited, code=exited, status=1/FAILURE
Dec 03 01:47:26 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Failed with result 'exit-code'.
Dec 03 01:47:27 compute-0 sudo[394134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdinjignwzzilldptsraumjavjpzuhie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726446.5120401-635-34758144267317/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:27 compute-0 sudo[394134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:27 compute-0 ceph-mon[192821]: pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:27 compute-0 python3.9[394136]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:27 compute-0 systemd[1]: Started libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope.
Dec 03 01:47:27 compute-0 podman[394137]: 2025-12-03 01:47:27.504230409 +0000 UTC m=+0.156894758 container exec 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec 03 01:47:27 compute-0 podman[394137]: 2025-12-03 01:47:27.539761072 +0000 UTC m=+0.192425441 container exec_died 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec 03 01:47:27 compute-0 sudo[394134]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:27 compute-0 systemd[1]: libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec 03 01:47:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:47:28
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:47:28 compute-0 sudo[394317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boytyljnmyhgkeocnyxjajnihcwoaxhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726447.8994823-643-234897950959949/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:28 compute-0 sudo[394317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:28 compute-0 python3.9[394319]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:47:28 compute-0 systemd[1]: Started libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope.
Dec 03 01:47:28 compute-0 podman[394320]: 2025-12-03 01:47:28.878019317 +0000 UTC m=+0.169717063 container exec 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:47:28 compute-0 podman[394326]: 2025-12-03 01:47:28.90649781 +0000 UTC m=+0.162409735 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, name=ubi9)
Dec 03 01:47:28 compute-0 podman[394320]: 2025-12-03 01:47:28.911852693 +0000 UTC m=+0.203550359 container exec_died 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public)
Dec 03 01:47:28 compute-0 systemd[1]: libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec 03 01:47:28 compute-0 sudo[394317]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:29 compute-0 ceph-mon[192821]: pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:29 compute-0 podman[158098]: time="2025-12-03T01:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:47:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 03 01:47:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8098 "" "Go-http-client/1.1"
Dec 03 01:47:29 compute-0 sudo[394518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdfpopwyqomzazceekvlbbhtssryljsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726449.3149776-651-228187717890812/AnsiballZ_file.py'
Dec 03 01:47:29 compute-0 sudo[394518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:30 compute-0 python3.9[394520]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:30 compute-0 sudo[394518]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:31 compute-0 sudo[394672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xppgtpnpgttcpdmoixivyhkxqqyohcsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726450.5086782-660-251522456962375/AnsiballZ_podman_container_info.py'
Dec 03 01:47:31 compute-0 sudo[394672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:31 compute-0 ceph-mon[192821]: pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:31 compute-0 python3.9[394674]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 03 01:47:31 compute-0 sshd-session[394545]: Invalid user kapsch from 173.249.50.59 port 38452
Dec 03 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:47:31 compute-0 sudo[394672]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:31 compute-0 sshd-session[394545]: Received disconnect from 173.249.50.59 port 38452:11: Bye Bye [preauth]
Dec 03 01:47:31 compute-0 sshd-session[394545]: Disconnected from invalid user kapsch 173.249.50.59 port 38452 [preauth]
Dec 03 01:47:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:32 compute-0 sudo[394852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwlkvzrjqzanifkzqjtzqllfsjdoppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726451.8261397-668-177444806839320/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:32 compute-0 sudo[394852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:32 compute-0 podman[394811]: 2025-12-03 01:47:32.53423227 +0000 UTC m=+0.216772776 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 03 01:47:32 compute-0 python3.9[394857]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:32 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec 03 01:47:32 compute-0 podman[394864]: 2025-12-03 01:47:32.85704635 +0000 UTC m=+0.163317091 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:47:32 compute-0 podman[394864]: 2025-12-03 01:47:32.893967324 +0000 UTC m=+0.200238035 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:47:32 compute-0 sudo[394852]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:32 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:47:33 compute-0 podman[394892]: 2025-12-03 01:47:33.14828132 +0000 UTC m=+0.135528808 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec 03 01:47:33 compute-0 ceph-mon[192821]: pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:33 compute-0 sudo[395060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcgbcpuexkhhmxjrdqubiqafkciinjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726453.2732859-676-54516025647594/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:33 compute-0 sudo[395060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:34 compute-0 python3.9[395062]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:34 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec 03 01:47:34 compute-0 podman[395063]: 2025-12-03 01:47:34.296315727 +0000 UTC m=+0.146544842 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:47:34 compute-0 podman[395063]: 2025-12-03 01:47:34.33182582 +0000 UTC m=+0.182054925 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:47:34 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec 03 01:47:34 compute-0 sudo[395060]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:34 compute-0 podman[395093]: 2025-12-03 01:47:34.602780302 +0000 UTC m=+0.154391707 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible)
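
The health_status=healthy events above are podman's periodic healthcheck runs being mirrored into the journal. A minimal sketch of reading the same state from the host, assuming podman is on PATH and using the container name taken from the log line above:

    #!/usr/bin/env python3
    # Minimal sketch: read a container's health state, the same value these
    # journal events report. Assumes podman is installed and the named
    # container (from the log above) exists.
    import json
    import subprocess

    def health_status(name: str) -> str:
        # `podman inspect` emits a JSON array with one object per container;
        # .State.Health.Status is where podman records healthy/unhealthy.
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        return state.get("Health", {}).get("Status", "no healthcheck")

    if __name__ == "__main__":
        print(health_status("openstack_network_exporter"))
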
Dec 03 01:47:35 compute-0 ceph-mon[192821]: pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:36 compute-0 sudo[395263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykijlmowrppeelotfiqdrlllbvfkyhal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726454.7142282-684-276320982614041/AnsiballZ_file.py'
Dec 03 01:47:36 compute-0 sudo[395263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:36 compute-0 python3.9[395265]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
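
The ansible-ansible.builtin.file task above applies owner/group 42405 and mode 0700 to the healthcheck directory recursively. A rough Python equivalent of state=directory with recurse=True, under the assumption it runs as root, with the path and ids copied from the log line:

    #!/usr/bin/env python3
    # Rough equivalent of the ansible.builtin.file invocation logged above
    # (state=directory, recurse=True): create the directory if needed, then
    # apply owner/group/mode to it and everything beneath it. Run as root.
    import os

    PATH = "/var/lib/openstack/healthchecks/ceilometer_agent_ipmi"  # from the log
    UID = GID = 42405   # ceilometer's uid/gid, per the log
    MODE = 0o700

    os.makedirs(PATH, exist_ok=True)
    for dirpath, _dirnames, filenames in os.walk(PATH):
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            os.chown(entry, UID, GID)
            os.chmod(entry, MODE)
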
Dec 03 01:47:36 compute-0 sudo[395263]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:37 compute-0 ceph-mon[192821]: pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:37 compute-0 sudo[395431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrbzpagmxtdnqvzgnphogonctyuzhvur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726456.701681-693-44614123331000/AnsiballZ_podman_container_info.py'
Dec 03 01:47:37 compute-0 sudo[395431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:37 compute-0 podman[395391]: 2025-12-03 01:47:37.325852609 +0000 UTC m=+0.165813672 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
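
The node_exporter config_data above restricts the systemd collector with --collector.systemd.unit-include. node_exporter anchors these patterns, so re.fullmatch is a reasonable approximation of which units it will scrape; a small sketch using the exact pattern from the log:

    #!/usr/bin/env python3
    # Sketch: which systemd units the node_exporter flags above would collect.
    # node_exporter anchors --collector.systemd.unit-include patterns, so
    # fullmatch approximates its behaviour. Unit names below are examples.
    import re

    UNIT_INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service", "rsyslog.service"]:
        verdict = "collected" if UNIT_INCLUDE.fullmatch(unit) else "skipped"
        print(f"{unit}: {verdict}")
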
Dec 03 01:47:37 compute-0 python3.9[395441]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 03 01:47:37 compute-0 sudo[395431]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
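
The pg_autoscaler rows above are internally consistent: each pool's pg target is capacity_ratio x bias x (mon_target_pg_per_osd x OSD count), and the result is then quantized to a power of two. With the default mon_target_pg_per_osd of 100 and the three OSDs this cluster prepares later in the log, that factor is 300, which reproduces the logged values:

    #!/usr/bin/env python3
    # Worked check of the pg_autoscaler rows above:
    #   pg target = capacity_ratio * bias * (mon_target_pg_per_osd * n_osds)
    # mon_target_pg_per_osd defaults to 100; n_osds is assumed to be 3,
    # matching the three ceph_lv* devices prepared later in this log.
    ratio_base = 100 * 3

    rows = [
        # (pool, capacity_ratio, bias) copied from the log lines above
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        (".rgw.root",          2.5436283128215145e-07, 1.0),
    ]
    for pool, ratio, bias in rows:
        # matches the logged 'pg target' values, e.g. 0.0021557... for .mgr
        print(f"{pool}: pg target = {ratio * bias * ratio_base}")
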
Dec 03 01:47:39 compute-0 sudo[395605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btbcvtsextuflnxgxejpjlbyimbyvltr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726458.5362039-701-119601172448285/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:39 compute-0 sudo[395605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:39 compute-0 ceph-mon[192821]: pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:39 compute-0 sshd-session[395342]: Received disconnect from 14.103.201.7 port 45022:11: Bye Bye [preauth]
Dec 03 01:47:39 compute-0 sshd-session[395342]: Disconnected from authenticating user root 14.103.201.7 port 45022 [preauth]
Dec 03 01:47:39 compute-0 python3.9[395607]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
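
This podman_container_exec task (`id -u`, with `id -g` following a moment later) is how the playbook discovers which account a container runs as before chown'ing its host-side healthcheck directory: kepler returns 0, so its directory later gets owner=0, just as ceilometer's got 42405 above. A sketch of the same probe, assuming podman and the container name from the log:

    #!/usr/bin/env python3
    # Sketch of the uid/gid probe these podman_container_exec tasks perform:
    # run `id -u` / `id -g` inside the container, then use the result to own
    # the host-side healthcheck directory.
    import subprocess

    def exec_in(name: str, *cmd: str) -> str:
        return subprocess.run(["podman", "exec", name, *cmd],
                              check=True, capture_output=True, text=True).stdout.strip()

    uid = exec_in("kepler", "id", "-u")
    gid = exec_in("kepler", "id", "-g")
    print(f"kepler runs as {uid}:{gid}")  # the log later applies owner=0, group=0
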
Dec 03 01:47:39 compute-0 sshd-session[395454]: Invalid user kapsch from 103.146.202.174 port 38560
Dec 03 01:47:39 compute-0 sudo[395614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:39 compute-0 sudo[395614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:39 compute-0 sudo[395614]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:39 compute-0 systemd[1]: Started libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope.
Dec 03 01:47:39 compute-0 podman[395608]: 2025-12-03 01:47:39.568390895 +0000 UTC m=+0.196344743 container exec c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64)
Dec 03 01:47:39 compute-0 podman[395608]: 2025-12-03 01:47:39.605318049 +0000 UTC m=+0.233271897 container exec_died c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, config_id=edpm, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, container_name=kepler)
Dec 03 01:47:39 compute-0 sshd-session[395454]: Received disconnect from 103.146.202.174 port 38560:11: Bye Bye [preauth]
Dec 03 01:47:39 compute-0 sshd-session[395454]: Disconnected from invalid user kapsch 103.146.202.174 port 38560 [preauth]
Dec 03 01:47:39 compute-0 sudo[395647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:47:39 compute-0 sudo[395647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:39 compute-0 sudo[395647]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:39 compute-0 sudo[395605]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:39 compute-0 systemd[1]: libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec 03 01:47:39 compute-0 sudo[395686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:39 compute-0 sudo[395686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:39 compute-0 sudo[395686]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:39 compute-0 sudo[395735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:47:39 compute-0 sudo[395735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:47:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.0 total, 600.0 interval
                                            Cumulative writes: 4595 writes, 20K keys, 4595 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                            Cumulative WAL: 4595 writes, 4595 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1286 writes, 5584 keys, 1286 commit groups, 1.0 writes per commit group, ingest: 8.45 MB, 0.01 MB/s
                                            Interval WAL: 1286 writes, 1286 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.9      0.24              0.11        11    0.022       0      0       0.0       0.0
                                              L6      1/0    6.77 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    124.4    102.1      0.68              0.33        10    0.068     42K   5258       0.0       0.0
                                             Sum      1/0    6.77 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     92.1     99.5      0.92              0.43        21    0.044     42K   5258       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     96.7     96.8      0.37              0.15         8    0.046     18K   2057       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    124.4    102.1      0.68              0.33        10    0.068     42K   5258       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     92.7      0.24              0.11        10    0.024       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1800.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.021, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.9 seconds
                                            Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 6.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(418,6.00 MB,1.94772%) FilterBlock(22,127.17 KB,0.0403218%) IndexBlock(22,238.08 KB,0.0754864%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
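
The multi-line RocksDB "DUMPING STATS" block above is a single journal entry from ceph-mon. The headline numbers can be pulled out with a small parser; a sketch whose regex is tailored to the "Cumulative writes" line format shown here, fed for example from a journalctl pipe:

    #!/usr/bin/env python3
    # Sketch: extract the headline write statistics from a RocksDB stats dump
    # like the one above. The regex matches the line format shown here:
    # "Cumulative writes: 4595 writes, 20K keys, ... ingest: 0.03 GB, 0.02 MB/s"
    import re
    import sys

    PATTERN = re.compile(
        r"Cumulative writes: (\S+) writes, (\S+) keys, .*"
        r"ingest: ([\d.]+) GB, ([\d.]+) MB/s"
    )

    for line in sys.stdin:  # e.g. journalctl output piped in
        m = PATTERN.search(line)
        if m:
            writes, keys, ingest_gb, rate = m.groups()
            print(f"writes={writes} keys={keys} ingest={ingest_gb}GB rate={rate}MB/s")
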
Dec 03 01:47:40 compute-0 sudo[395735]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:40 compute-0 sudo[395916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkhmregvzrhywtzgedcwiczgsuuczpuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726459.9857078-709-279014434684339/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:40 compute-0 sudo[395916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:47:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 118534c7-b330-4167-b02a-b4580bf46380 does not exist
Dec 03 01:47:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4b92bc31-cff7-475d-9bde-6eb5517db1be does not exist
Dec 03 01:47:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 93ec1a13-eccc-4be5-9055-228be06da025 does not exist
Dec 03 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
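
The audit entries above show the mgr (cephadm) asking the mon for a minimal client config and keyrings before preparing OSDs. The same mon command is available from the CLI; a sketch, assuming a host with /etc/ceph/ceph.conf and an admin keyring:

    #!/usr/bin/env python3
    # Sketch: issue one of the mon commands the mgr is logged dispatching
    # above. Assumes ceph CLI access with an admin keyring.
    import subprocess

    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(minimal_conf)
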
Dec 03 01:47:40 compute-0 sudo[395919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:40 compute-0 sudo[395919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:40 compute-0 sudo[395919]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:40 compute-0 python3.9[395918]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:40 compute-0 sudo[395945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:47:40 compute-0 sudo[395945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:40 compute-0 sudo[395945]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:40 compute-0 systemd[1]: Started libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope.
Dec 03 01:47:41 compute-0 podman[395944]: 2025-12-03 01:47:41.009615877 +0000 UTC m=+0.153881572 container exec c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release-0.7.12=, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm)
Dec 03 01:47:41 compute-0 podman[395944]: 2025-12-03 01:47:41.062168257 +0000 UTC m=+0.206433942 container exec_died c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, config_id=edpm, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:47:41 compute-0 sudo[395983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:41 compute-0 sudo[395983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:41 compute-0 sudo[395983]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:41 compute-0 sudo[395916]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:41 compute-0 systemd[1]: libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec 03 01:47:41 compute-0 sudo[396022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:47:41 compute-0 sudo[396022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:41 compute-0 ceph-mon[192821]: pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.758040322 +0000 UTC m=+0.081000882 container create 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:47:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:41 compute-0 systemd[1]: Started libpod-conmon-25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0.scope.
Dec 03 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.727578003 +0000 UTC m=+0.050538563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:47:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.866903188 +0000 UTC m=+0.189863818 container init 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.878108588 +0000 UTC m=+0.201069148 container start 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.882983967 +0000 UTC m=+0.205944527 container attach 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:47:41 compute-0 clever_mcclintock[396228]: 167 167
Dec 03 01:47:41 compute-0 systemd[1]: libpod-25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0.scope: Deactivated successfully.
Dec 03 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.887472745 +0000 UTC m=+0.210433305 container died 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a6016610766394af1f833362e11c9f76359db8ff31b84a0429bfb6b23703c20-merged.mount: Deactivated successfully.
Dec 03 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.94334931 +0000 UTC m=+0.266309870 container remove 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
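
The throwaway clever_mcclintock container above lives for well under a second and prints "167 167", which looks like cephadm's uid/gid probe (167 is the ceph user and group id on these images) run before ceph-volume. The exact probe command is an assumption here; a sketch of an equivalent check:

    #!/usr/bin/env python3
    # Sketch of what the short-lived container above appears to do: report the
    # uid/gid owning ceph's data inside the image ("167 167"). The precise
    # command cephadm runs is an assumption, not taken from this log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)  # expected: "167 167"
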
Dec 03 01:47:41 compute-0 sudo[396261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doopvbgaevyrttjeemsjnzvozkonmmoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726461.4032805-717-76314579143891/AnsiballZ_file.py'
Dec 03 01:47:41 compute-0 sudo[396261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:41 compute-0 systemd[1]: libpod-conmon-25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0.scope: Deactivated successfully.
Dec 03 01:47:42 compute-0 python3.9[396270]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:42 compute-0 sudo[396261]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.239596012 +0000 UTC m=+0.116408742 container create ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.204301515 +0000 UTC m=+0.081114315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:47:42 compute-0 systemd[1]: Started libpod-conmon-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope.
Dec 03 01:47:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
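
The xfs warnings above are the kernel noting that these mounts use 32-bit inode timestamps, representable only up to 0x7fffffff seconds after the Unix epoch. That constant decodes to the classic year-2038 boundary:

    #!/usr/bin/env python3
    # The kernel's "supports timestamps until 2038 (0x7fffffff)" limit, decoded:
    # 0x7fffffff seconds after the Unix epoch is the year-2038 boundary.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00
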
Dec 03 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.412940589 +0000 UTC m=+0.289753339 container init ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.435663297 +0000 UTC m=+0.312476027 container start ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.441672108 +0000 UTC m=+0.318484928 container attach ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:47:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:43 compute-0 sudo[396452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfldbsrfurylcsrtleopmnedgcmzhsfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726462.5726442-726-209324033790876/AnsiballZ_podman_container_info.py'
Dec 03 01:47:43 compute-0 sudo[396452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:43 compute-0 ceph-mon[192821]: pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:43 compute-0 python3.9[396456]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 03 01:47:43 compute-0 sudo[396452]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:43 compute-0 sad_allen[396321]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:47:43 compute-0 sad_allen[396321]: --> relative data size: 1.0
Dec 03 01:47:43 compute-0 sad_allen[396321]: --> All data devices are unavailable
Dec 03 01:47:43 compute-0 systemd[1]: libpod-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope: Deactivated successfully.
Dec 03 01:47:43 compute-0 systemd[1]: libpod-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope: Consumed 1.248s CPU time.
Dec 03 01:47:43 compute-0 podman[396281]: 2025-12-03 01:47:43.768115495 +0000 UTC m=+1.644928245 container died ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:47:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4-merged.mount: Deactivated successfully.
Dec 03 01:47:43 compute-0 podman[396281]: 2025-12-03 01:47:43.86781618 +0000 UTC m=+1.744628900 container remove ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:47:43 compute-0 systemd[1]: libpod-conmon-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope: Deactivated successfully.
Dec 03 01:47:43 compute-0 sudo[396022]: pam_unix(sudo:session): session closed for user root
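
The sad_allen run that just ended was `ceph-volume lvm batch` against the three pre-created LVs from the command line above ("0 physical, 3 LVM"); it stops after "All data devices are unavailable", most likely because those LVs already carry OSDs, and cephadm follows up immediately with `lvm list --format json` (the sudo entry just below) to reconcile. A sketch of reading that JSON, assuming the usual output shape of a mapping from OSD id to a list of device records:

    #!/usr/bin/env python3
    # Sketch: summarise `ceph-volume lvm list --format json`, the follow-up
    # command visible just below in this log. Assumes the usual output shape:
    # {"<osd_id>": [ {device record with "lv_path", "type", ...}, ... ], ...}
    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(f"osd.{osd_id}: {dev.get('lv_path', '?')} ({dev.get('type', '?')})")
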
Dec 03 01:47:44 compute-0 sudo[396579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:44 compute-0 sudo[396579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:44 compute-0 sudo[396579]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:44 compute-0 sudo[396627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:47:44 compute-0 sudo[396627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:44 compute-0 sudo[396627]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:44 compute-0 sudo[396676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:44 compute-0 sudo[396676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:44 compute-0 sudo[396676]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:44 compute-0 sudo[396728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpqeraukgewhbttzgiupapuhxginpfaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726463.7305965-734-212634946796967/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:44 compute-0 sudo[396728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:44 compute-0 sudo[396727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:47:44 compute-0 sudo[396727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:44 compute-0 python3.9[396734]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:44 compute-0 systemd[1]: Started libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope.
Dec 03 01:47:44 compute-0 podman[396755]: 2025-12-03 01:47:44.681880058 +0000 UTC m=+0.130796163 container exec 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 01:47:44 compute-0 podman[396755]: 2025-12-03 01:47:44.716258779 +0000 UTC m=+0.165174854 container exec_died 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:47:44 compute-0 systemd[1]: libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope: Deactivated successfully.
Dec 03 01:47:44 compute-0 sudo[396728]: pam_unix(sudo:session): session closed for user root
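[editor's note] The zuul sudo/AnsiballZ pair above is Ansible's containers.podman.podman_container_exec module probing which uid the ovn_metadata_agent container runs as (command=id -u, detach=False, tty=False). Under the hood this amounts to a plain `podman exec`, which is why systemd starts a transient libpod-conmon scope for the exec session and deactivates it moments later; the exec_died event at 01:47:44.716 marks the exec session exiting, not the container itself dying. A minimal equivalent, assuming only that podman is on PATH:

    import subprocess

    # Equivalent of the podman_container_exec task above: run `id -u`
    # inside the ovn_metadata_agent container (no detach, no TTY).
    result = subprocess.run(
        ["podman", "exec", "ovn_metadata_agent", "id", "-u"],
        capture_output=True, text=True, check=True,
    )
    uid = int(result.stdout.strip())
    print(uid)  # expected 0 here: the container's config_data sets user 'root'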
Dec 03 01:47:44 compute-0 podman[396831]: 2025-12-03 01:47:44.936342438 +0000 UTC m=+0.074881717 container create aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:47:44 compute-0 podman[396831]: 2025-12-03 01:47:44.90311591 +0000 UTC m=+0.041655169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:47:45 compute-0 systemd[1]: Started libpod-conmon-aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd.scope.
Dec 03 01:47:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.089448957 +0000 UTC m=+0.227988316 container init aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.10147663 +0000 UTC m=+0.240015909 container start aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:47:45 compute-0 competent_kepler[396859]: 167 167
Dec 03 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.108655885 +0000 UTC m=+0.247195164 container attach aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:47:45 compute-0 systemd[1]: libpod-aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd.scope: Deactivated successfully.
Dec 03 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.110098046 +0000 UTC m=+0.248637325 container died aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-44c02aa9353c722f044076831909d5a85bc9a8b3d80f4a190fbb2fc13c1f9684-merged.mount: Deactivated successfully.
Dec 03 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.185024624 +0000 UTC m=+0.323563883 container remove aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:47:45 compute-0 systemd[1]: libpod-conmon-aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd.scope: Deactivated successfully.
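[editor's note] competent_kepler is a throwaway container that lives for roughly 9 ms: create, init, start, attach, a single line of output ("167 167"), died, remove. The output looks like the uid/gid pair of the ceph user baked into the image (167 is the fixed ceph uid/gid on CentOS/RHEL builds), which cephadm needs before it touches files under /var/lib/ceph on the host. The following is a plausible reconstruction of such a probe, not something the log confirms; the stat invocation is an assumption:

    import subprocess

    # Hypothetical uid/gid probe in the style of the competent_kepler run:
    # a one-shot --rm container whose only output is "<uid> <gid>".
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = map(int, out.split())
    assert (uid, gid) == (167, 167)  # matches the container output above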
Dec 03 01:47:45 compute-0 ceph-mon[192821]: pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.434235545 +0000 UTC m=+0.080446737 container create 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.407200743 +0000 UTC m=+0.053411945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:47:45 compute-0 systemd[1]: Started libpod-conmon-6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e.scope.
Dec 03 01:47:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.611519153 +0000 UTC m=+0.257730345 container init 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.623460104 +0000 UTC m=+0.269671296 container start 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.629598159 +0000 UTC m=+0.275809331 container attach 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:47:45 compute-0 sudo[397028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgazpivuwyqssmvwtjuezqfvyiwgspbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726465.101578-742-133413330401415/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:45 compute-0 sudo[397028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:45 compute-0 python3.9[397030]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:46 compute-0 systemd[1]: Started libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope.
Dec 03 01:47:46 compute-0 podman[397031]: 2025-12-03 01:47:46.152069687 +0000 UTC m=+0.157218327 container exec 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:47:46 compute-0 podman[397031]: 2025-12-03 01:47:46.188668971 +0000 UTC m=+0.193817631 container exec_died 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 01:47:46 compute-0 sudo[397028]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:46 compute-0 systemd[1]: libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope: Deactivated successfully.
Dec 03 01:47:46 compute-0 reverent_cohen[396994]: {
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:     "0": [
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:         {
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "devices": [
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "/dev/loop3"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             ],
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_name": "ceph_lv0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_size": "21470642176",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "name": "ceph_lv0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "tags": {
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cluster_name": "ceph",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.crush_device_class": "",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.encrypted": "0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osd_id": "0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.type": "block",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.vdo": "0"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             },
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "type": "block",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "vg_name": "ceph_vg0"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:         }
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:     ],
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:     "1": [
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:         {
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "devices": [
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "/dev/loop4"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             ],
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_name": "ceph_lv1",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_size": "21470642176",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "name": "ceph_lv1",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "tags": {
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cluster_name": "ceph",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.crush_device_class": "",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.encrypted": "0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osd_id": "1",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.type": "block",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.vdo": "0"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             },
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "type": "block",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "vg_name": "ceph_vg1"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:         }
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:     ],
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:     "2": [
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:         {
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "devices": [
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "/dev/loop5"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             ],
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_name": "ceph_lv2",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_size": "21470642176",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "name": "ceph_lv2",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "tags": {
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.cluster_name": "ceph",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.crush_device_class": "",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.encrypted": "0",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osd_id": "2",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.type": "block",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:                 "ceph.vdo": "0"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             },
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "type": "block",
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:             "vg_name": "ceph_vg2"
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:         }
Dec 03 01:47:46 compute-0 reverent_cohen[396994]:     ]
Dec 03 01:47:46 compute-0 reverent_cohen[396994]: }
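[editor's note] The reverent_cohen payload above is the `ceph-volume lvm list --format json` result: a map of OSD id to the logical volumes backing it. All three OSDs sit on LVs (ceph_vg0-2/ceph_lv0-2) carved from loop devices /dev/loop3-5, each 21470642176 bytes (about 20 GiB), which is consistent with the 60 GiB total the pgmap lines report. A short sketch of pulling out the useful fields, assuming the JSON has been captured to a hypothetical file lvm_list.json:

    import json

    # Parse the `ceph-volume lvm list --format json` payload printed by
    # reverent_cohen above (read from a file for this sketch).
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = round(int(lv["lv_size"]) / 2**30)
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  tags["ceph.osd_fsid"], f"{size_gib} GiB")
    # -> 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 551e0f4a-... 20 GiB   (etc.)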
Dec 03 01:47:46 compute-0 systemd[1]: libpod-6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e.scope: Deactivated successfully.
Dec 03 01:47:46 compute-0 podman[396952]: 2025-12-03 01:47:46.490885374 +0000 UTC m=+1.137096596 container died 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398-merged.mount: Deactivated successfully.
Dec 03 01:47:46 compute-0 podman[396952]: 2025-12-03 01:47:46.59938571 +0000 UTC m=+1.245596882 container remove 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:47:46 compute-0 systemd[1]: libpod-conmon-6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e.scope: Deactivated successfully.
Dec 03 01:47:46 compute-0 sudo[396727]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:46 compute-0 sudo[397133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:46 compute-0 sudo[397133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:46 compute-0 sudo[397133]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:46 compute-0 sudo[397182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:47:46 compute-0 sudo[397182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:46 compute-0 sudo[397182]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:47 compute-0 sudo[397227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:47 compute-0 sudo[397227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:47 compute-0 sudo[397227]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:47 compute-0 sudo[397277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:47:47 compute-0 sudo[397277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:47 compute-0 sudo[397328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxlpobnkvjjkanhzdjwxaslsxqzjvtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726466.608878-750-175429775288220/AnsiballZ_file.py'
Dec 03 01:47:47 compute-0 sudo[397328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:47 compute-0 ceph-mon[192821]: pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:47 compute-0 python3.9[397330]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:47 compute-0 sudo[397328]: pam_unix(sudo:session): session closed for user root
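[editor's note] The ansible.builtin.file task above recursively forces owner 0, group 0, mode 0700 on the healthcheck directory that the ovn_metadata_agent container bind-mounts read-only at /openstack. A rough Python equivalent of what state=directory with recurse=True does, ignoring the module's idempotence checks and SELinux handling (must run as root):

    import os

    # Approximate effect of the builtin.file task above:
    # path=/var/lib/openstack/healthchecks/ovn_metadata_agent,
    # owner=0 group=0 mode=0700, state=directory, recurse=True.
    root = "/var/lib/openstack/healthchecks/ovn_metadata_agent"
    os.makedirs(root, exist_ok=True)
    for dirpath, dirnames, filenames in os.walk(root):
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            os.chown(entry, 0, 0)   # root:root
            os.chmod(entry, 0o700)  # applied to dirs and files alike

This keeps the healthcheck script readable only by root on the host, while the container sees it via the ro,z volume listed in the container's config_data.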
Dec 03 01:47:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.808839079 +0000 UTC m=+0.096601278 container create 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:47:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.780752877 +0000 UTC m=+0.068515076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:47:47 compute-0 systemd[1]: Started libpod-conmon-706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603.scope.
Dec 03 01:47:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.937763067 +0000 UTC m=+0.225525266 container init 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.954473824 +0000 UTC m=+0.242236023 container start 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:47:47 compute-0 compassionate_beaver[397455]: 167 167
Dec 03 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.961991759 +0000 UTC m=+0.249753958 container attach 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:47:47 compute-0 systemd[1]: libpod-706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603.scope: Deactivated successfully.
Dec 03 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.964361446 +0000 UTC m=+0.252123645 container died 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:47:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-840a963c9e0b0dee208bd8f8b3da28f576cd6a2e82eb31492f92f00e36b0b72a-merged.mount: Deactivated successfully.
Dec 03 01:47:48 compute-0 podman[397396]: 2025-12-03 01:47:48.047126558 +0000 UTC m=+0.334888727 container remove 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:47:48 compute-0 systemd[1]: libpod-conmon-706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603.scope: Deactivated successfully.
Dec 03 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.30863848 +0000 UTC m=+0.084563884 container create 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.27568882 +0000 UTC m=+0.051614294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:47:48 compute-0 systemd[1]: Started libpod-conmon-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope.
Dec 03 01:47:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:47:48 compute-0 sudo[397575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfbsjazqwovhsibmqzprkgxpvejsrzva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726467.7957408-759-233550557549539/AnsiballZ_podman_container_info.py'
Dec 03 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:47:48 compute-0 sudo[397575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.452722691 +0000 UTC m=+0.228648125 container init 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.467282316 +0000 UTC m=+0.243207700 container start 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.476967593 +0000 UTC m=+0.252893027 container attach 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.597 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:48 compute-0 python3.9[397580]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
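[editor's note] This step is Ansible's containers.podman.podman_container_info module asking after a container named multipathd, which is effectively a `podman container inspect`. A minimal equivalent that tolerates the container not existing (inspect exits non-zero in that case):

    import json
    import subprocess

    # Rough equivalent of the podman_container_info task above.
    proc = subprocess.run(
        ["podman", "container", "inspect", "multipathd"],
        capture_output=True, text=True,
    )
    info = json.loads(proc.stdout) if proc.returncode == 0 else []
    print(info[0]["State"]["Status"] if info else "no such container")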
Dec 03 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.628 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.629 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.631 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.632 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:47:48 compute-0 sudo[397575]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:47:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118109666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.192 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:47:49 compute-0 ceph-mon[192821]: pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:49 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4118109666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
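[editor's note] nova-compute's update_available_resource periodic task shells out (via oslo_concurrency.processutils) to `ceph df --format=json --id openstack` to size the RBD-backed disk pool; the ceph-mon audit lines show the very same command arriving as a mon_command from client.openstack at 192.168.122.100. A sketch of that round trip, assuming a readable /etc/ceph/ceph.conf and the client.openstack keyring on the host:

    import json
    import subprocess

    # What the processutils call above amounts to: run `ceph df` as
    # client.openstack and read cluster totals out of the JSON reply.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True, timeout=30,
    ).stdout
    stats = json.loads(out)["stats"]
    print(round(stats["total_avail_bytes"] / 2**30, 1), "GiB avail cluster-wide")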
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]: {
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "osd_id": 2,
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "type": "bluestore"
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:     },
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "osd_id": 1,
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "type": "bluestore"
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:     },
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "osd_id": 0,
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:         "type": "bluestore"
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]:     }
Dec 03 01:47:49 compute-0 charming_chaplygin[397576]: }
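[editor's note] charming_chaplygin is the `ceph-volume raw list --format json` pass requested by the 01:47:47 sudo command: the same three OSDs as the lvm listing, but keyed by osd_uuid rather than OSD id, with each device given as its device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN) instead of the LV path. Cross-checking the two inventories is a one-liner; a sketch assuming both payloads were captured to hypothetical files:

    import json

    # Sanity-check that `lvm list` (keyed by osd_id) and `raw list`
    # (keyed by osd_uuid) describe the same OSDs, using the two JSON
    # payloads logged above.
    with open("lvm_list.json") as f1, open("raw_list.json") as f2:
        lvm, raw = json.load(f1), json.load(f2)

    lvm_pairs = {(int(i), lv["tags"]["ceph.osd_fsid"])
                 for i, lvs in lvm.items() for lv in lvs}
    raw_pairs = {(d["osd_id"], d["osd_uuid"]) for d in raw.values()}
    assert lvm_pairs == raw_pairs  # {(0, '551e...'), (1, '38b7...'), (2, '2ebf...')}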
Dec 03 01:47:49 compute-0 systemd[1]: libpod-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope: Deactivated successfully.
Dec 03 01:47:49 compute-0 systemd[1]: libpod-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope: Consumed 1.091s CPU time.
Dec 03 01:47:49 compute-0 podman[397769]: 2025-12-03 01:47:49.646451612 +0000 UTC m=+0.047721943 container died 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.654 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.656 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4449MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.656 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.657 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f-merged.mount: Deactivated successfully.
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.737 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.738 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:47:49 compute-0 podman[397769]: 2025-12-03 01:47:49.745128407 +0000 UTC m=+0.146398748 container remove 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:47:49 compute-0 systemd[1]: libpod-conmon-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope: Deactivated successfully.
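
The died / remove / scope-deactivation sequence above is the normal teardown of a short-lived cephadm helper container rather than a crash; podman can replay the same lifecycle events after the fact. A minimal sketch, assuming podman is on PATH and run as root (the event names match the journal lines above):

    # a sketch: replay recent container lifecycle events from podman's event log
    import subprocess

    events = subprocess.run(
        ["podman", "events", "--stream=false", "--since", "5m",
         "--filter", "event=died", "--filter", "event=remove"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(events)
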
Dec 03 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.767 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:47:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:49 compute-0 sudo[397277]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:47:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:47:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:47:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:47:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9ea9fa61-9bf2-4b3a-8930-7d1d102fe5c3 does not exist
Dec 03 01:47:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b0a9ad1b-42f0-4567-a08c-3a9b033683bc does not exist
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.832685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469832720, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1426, "num_deletes": 251, "total_data_size": 2286148, "memory_usage": 2336784, "flush_reason": "Manual Compaction"}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469853907, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2243299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19461, "largest_seqno": 20886, "table_properties": {"data_size": 2236567, "index_size": 3867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13734, "raw_average_key_size": 19, "raw_value_size": 2223184, "raw_average_value_size": 3198, "num_data_blocks": 177, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726317, "oldest_key_time": 1764726317, "file_creation_time": 1764726469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 21320 microseconds, and 7500 cpu microseconds.
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.853987) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2243299 bytes OK
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.854028) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.855934) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.855952) EVENT_LOG_v1 {"time_micros": 1764726469855946, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.855973) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2279851, prev total WAL file size 2279851, number of live WAL files 2.
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
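
The rocksdb EVENT_LOG_v1 records above (flush_started, table_file_creation, flush_finished) are one-line JSON objects following the "EVENT_LOG_v1 " marker, so the flush history can be recovered from the journal mechanically. A minimal sketch, assuming the journal text is piped in on stdin:

    # a sketch: pull the EVENT_LOG_v1 JSON payloads out of journal text
    import json
    import sys

    MARKER = "EVENT_LOG_v1 "
    for line in sys.stdin:
        _, found, payload = line.partition(MARKER)
        if found:
            event = json.loads(payload)
            print(event.get("event"), "job", event.get("job"), event.get("file_size"))
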
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.857207) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2190KB)], [47(6931KB)]
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469857258, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9341294, "oldest_snapshot_seqno": -1}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4274 keys, 7564443 bytes, temperature: kUnknown
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469914951, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7564443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7534820, "index_size": 17865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 105687, "raw_average_key_size": 24, "raw_value_size": 7456290, "raw_average_value_size": 1744, "num_data_blocks": 751, "num_entries": 4274, "num_filter_entries": 4274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.915132) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7564443 bytes
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.917043) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.8 rd, 131.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.4) OK, records in: 4788, records dropped: 514 output_compression: NoCompression
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.917062) EVENT_LOG_v1 {"time_micros": 1764726469917053, "job": 24, "event": "compaction_finished", "compaction_time_micros": 57739, "compaction_time_cpu_micros": 31962, "output_level": 6, "num_output_files": 1, "total_output_size": 7564443, "num_input_records": 4788, "num_output_records": 4274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469917615, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469919044, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.857045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
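
The compaction summary above reports write-amplify(3.4) and read-write-amplify(7.5), and both figures follow from byte counts already logged: table #49 (the fresh L0 input, 2243299 bytes), input_data_size 9341294 from the compaction_started event, and total_output_size 7564443 from compaction_finished. A minimal sketch of the arithmetic, assuming the ratios are taken against the L0 input size as the log suggests:

    # a sketch: reproduce the amplification figures from the logged byte counts
    l0_in = 2243299       # table #49, the newly flushed L0 file
    total_in = 9341294    # input_data_size (files 49 + 47)
    total_out = 7564443   # total_output_size (file 50)

    print(round(total_out / l0_in, 1))               # 3.4  -> write-amplify
    print(round((total_in + total_out) / l0_in, 1))  # 7.5  -> read-write-amplify
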
Dec 03 01:47:49 compute-0 sudo[397784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:47:49 compute-0 sudo[397784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:49 compute-0 sudo[397784]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:50 compute-0 sudo[397827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:47:50 compute-0 sudo[397827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:47:50 compute-0 sudo[397827]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:47:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672027519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.275 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
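
The 0.509s round trip above is nova's periodic RBD capacity probe: the libvirt driver shells out to ceph df as client.openstack, the mon dispatches it (the audit line at 01:47:50), and the JSON result feeds the resource tracker. A minimal sketch of the same probe through oslo.concurrency, assuming the keyring and conf path from the command line above and the stats layout ceph normally reports:

    # a sketch: the "ceph df" probe nova runs via oslo.concurrency
    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
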
Dec 03 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.287 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.307 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.308 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.308 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
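
The inventory record above fixes this node's schedulable capacity: placement scales each resource class by its allocation_ratio after carving out the reservation, so 8 physical VCPUs advertise as 32 schedulable units while memory stays 1:1 minus the 512MB reserve. A minimal sketch of that arithmetic, assuming placement's usual (total - reserved) * allocation_ratio form and the figures from the report.py line above:

    # a sketch: effective capacity per resource class, as placement computes it
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1
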
Dec 03 01:47:50 compute-0 sudo[397881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvcjakpsgqoguaaikupabvbonxowgeya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726469.1128945-767-135986909574876/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:50 compute-0 sudo[397881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:50 compute-0 python3.9[397883]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:50 compute-0 ceph-mon[192821]: pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:47:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:47:50 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/672027519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:47:50 compute-0 systemd[1]: Started libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope.
Dec 03 01:47:50 compute-0 podman[397884]: 2025-12-03 01:47:50.895208072 +0000 UTC m=+0.165938226 container exec df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:47:50 compute-0 podman[397884]: 2025-12-03 01:47:50.930282833 +0000 UTC m=+0.201012987 container exec_died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd)
Dec 03 01:47:50 compute-0 sudo[397881]: pam_unix(sudo:session): session closed for user root
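
The zuul task above runs id -u inside the multipathd container through containers.podman.podman_container_exec (detach=False, tty=False); outside ansible the identical probe is a single podman exec. A minimal sketch, assuming root and the container name from the module invocation:

    # a sketch: the same in-container probe the ansible module performs
    import subprocess

    uid = subprocess.run(
        ["podman", "exec", "multipathd", "id", "-u"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(uid)
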
Dec 03 01:47:51 compute-0 systemd[1]: libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec 03 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.289 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.289 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.290 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.311 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.312 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.314 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.314 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:47:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:52 compute-0 ceph-mon[192821]: pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:52 compute-0 podman[398019]: 2025-12-03 01:47:52.868748524 +0000 UTC m=+0.108654301 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm)
Dec 03 01:47:52 compute-0 podman[398015]: 2025-12-03 01:47:52.873435348 +0000 UTC m=+0.113077618 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:47:52 compute-0 podman[398021]: 2025-12-03 01:47:52.902731593 +0000 UTC m=+0.140365256 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
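
The three health_status=healthy events above come from podman's per-container healthcheck timers running the test command declared in each config_data block ('/openstack/healthcheck ...'); the same check can also be forced on demand. A minimal sketch, assuming root and the container names from the events above:

    # a sketch: trigger each container's configured healthcheck by hand
    import subprocess

    for name in ("ceilometer_agent_compute", "ovn_metadata_agent", "podman_exporter"):
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if result.returncode == 0 else "unhealthy")
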
Dec 03 01:47:52 compute-0 sudo[398121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unkjbhnfvgubmjcjqrrmhqsqyeilwzxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726471.3269506-775-19947008840636/AnsiballZ_podman_container_exec.py'
Dec 03 01:47:52 compute-0 sudo[398121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:53 compute-0 python3.9[398123]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 03 01:47:53 compute-0 systemd[1]: Started libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope.
Dec 03 01:47:53 compute-0 podman[398124]: 2025-12-03 01:47:53.325234819 +0000 UTC m=+0.153883772 container exec df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:47:53 compute-0 podman[398124]: 2025-12-03 01:47:53.360719781 +0000 UTC m=+0.189368784 container exec_died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec 03 01:47:53 compute-0 systemd[1]: libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec 03 01:47:53 compute-0 sudo[398121]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:53 compute-0 nova_compute[351485]: 2025-12-03 01:47:53.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:47:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:54 compute-0 sudo[398304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugdtsbhbnnwghbuuondupfyhxcrfovan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726473.7655966-783-152598700382563/AnsiballZ_file.py'
Dec 03 01:47:54 compute-0 sudo[398304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:54 compute-0 python3.9[398306]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:54 compute-0 sudo[398304]: pam_unix(sudo:session): session closed for user root
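
The ansible.builtin.file task above pins /var/lib/openstack/healthchecks/multipathd to owner 0, group 0, mode 0700 with recurse=True, i.e. it re-asserts ownership and mode over the whole tree. Stripped of the module plumbing, that enforcement is roughly an os.walk, as in this sketch (assuming root; path and modes from the invocation above):

    # a sketch: roughly what the file module's recurse=True enforcement does
    import os

    top = "/var/lib/openstack/healthchecks/multipathd"
    os.makedirs(top, exist_ok=True)
    for dirpath, _dirnames, filenames in os.walk(top):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            os.chown(path, 0, 0)   # owner=0 group=0
            os.chmod(path, 0o700)  # mode=0700
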
Dec 03 01:47:54 compute-0 ceph-mon[192821]: pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:55 compute-0 sudo[398456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbxibgiboqcxwgqtamfyoxefybmbqfcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726474.9443116-792-232545076920333/AnsiballZ_file.py'
Dec 03 01:47:55 compute-0 sudo[398456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:55 compute-0 python3.9[398458]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:55 compute-0 sudo[398456]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:56 compute-0 sudo[398608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crrprykiywbtnslbhgelgunnhjhibsta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726476.098154-800-138236382091238/AnsiballZ_stat.py'
Dec 03 01:47:56 compute-0 sudo[398608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:56 compute-0 python3.9[398610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:47:56 compute-0 ceph-mon[192821]: pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:56 compute-0 sudo[398608]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:57 compute-0 sudo[398703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlfrmqhogirkcxxqvysaxctwlivyhpco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726476.098154-800-138236382091238/AnsiballZ_file.py'
Dec 03 01:47:57 compute-0 podman[398660]: 2025-12-03 01:47:57.265169617 +0000 UTC m=+0.088299990 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:47:57 compute-0 sudo[398703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:57 compute-0 python3.9[398708]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/kepler.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/kepler.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:57 compute-0 sudo[398703]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:47:58 compute-0 sudo[398858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxeqjxlgkrseuvllqxnolwaymtmxhhlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726477.8980176-813-60672075026812/AnsiballZ_file.py'
Dec 03 01:47:58 compute-0 sudo[398858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:47:58 compute-0 python3.9[398860]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:47:58 compute-0 sudo[398858]: pam_unix(sudo:session): session closed for user root
Dec 03 01:47:58 compute-0 ceph-mon[192821]: pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:47:59.609 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:47:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:47:59.611 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:47:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:47:59.611 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:47:59 compute-0 podman[158098]: time="2025-12-03T01:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:47:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:47:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8109 "" "Go-http-client/1.1"
Dec 03 01:47:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:47:59 compute-0 podman[398972]: 2025-12-03 01:47:59.89308591 +0000 UTC m=+0.137481034 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, managed_by=edpm_ansible, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container)
Dec 03 01:47:59 compute-0 sudo[399030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehexvidttfwbnychfcusxrlsakujdtpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726479.1423395-821-691951862668/AnsiballZ_stat.py'
Dec 03 01:47:59 compute-0 sudo[399030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:00 compute-0 python3.9[399032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:00 compute-0 sudo[399030]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:00 compute-0 sudo[399108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmcyxtkfzchfibrhjxcjjqhccvljsri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726479.1423395-821-691951862668/AnsiballZ_file.py'
Dec 03 01:48:00 compute-0 sudo[399108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:00 compute-0 ceph-mon[192821]: pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:00 compute-0 python3.9[399110]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:00 compute-0 sudo[399108]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
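
These exporter errors are expected on a compute-only node: ovn-northd runs on the controllers, the ovsdb-server control socket is not where the collector looks, and the dpif-netdev appctl calls need a userspace (PMD) datapath this host does not run. The check the exporter performs reduces to looking for *.ctl control sockets under the run directories mounted into it (per the volumes in its config_data, /var/run/openvswitch and /var/lib/openvswitch/ovn on the host). A minimal sketch, assuming those host paths:

    # a sketch: see which OVS/OVN control sockets actually exist on this host
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, glob.glob(pattern) or "no control socket files found")
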
Dec 03 01:48:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:01 compute-0 sudo[399260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asefngemkrcsoqvelgztsdvgzdiciuzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726481.261732-833-208078254919893/AnsiballZ_stat.py'
Dec 03 01:48:01 compute-0 sudo[399260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:02 compute-0 python3.9[399262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:02 compute-0 sudo[399260]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:02 compute-0 sudo[399338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psikzwpbqpbdikbfhaqzcxnkawiqxogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726481.261732-833-208078254919893/AnsiballZ_file.py'
Dec 03 01:48:02 compute-0 sudo[399338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:02 compute-0 python3.9[399340]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2qdhz08x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:02 compute-0 sudo[399338]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:02 compute-0 ceph-mon[192821]: pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:02 compute-0 podman[399341]: 2025-12-03 01:48:02.93721613 +0000 UTC m=+0.189572040 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:48:03 compute-0 sudo[399529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yumcpcvaaoagrcfforeatcssksrmrkse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726483.0753195-845-75995557542738/AnsiballZ_stat.py'
Dec 03 01:48:03 compute-0 sudo[399529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:03 compute-0 podman[399489]: 2025-12-03 01:48:03.704617636 +0000 UTC m=+0.158130853 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 01:48:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:03 compute-0 python3.9[399534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:03 compute-0 sudo[399529]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:04 compute-0 sudo[399611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbikmadnylnpagqoimpyjkxapcekdvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726483.0753195-845-75995557542738/AnsiballZ_file.py'
Dec 03 01:48:04 compute-0 sudo[399611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:04 compute-0 python3.9[399613]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:04 compute-0 sudo[399611]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:04 compute-0 podman[399619]: 2025-12-03 01:48:04.860806535 +0000 UTC m=+0.121793865 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 01:48:04 compute-0 ceph-mon[192821]: pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:05 compute-0 sudo[399783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfefimmdttbwicjvyqlmkylblqmhhckh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726484.932342-858-59865783749673/AnsiballZ_command.py'
Dec 03 01:48:05 compute-0 sudo[399783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:05 compute-0 python3.9[399785]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:48:05 compute-0 sudo[399783]: pam_unix(sudo:session): session closed for user root
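[annotation] nft -j list ruleset dumps the live ruleset as JSON, which lets the playbook diff the running firewall against its generated files without parsing the human-readable listing. A quick way to slice that output, assuming jq is available:

    # list just the table names present in the live ruleset
    nft -j list ruleset | jq '.nftables[] | select(.table) | .table.name'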
Dec 03 01:48:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:06 compute-0 sudo[399936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewujoswuskjxtebixtfkpvstgfzznigq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726485.98289-866-68991953560398/AnsiballZ_edpm_nftables_from_files.py'
Dec 03 01:48:06 compute-0 sudo[399936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:06 compute-0 ceph-mon[192821]: pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:07 compute-0 python3[399938]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 03 01:48:07 compute-0 sudo[399936]: pam_unix(sudo:session): session closed for user root
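[annotation] edpm_nftables_from_files walks the YAML files under /var/lib/edpm-config/firewall and renders them into the edpm-*.nft fragments written below. A hypothetical user-rules file in that directory might look like this (schema illustrative only; the edpm_ansible role docs are authoritative):

    # drop an extra rule file into the directory the module scans
    cat > /var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml <<'EOF'
    - rule_name: "100 allow node_exporter scrapes"
      rule:
        proto: tcp
        dport: 9100
    EOF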
Dec 03 01:48:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:07 compute-0 podman[400038]: 2025-12-03 01:48:07.862242275 +0000 UTC m=+0.108469546 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:48:07 compute-0 sudo[400110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxhcqifctthlvztekojoppmbcovmfrol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726487.3755279-874-207166146254030/AnsiballZ_stat.py'
Dec 03 01:48:07 compute-0 sudo[400110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:08 compute-0 python3.9[400112]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:08 compute-0 sudo[400110]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:08 compute-0 sudo[400188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcquxtlsodazcyvuxajzqarksvavyhfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726487.3755279-874-207166146254030/AnsiballZ_file.py'
Dec 03 01:48:08 compute-0 sudo[400188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:08 compute-0 python3.9[400190]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:08 compute-0 ceph-mon[192821]: pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:08 compute-0 sudo[400188]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:09 compute-0 sudo[400340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adnvskowwwtftaoumiqdlkgaxhmdbcri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726489.289599-886-212914070358473/AnsiballZ_stat.py'
Dec 03 01:48:09 compute-0 sudo[400340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:10 compute-0 python3.9[400342]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:10 compute-0 sudo[400340]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:10 compute-0 sudo[400418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogvqluwiqbugscaynkttziziojslkusp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726489.289599-886-212914070358473/AnsiballZ_file.py'
Dec 03 01:48:10 compute-0 sudo[400418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:10 compute-0 ceph-mon[192821]: pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:11 compute-0 python3.9[400420]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:11 compute-0 sudo[400418]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:11 compute-0 sudo[400570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbclbtljhwsxtyvlrpbzjdsyqdhdbyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726491.3524191-898-277726416855461/AnsiballZ_stat.py'
Dec 03 01:48:11 compute-0 sudo[400570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:12 compute-0 python3.9[400572]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:12 compute-0 sudo[400570]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:12 compute-0 sudo[400648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlgdvhpdtbzpqhvtmldmaaasvcikavds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726491.3524191-898-277726416855461/AnsiballZ_file.py'
Dec 03 01:48:12 compute-0 sudo[400648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:12 compute-0 python3.9[400650]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:12 compute-0 sudo[400648]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:12 compute-0 ceph-mon[192821]: pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:13 compute-0 sudo[400800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nljdomqxlwepzatslyepdgbmtqsibuqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726493.2010376-910-13218542287778/AnsiballZ_stat.py'
Dec 03 01:48:13 compute-0 sudo[400800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:14 compute-0 python3.9[400802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:14 compute-0 sudo[400800]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:14 compute-0 sudo[400878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dspgtsxkqzbwjqftqbfosniqezzvkmpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726493.2010376-910-13218542287778/AnsiballZ_file.py'
Dec 03 01:48:14 compute-0 sudo[400878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:14 compute-0 python3.9[400880]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:14 compute-0 sudo[400878]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:14 compute-0 ceph-mon[192821]: pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:15 compute-0 sudo[401030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veplyvvabytoalowfcegcrmqsyriloij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726495.1179695-922-169193148015198/AnsiballZ_stat.py'
Dec 03 01:48:15 compute-0 sudo[401030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:15 compute-0 python3.9[401032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:16 compute-0 sudo[401030]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:16 compute-0 sudo[401108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amequakiulhjuiccesoabzxkmzlcmjsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726495.1179695-922-169193148015198/AnsiballZ_file.py'
Dec 03 01:48:16 compute-0 sudo[401108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:16 compute-0 python3.9[401110]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:16 compute-0 sudo[401108]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:17 compute-0 ceph-mon[192821]: pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:17 compute-0 sudo[401260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yphzqsuewcqfviqyelzegxoghqyhgort ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726497.1429565-935-27236514179768/AnsiballZ_command.py'
Dec 03 01:48:17 compute-0 sudo[401260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:17 compute-0 python3.9[401262]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:48:17 compute-0 sudo[401260]: pam_unix(sudo:session): session closed for user root
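[annotation] The pipeline above is the safety net for the whole sequence: the five fragments are concatenated in load order (chains, flushes, rules, update-jumps, jumps) and fed to nft -c, which parses and type-checks the assembled ruleset without committing it to the kernel. Reproduced standalone:

    # dry-run the assembled ruleset exactly as the playbook does
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -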
Dec 03 01:48:19 compute-0 ceph-mon[192821]: pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:19 compute-0 sudo[401415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faedpmhuycxdzrppahxzamkxbaastyek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726498.3193405-943-147957963049145/AnsiballZ_blockinfile.py'
Dec 03 01:48:19 compute-0 sudo[401415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:19 compute-0 python3.9[401417]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:19 compute-0 sudo[401415]: pam_unix(sudo:session): session closed for user root
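[annotation] After the check passes, blockinfile pins the include order in /etc/sysconfig/nftables.conf (the file nftables.service loads at boot), guarded by validate=nft -c -f %s. Assembled from the block content and markers logged above, the managed block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK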
Dec 03 01:48:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:20 compute-0 sudo[401568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngctbcdmcptzjcrnlefrgrilvwtzaqsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726499.6743681-952-198998424790041/AnsiballZ_command.py'
Dec 03 01:48:20 compute-0 sudo[401568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:20 compute-0 python3.9[401570]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:48:20 compute-0 sudo[401568]: pam_unix(sudo:session): session closed for user root
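[annotation] Only the chain definitions are applied immediately; chains files are typically written with add statements, which succeed even when the object already exists, so the step is safe to repeat and guarantees jump targets exist before any rule referencing them loads. A sketch (the EDPM_ chain-name prefix is an assumption):

    nft -f /etc/nftables/edpm-chains.nft
    # hypothetical prefix; adjust to whatever the chains file actually defines
    nft list ruleset | grep -c 'chain EDPM_'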
Dec 03 01:48:21 compute-0 ceph-mon[192821]: pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:21 compute-0 sudo[401721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfvrxkudcjjuedtzwvjsfqrisszlwhay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726500.8663006-960-176651298100886/AnsiballZ_stat.py'
Dec 03 01:48:21 compute-0 sudo[401721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:21 compute-0 python3.9[401723]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 03 01:48:21 compute-0 sudo[401721]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:21 compute-0 sshd-session[401724]: Invalid user gns3 from 34.66.72.251 port 55712
Dec 03 01:48:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:21 compute-0 sshd-session[401724]: Received disconnect from 34.66.72.251 port 55712:11: Bye Bye [preauth]
Dec 03 01:48:21 compute-0 sshd-session[401724]: Disconnected from invalid user gns3 34.66.72.251 port 55712 [preauth]
Dec 03 01:48:22 compute-0 sudo[401875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pscwhvmvainvwctvygbnrztkxpzyrgbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726501.9764476-969-48295007531234/AnsiballZ_file.py'
Dec 03 01:48:22 compute-0 sudo[401875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:22 compute-0 python3.9[401877]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:22 compute-0 sudo[401875]: pam_unix(sudo:session): session closed for user root
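[annotation] The stat/file pair around edpm-rules.nft.changed is a sentinel-file pattern: the role touches the marker when rule content changes, checks for it here, and clears it once the reload has been handled, so the expensive apply only runs when something differed. Reduced to shell (the reload command is a stand-in for the role's handler):

    # reload only when the sentinel says the rules actually changed, then clear it
    if [ -f /etc/nftables/edpm-rules.nft.changed ]; then
        nft -f /etc/sysconfig/nftables.conf   # stand-in for the role's handler
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi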
Dec 03 01:48:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:23 compute-0 ceph-mon[192821]: pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:23 compute-0 sshd-session[380660]: Connection closed by 192.168.122.30 port 32886
Dec 03 01:48:23 compute-0 sshd-session[380637]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:48:23 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Dec 03 01:48:23 compute-0 systemd[1]: session-58.scope: Consumed 2min 9.646s CPU time.
Dec 03 01:48:23 compute-0 systemd-logind[800]: Session 58 logged out. Waiting for processes to exit.
Dec 03 01:48:23 compute-0 systemd-logind[800]: Removed session 58.
Dec 03 01:48:23 compute-0 podman[401904]: 2025-12-03 01:48:23.427294762 +0000 UTC m=+0.110578876 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:48:23 compute-0 podman[401902]: 2025-12-03 01:48:23.426478529 +0000 UTC m=+0.120746777 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:48:23 compute-0 podman[401903]: 2025-12-03 01:48:23.495125877 +0000 UTC m=+0.181146219 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 01:48:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:25 compute-0 ceph-mon[192821]: pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:27 compute-0 ceph-mon[192821]: pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:27 compute-0 podman[401960]: 2025-12-03 01:48:27.921446924 +0000 UTC m=+0.163162236 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:48:28
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images']
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
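[annotation] The balancer lines show one optimizer pass: upmap mode, a 5% max-misplaced budget, eleven pools scanned, and zero changes prepared because the 321 PGs are already evenly placed. The same state is queryable from the standard ceph CLI:

    ceph balancer status   # mode, active plans, last optimize outcome
    ceph balancer eval     # numeric score of the current distribution (lower is better)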
Dec 03 01:48:28 compute-0 sshd-session[401980]: Accepted publickey for zuul from 192.168.122.30 port 49960 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 01:48:28 compute-0 systemd-logind[800]: New session 59 of user zuul.
Dec 03 01:48:28 compute-0 systemd[1]: Started Session 59 of User zuul.
Dec 03 01:48:28 compute-0 sshd-session[401980]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:48:29 compute-0 ceph-mon[192821]: pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:29 compute-0 ceph-mgr[193109]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1922561230
Dec 03 01:48:29 compute-0 podman[158098]: time="2025-12-03T01:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:48:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:48:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8107 "" "Go-http-client/1.1"
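[annotation] These access-log style entries are the podman system service answering libpod REST calls over its unix socket (the same /run/podman/podman.sock mounted into podman_exporter above). The logged query can be replayed by hand:

    # replay the logged container listing against the podman API socket
    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true' | jq 'length'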
Dec 03 01:48:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:30 compute-0 python3.9[402133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:48:30 compute-0 podman[402162]: 2025-12-03 01:48:30.887496614 +0000 UTC m=+0.135503417 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1214.1726694543, vcs-type=git, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, version=9.4)
Dec 03 01:48:31 compute-0 ceph-mon[192821]: pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
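[annotation] These exporter errors recur on every scrape: openstack_network_exporter resolves daemon PIDs through control sockets, but ovn-northd runs on the controllers rather than this compute node, and the ovsdb-server socket is evidently not where the exporter looks. A quick check (socket paths are assumptions):

    ls /run/ovn/ovn-northd.*.ctl /run/openvswitch/ovsdb-server.*.ctl 2>/dev/null \
        || echo "no matching control sockets on this node"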
Dec 03 01:48:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:32 compute-0 sudo[402309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqqffqvatsnrccjvknypfijccrxdsyim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726511.4146888-34-30000391337037/AnsiballZ_systemd.py'
Dec 03 01:48:32 compute-0 sudo[402309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:32 compute-0 sshd-session[402238]: Invalid user super from 80.253.31.232 port 60090
Dec 03 01:48:32 compute-0 python3.9[402311]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec 03 01:48:32 compute-0 sudo[402309]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:32 compute-0 sshd-session[402238]: Received disconnect from 80.253.31.232 port 60090:11: Bye Bye [preauth]
Dec 03 01:48:32 compute-0 sshd-session[402238]: Disconnected from invalid user super 80.253.31.232 port 60090 [preauth]
Dec 03 01:48:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:33 compute-0 ceph-mon[192821]: pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:33 compute-0 podman[402390]: 2025-12-03 01:48:33.892124556 +0000 UTC m=+0.135415005 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 01:48:33 compute-0 podman[402389]: 2025-12-03 01:48:33.923925153 +0000 UTC m=+0.174785618 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 01:48:34 compute-0 sudo[402507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtvmpwxcxyjwccobedssdedoyfdouyhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726512.9868715-42-11270835894866/AnsiballZ_setup.py'
Dec 03 01:48:34 compute-0 sudo[402507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:34 compute-0 python3.9[402509]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 03 01:48:35 compute-0 ceph-mon[192821]: pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:35 compute-0 sudo[402507]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:35 compute-0 podman[402541]: 2025-12-03 01:48:35.833302155 +0000 UTC m=+0.095371113 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:48:35 compute-0 sudo[402612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmecgpfcijwgedbcucbgbpcknlpqabvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726512.9868715-42-11270835894866/AnsiballZ_dnf.py'
Dec 03 01:48:35 compute-0 sudo[402612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:36 compute-0 python3.9[402614]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
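[annotation] The dnf module call above, stripped of its defaults, reduces to a single package transaction:

    # install the TLS driver rsyslog needs for the forwarding configured below
    dnf install -y rsyslog-openssl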
Dec 03 01:48:37 compute-0 ceph-mon[192821]: pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:37 compute-0 sudo[402612]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:48:38 compute-0 sudo[402781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dusaalhzkfxtjlzkmnybhbqyixcqbyjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726517.883275-54-228881694369499/AnsiballZ_stat.py'
Dec 03 01:48:38 compute-0 podman[402739]: 2025-12-03 01:48:38.610077524 +0000 UTC m=+0.126090909 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:48:38 compute-0 sudo[402781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:38 compute-0 python3.9[402790]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:38 compute-0 sudo[402781]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:39 compute-0 ceph-mon[192821]: pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:39 compute-0 sudo[402868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljrryxmvgxqysorkumclpimzskqyksb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726517.883275-54-228881694369499/AnsiballZ_file.py'
Dec 03 01:48:39 compute-0 sudo[402868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:39 compute-0 python3.9[402870]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/pki/rsyslog/ca-openshift.crt _original_basename=ca-openshift.crt recurse=False state=file path=/etc/pki/rsyslog/ca-openshift.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:39 compute-0 sudo[402868]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:40 compute-0 sudo[403020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphsypphcyedgwdpolbenbblilisgqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726520.1216273-66-24959321380977/AnsiballZ_file.py'
Dec 03 01:48:40 compute-0 sudo[403020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:40 compute-0 python3.9[403022]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:40 compute-0 sudo[403020]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:41 compute-0 ceph-mon[192821]: pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:41 compute-0 sshd-session[403023]: Invalid user mcserver from 173.249.50.59 port 36710
Dec 03 01:48:41 compute-0 sshd-session[403023]: Received disconnect from 173.249.50.59 port 36710:11: Bye Bye [preauth]
Dec 03 01:48:41 compute-0 sshd-session[403023]: Disconnected from invalid user mcserver 173.249.50.59 port 36710 [preauth]
Dec 03 01:48:41 compute-0 sudo[403176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyxnlrttdevxyeskiyeqseomseniadpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726521.2555535-74-34024991503370/AnsiballZ_stat.py'
Dec 03 01:48:41 compute-0 sudo[403176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:41 compute-0 python3.9[403178]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 03 01:48:42 compute-0 sudo[403176]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:42 compute-0 sudo[403254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhsqkkbipjpcmqexdwlguxefsicpusp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764726521.2555535-74-34024991503370/AnsiballZ_file.py'
Dec 03 01:48:42 compute-0 sudo[403254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:48:42 compute-0 python3.9[403256]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/rsyslog.d/10-telemetry.conf _original_basename=10-telemetry.conf recurse=False state=file path=/etc/rsyslog.d/10-telemetry.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 01:48:42 compute-0 sudo[403254]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:43 compute-0 sshd-session[401983]: Connection closed by 192.168.122.30 port 49960
Dec 03 01:48:43 compute-0 sshd-session[401980]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:48:43 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Dec 03 01:48:43 compute-0 systemd[1]: session-59.scope: Consumed 10.661s CPU time.
Dec 03 01:48:43 compute-0 systemd-logind[800]: Session 59 logged out. Waiting for processes to exit.
Dec 03 01:48:43 compute-0 systemd-logind[800]: Removed session 59.
Dec 03 01:48:43 compute-0 ceph-mon[192821]: pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:45 compute-0 ceph-mon[192821]: pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:47 compute-0 ceph-mon[192821]: pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:49 compute-0 ceph-mon[192821]: pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:49 compute-0 nova_compute[351485]: 2025-12-03 01:48:49.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:50 compute-0 sudo[403282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:50 compute-0 sudo[403282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:50 compute-0 sudo[403282]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:50 compute-0 sudo[403307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:48:50 compute-0 sudo[403307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:50 compute-0 sudo[403307]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:50 compute-0 sudo[403332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:50 compute-0 sudo[403332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:50 compute-0 sudo[403332]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:50 compute-0 sudo[403357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:48:50 compute-0 sudo[403357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.638 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.639 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.639 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.640 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:48:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:48:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934064838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.137 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:48:51 compute-0 ceph-mon[192821]: pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:51 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3934064838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:48:51 compute-0 sudo[403357]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:51 compute-0 sudo[403434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:51 compute-0 sudo[403434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:51 compute-0 sudo[403434]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:51 compute-0 sudo[403459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:48:51 compute-0 sudo[403459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:51 compute-0 sudo[403459]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.640 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.641 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4559MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.642 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.642 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:48:51 compute-0 sudo[403484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:51 compute-0 sudo[403484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:51 compute-0 sudo[403484]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.749 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.750 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.774 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:48:51 compute-0 sudo[403509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 03 01:48:51 compute-0 sudo[403509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:52 compute-0 sudo[403509]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:48:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9d9156a8-629b-499f-9f57-8c28607cbb17 does not exist
Dec 03 01:48:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6ff8367a-a5ce-4552-92d6-ce594838e560 does not exist
Dec 03 01:48:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b244ce6f-0e56-4aa2-97c6-b7db03292ac8 does not exist
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588088429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:48:52 compute-0 sudo[403571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:52 compute-0 sudo[403571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.327 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:48:52 compute-0 sudo[403571]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.339 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.357 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.360 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.360 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:48:52 compute-0 sudo[403598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:48:52 compute-0 sudo[403598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:52 compute-0 sudo[403598]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:52 compute-0 sudo[403623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:52 compute-0 sudo[403623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:52 compute-0 sudo[403623]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:52 compute-0 sudo[403648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:48:52 compute-0 sudo[403648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:53 compute-0 ceph-mon[192821]: pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:48:53 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1588088429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.330042379 +0000 UTC m=+0.081329432 container create b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.362 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.363 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.363 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.28102291 +0000 UTC m=+0.032309953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:48:53 compute-0 sshd-session[403680]: Invalid user testuser from 146.190.144.138 port 44996
Dec 03 01:48:53 compute-0 systemd[1]: Started libpod-conmon-b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4.scope.
Dec 03 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.403 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.404 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:53 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:48:53 compute-0 sshd-session[403680]: Received disconnect from 146.190.144.138 port 44996:11: Bye Bye [preauth]
Dec 03 01:48:53 compute-0 sshd-session[403680]: Disconnected from invalid user testuser 146.190.144.138 port 44996 [preauth]
Dec 03 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.476822227 +0000 UTC m=+0.228109320 container init b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.494374578 +0000 UTC m=+0.245661641 container start b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:48:53 compute-0 funny_sanderson[403729]: 167 167
Dec 03 01:48:53 compute-0 systemd[1]: libpod-b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4.scope: Deactivated successfully.
Dec 03 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.503386485 +0000 UTC m=+0.254673578 container attach b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.513639447 +0000 UTC m=+0.264926510 container died b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-22fd9b579a33b26c7a38174fa02559e37ae4322fbfbefe549a0b5688da0e8df2-merged.mount: Deactivated successfully.
Dec 03 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.595464122 +0000 UTC m=+0.346751155 container remove b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:48:53 compute-0 podman[403732]: 2025-12-03 01:48:53.6122156 +0000 UTC m=+0.133256853 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:48:53 compute-0 systemd[1]: libpod-conmon-b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4.scope: Deactivated successfully.
Dec 03 01:48:53 compute-0 podman[403733]: 2025-12-03 01:48:53.657359418 +0000 UTC m=+0.172014449 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 01:48:53 compute-0 podman[403763]: 2025-12-03 01:48:53.690958777 +0000 UTC m=+0.109824725 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:48:53 compute-0 podman[403813]: 2025-12-03 01:48:53.821493701 +0000 UTC m=+0.081487676 container create 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 01:48:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:53 compute-0 podman[403813]: 2025-12-03 01:48:53.783081315 +0000 UTC m=+0.043075360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:48:53 compute-0 systemd[1]: Started libpod-conmon-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope.
Dec 03 01:48:53 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:54 compute-0 podman[403813]: 2025-12-03 01:48:54.016176656 +0000 UTC m=+0.276170651 container init 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:48:54 compute-0 podman[403813]: 2025-12-03 01:48:54.034140479 +0000 UTC m=+0.294134464 container start 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:48:54 compute-0 podman[403813]: 2025-12-03 01:48:54.041478468 +0000 UTC m=+0.301472503 container attach 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 01:48:54 compute-0 nova_compute[351485]: 2025-12-03 01:48:54.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:54 compute-0 nova_compute[351485]: 2025-12-03 01:48:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:48:54 compute-0 nova_compute[351485]: 2025-12-03 01:48:54.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:48:55 compute-0 ceph-mon[192821]: pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:55 compute-0 angry_montalcini[403829]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:48:55 compute-0 angry_montalcini[403829]: --> relative data size: 1.0
Dec 03 01:48:55 compute-0 angry_montalcini[403829]: --> All data devices are unavailable
Dec 03 01:48:55 compute-0 systemd[1]: libpod-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope: Deactivated successfully.
Dec 03 01:48:55 compute-0 podman[403813]: 2025-12-03 01:48:55.278454262 +0000 UTC m=+1.538448237 container died 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:48:55 compute-0 systemd[1]: libpod-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope: Consumed 1.183s CPU time.
Dec 03 01:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530-merged.mount: Deactivated successfully.
Dec 03 01:48:55 compute-0 podman[403813]: 2025-12-03 01:48:55.38913393 +0000 UTC m=+1.649127885 container remove 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:48:55 compute-0 systemd[1]: libpod-conmon-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope: Deactivated successfully.
Dec 03 01:48:55 compute-0 sudo[403648]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:55 compute-0 sudo[403872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:55 compute-0 sudo[403872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:55 compute-0 sudo[403872]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:55 compute-0 sudo[403897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:48:55 compute-0 sudo[403897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:55 compute-0 sudo[403897]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:55 compute-0 sudo[403922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:55 compute-0 sudo[403922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:55 compute-0 sudo[403922]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:55 compute-0 sudo[403947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:48:55 compute-0 sudo[403947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.570857798 +0000 UTC m=+0.080086246 container create 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.538356161 +0000 UTC m=+0.047584639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:48:56 compute-0 systemd[1]: Started libpod-conmon-8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f.scope.
Dec 03 01:48:56 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.734962541 +0000 UTC m=+0.244191039 container init 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.752076049 +0000 UTC m=+0.261304457 container start 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.758636246 +0000 UTC m=+0.267864744 container attach 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:48:56 compute-0 nifty_golick[404029]: 167 167
Dec 03 01:48:56 compute-0 systemd[1]: libpod-8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f.scope: Deactivated successfully.
Dec 03 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.762486716 +0000 UTC m=+0.271715164 container died 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17bb8bcb11f988180464180060cfa5f3aaec78a28c94da768e5718cb6dbc06e-merged.mount: Deactivated successfully.
Dec 03 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.837678352 +0000 UTC m=+0.346906800 container remove 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:48:56 compute-0 systemd[1]: libpod-conmon-8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f.scope: Deactivated successfully.
Dec 03 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.125212036 +0000 UTC m=+0.084412510 container create f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.097286139 +0000 UTC m=+0.056486613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:48:57 compute-0 ceph-mon[192821]: pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:57 compute-0 systemd[1]: Started libpod-conmon-f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d.scope.
Dec 03 01:48:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.307989161 +0000 UTC m=+0.267189665 container init f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.33002158 +0000 UTC m=+0.289222054 container start f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.336892626 +0000 UTC m=+0.296093160 container attach f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:48:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:48:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:58 compute-0 elated_cannon[404068]: {
Dec 03 01:48:58 compute-0 elated_cannon[404068]:     "0": [
Dec 03 01:48:58 compute-0 elated_cannon[404068]:         {
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "devices": [
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "/dev/loop3"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             ],
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_name": "ceph_lv0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_size": "21470642176",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "name": "ceph_lv0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "tags": {
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cluster_name": "ceph",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.crush_device_class": "",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.encrypted": "0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osd_id": "0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.type": "block",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.vdo": "0"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             },
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "type": "block",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "vg_name": "ceph_vg0"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:         }
Dec 03 01:48:58 compute-0 elated_cannon[404068]:     ],
Dec 03 01:48:58 compute-0 elated_cannon[404068]:     "1": [
Dec 03 01:48:58 compute-0 elated_cannon[404068]:         {
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "devices": [
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "/dev/loop4"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             ],
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_name": "ceph_lv1",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_size": "21470642176",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "name": "ceph_lv1",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "tags": {
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cluster_name": "ceph",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.crush_device_class": "",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.encrypted": "0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osd_id": "1",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.type": "block",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.vdo": "0"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             },
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "type": "block",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "vg_name": "ceph_vg1"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:         }
Dec 03 01:48:58 compute-0 elated_cannon[404068]:     ],
Dec 03 01:48:58 compute-0 elated_cannon[404068]:     "2": [
Dec 03 01:48:58 compute-0 elated_cannon[404068]:         {
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "devices": [
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "/dev/loop5"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             ],
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_name": "ceph_lv2",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_size": "21470642176",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "name": "ceph_lv2",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "tags": {
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.cluster_name": "ceph",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.crush_device_class": "",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.encrypted": "0",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osd_id": "2",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.type": "block",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:                 "ceph.vdo": "0"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             },
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "type": "block",
Dec 03 01:48:58 compute-0 elated_cannon[404068]:             "vg_name": "ceph_vg2"
Dec 03 01:48:58 compute-0 elated_cannon[404068]:         }
Dec 03 01:48:58 compute-0 elated_cannon[404068]:     ]
Dec 03 01:48:58 compute-0 elated_cannon[404068]: }
Dec 03 01:48:58 compute-0 systemd[1]: libpod-f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d.scope: Deactivated successfully.
Dec 03 01:48:58 compute-0 podman[404053]: 2025-12-03 01:48:58.259288085 +0000 UTC m=+1.218488559 container died f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 01:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f-merged.mount: Deactivated successfully.
Dec 03 01:48:58 compute-0 podman[404053]: 2025-12-03 01:48:58.371186707 +0000 UTC m=+1.330387181 container remove f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 01:48:58 compute-0 systemd[1]: libpod-conmon-f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d.scope: Deactivated successfully.
Dec 03 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:48:58 compute-0 sudo[403947]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:48:58 compute-0 podman[404078]: 2025-12-03 01:48:58.452877127 +0000 UTC m=+0.137413150 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 03 01:48:58 compute-0 sudo[404106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:58 compute-0 sudo[404106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:58 compute-0 sudo[404106]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:58 compute-0 sudo[404132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:48:58 compute-0 sudo[404132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:58 compute-0 sudo[404132]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:58 compute-0 sudo[404157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:48:58 compute-0 sudo[404157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:58 compute-0 sudo[404157]: pam_unix(sudo:session): session closed for user root
Dec 03 01:48:58 compute-0 sudo[404182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:48:58 compute-0 sudo[404182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:48:59 compute-0 ceph-mon[192821]: pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.60321984 +0000 UTC m=+0.096246007 container create fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:48:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:48:59.612 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:48:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:48:59.613 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:48:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:48:59.614 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.56642288 +0000 UTC m=+0.059449117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:48:59 compute-0 systemd[1]: Started libpod-conmon-fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518.scope.
Dec 03 01:48:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.735939397 +0000 UTC m=+0.228965614 container init fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:48:59 compute-0 podman[158098]: time="2025-12-03T01:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.760343834 +0000 UTC m=+0.253370011 container start fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.767071186 +0000 UTC m=+0.260097433 container attach fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:48:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43960 "" "Go-http-client/1.1"
Dec 03 01:48:59 compute-0 eloquent_moser[404261]: 167 167
Dec 03 01:48:59 compute-0 systemd[1]: libpod-fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518.scope: Deactivated successfully.
Dec 03 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.773487829 +0000 UTC m=+0.266514016 container died fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:48:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8115 "" "Go-http-client/1.1"
Dec 03 01:48:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cfaa1d5376d179e77f03961d16365a9f9a7c7ada4b0247d72e82bb3f8acc5b-merged.mount: Deactivated successfully.
Dec 03 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.865402541 +0000 UTC m=+0.358428688 container remove fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:48:59 compute-0 systemd[1]: libpod-conmon-fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518.scope: Deactivated successfully.
Dec 03 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.175375846 +0000 UTC m=+0.110714880 container create 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.133953104 +0000 UTC m=+0.069292198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:49:00 compute-0 systemd[1]: Started libpod-conmon-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope.
Dec 03 01:49:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.362238108 +0000 UTC m=+0.297577202 container init 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.382393723 +0000 UTC m=+0.317732737 container start 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.389797914 +0000 UTC m=+0.325137008 container attach 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:49:01 compute-0 ceph-mon[192821]: pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:49:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:49:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:49:01 compute-0 confident_jennings[404303]: {
Dec 03 01:49:01 compute-0 confident_jennings[404303]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "osd_id": 2,
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "type": "bluestore"
Dec 03 01:49:01 compute-0 confident_jennings[404303]:     },
Dec 03 01:49:01 compute-0 confident_jennings[404303]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "osd_id": 1,
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "type": "bluestore"
Dec 03 01:49:01 compute-0 confident_jennings[404303]:     },
Dec 03 01:49:01 compute-0 confident_jennings[404303]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "osd_id": 0,
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:49:01 compute-0 confident_jennings[404303]:         "type": "bluestore"
Dec 03 01:49:01 compute-0 confident_jennings[404303]:     }
Dec 03 01:49:01 compute-0 confident_jennings[404303]: }
Dec 03 01:49:01 compute-0 systemd[1]: libpod-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope: Deactivated successfully.
Dec 03 01:49:01 compute-0 podman[404287]: 2025-12-03 01:49:01.593303224 +0000 UTC m=+1.528642268 container died 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:49:01 compute-0 systemd[1]: libpod-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope: Consumed 1.196s CPU time.
Dec 03 01:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31-merged.mount: Deactivated successfully.
Dec 03 01:49:01 compute-0 podman[404287]: 2025-12-03 01:49:01.686921655 +0000 UTC m=+1.622260659 container remove 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 01:49:01 compute-0 systemd[1]: libpod-conmon-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope: Deactivated successfully.
Dec 03 01:49:01 compute-0 sudo[404182]: pam_unix(sudo:session): session closed for user root
Dec 03 01:49:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:49:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:49:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:49:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:49:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5147d833-1c04-473d-ac4d-8b2b78090c3c does not exist
Dec 03 01:49:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cc2d6c82-0bc0-4148-a05a-50853d100791 does not exist
Dec 03 01:49:01 compute-0 podman[404337]: 2025-12-03 01:49:01.789258075 +0000 UTC m=+0.142078635 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec 03 01:49:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:01 compute-0 sudo[404366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:49:01 compute-0 sudo[404366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:49:01 compute-0 sudo[404366]: pam_unix(sudo:session): session closed for user root
Dec 03 01:49:02 compute-0 sudo[404391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:49:02 compute-0 sudo[404391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:49:02 compute-0 sudo[404391]: pam_unix(sudo:session): session closed for user root
Dec 03 01:49:02 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:49:02 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:49:02 compute-0 ceph-mon[192821]: pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:04 compute-0 podman[404417]: 2025-12-03 01:49:04.86272937 +0000 UTC m=+0.109453384 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 03 01:49:04 compute-0 ceph-mon[192821]: pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:04 compute-0 podman[404416]: 2025-12-03 01:49:04.936950948 +0000 UTC m=+0.188971483 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:49:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:06 compute-0 podman[404462]: 2025-12-03 01:49:06.90072357 +0000 UTC m=+0.151653708 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 03 01:49:06 compute-0 ceph-mon[192821]: pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:07 compute-0 sshd-session[404460]: Invalid user bounce from 103.146.202.174 port 38226
Dec 03 01:49:07 compute-0 sshd-session[404460]: Received disconnect from 103.146.202.174 port 38226:11: Bye Bye [preauth]
Dec 03 01:49:07 compute-0 sshd-session[404460]: Disconnected from invalid user bounce 103.146.202.174 port 38226 [preauth]
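[editor's note] The three sshd-session lines above are a typical failed brute-force probe: an invalid username, then a pre-auth disconnect. A minimal sketch for tallying such probes per source address from a journalctl text export (the input path is hypothetical):

    # Minimal sketch: tally "Invalid user" probes per source IP from a
    # journalctl text export. The input path is hypothetical; any file of
    # journal lines like the three sshd-session entries above will do.
    import re
    from collections import Counter

    PATTERN = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")

    def invalid_users_by_ip(path: str) -> Counter:
        hits = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    _user, ip, _port = m.groups()
                    hits[ip] += 1
        return hits

    print(invalid_users_by_ip("/tmp/journal.txt").most_common(5))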
Dec 03 01:49:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:08 compute-0 podman[404482]: 2025-12-03 01:49:08.857673258 +0000 UTC m=+0.114533239 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
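[editor's note] Per the config_data above, node_exporter is published on host port 9100 with a restricted collector set. A minimal smoke test of the scrape endpoint, assuming plain HTTP is accepted; the web.config.file and mounted certs suggest HTTPS with the CA from /etc/node_exporter/tls may be required instead:

    # Minimal smoke test of the node_exporter scrape endpoint published on
    # host port 9100 (per 'ports' above). Assumes plain HTTP; the
    # web.config.file and mounted certs may enforce HTTPS instead.
    import urllib.request

    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_"):
                print(line)  # first node_* sample proves the scrape works
                break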
Dec 03 01:49:08 compute-0 ceph-mon[192821]: pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:10 compute-0 ceph-mon[192821]: pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1312798595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1312798595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/927659794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/927659794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2479349272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2479349272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1312798595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1312798595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/927659794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/927659794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2479349272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:49:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2479349272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
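[editor's note] The audit entries above show client.openstack (most likely a periodic RBD capacity check from the control plane at 192.168.122.10) dispatching "df" and "osd pool get-quota" against the volumes pool. A minimal sketch issuing the same mon commands through the ceph CLI, assuming a reachable cluster and default conf/keyring; JSON field names are per recent Ceph releases:

    # Minimal sketch: issue the same two mon commands seen in the audit
    # lines ("df" and "osd pool get-quota" on the volumes pool) via the
    # ceph CLI. Assumes a reachable cluster and default conf/keyring.
    import json
    import subprocess

    def mon_command(*args: str) -> dict:
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    cluster_df = mon_command("df")
    quota = mon_command("osd", "pool", "get-quota", "volumes")
    print(cluster_df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])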
Dec 03 01:49:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:12 compute-0 ceph-mon[192821]: pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:14 compute-0 ceph-mon[192821]: pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:17 compute-0 ceph-mon[192821]: pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:19 compute-0 ceph-mon[192821]: pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.501 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.502 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.515 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
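[editor's note] The ceilometer debug block above shows the polling manager registering each pollster against a single-threaded executor, running libvirt discovery once per meter, and skipping every meter because no local instances were found. A minimal sketch of that executor pattern, assuming one worker thread as logged; poll() is a stand-in for the real pollsters' get_samples():

    # Minimal sketch of the executor pattern the polling manager logs above:
    # with one worker thread (per "Processing pollsters ... with [1]
    # threads"), queued pollsters run serially and the cycle takes longer.
    # Meter names are from the log; poll() stands in for get_samples(),
    # which in the real agent queries libvirt.
    from concurrent.futures import ThreadPoolExecutor

    POLLSTERS = ["memory.usage", "cpu", "network.incoming.bytes",
                 "disk.device.read.bytes"]  # subset of the meters above

    def poll(name: str) -> str:
        return f"Finished processing pollster [{name}]."

    with ThreadPoolExecutor(max_workers=1) as executor:
        for line in executor.map(poll, POLLSTERS):
            print(line)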
Dec 03 01:49:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:21 compute-0 ceph-mon[192821]: pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:21 compute-0 sshd-session[403025]: Received disconnect from 45.78.219.140 port 55504:11: Bye Bye [preauth]
Dec 03 01:49:21 compute-0 sshd-session[403025]: Disconnected from authenticating user root 45.78.219.140 port 55504 [preauth]
Dec 03 01:49:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:23 compute-0 ceph-mon[192821]: pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:23 compute-0 podman[404507]: 2025-12-03 01:49:23.859894143 +0000 UTC m=+0.108493120 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:49:23 compute-0 podman[404508]: 2025-12-03 01:49:23.882816967 +0000 UTC m=+0.133806251 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:49:23 compute-0 podman[404509]: 2025-12-03 01:49:23.914882418 +0000 UTC m=+0.150243913 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
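The three health_status entries above come from podman's healthcheck timers running the 'test' command declared in each container's config_data; a zero exit status is recorded as health_status=healthy. A minimal sketch of triggering the same check by hand, assuming the podman CLI is on PATH and using a container name taken from the log:

    import subprocess

    # Run the same check podman's healthcheck timer runs for this container;
    # exit status 0 records a 'healthy' health_status event like the ones above.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"], check=True)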
Dec 03 01:49:25 compute-0 ceph-mon[192821]: pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:25 compute-0 sshd-session[404564]: Invalid user cc from 34.66.72.251 port 55572
Dec 03 01:49:25 compute-0 sshd-session[404564]: Received disconnect from 34.66.72.251 port 55572:11: Bye Bye [preauth]
Dec 03 01:49:25 compute-0 sshd-session[404564]: Disconnected from invalid user cc 34.66.72.251 port 55572 [preauth]
Dec 03 01:49:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:27 compute-0 ceph-mon[192821]: pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:49:28
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms']
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:49:28 compute-0 podman[404566]: 2025-12-03 01:49:28.872008453 +0000 UTC m=+0.127047831 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:49:29 compute-0 ceph-mon[192821]: pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:29 compute-0 podman[158098]: time="2025-12-03T01:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:49:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:49:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8123 "" "Go-http-client/1.1"
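The two GET requests above are the libpod REST API being polled over the podman service socket; the podman_exporter config earlier in the log points CONTAINER_HOST at unix:///run/podman/podman.sock. A minimal sketch of the same containers/json query in Python; UnixHTTPConnection is an illustrative helper written here, not a library class:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Speak HTTP over a UNIX socket instead of TCP.
        def __init__(self, socket_path):
            super().__init__("localhost")  # host is unused for UNIX sockets
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint and API version as the GET line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    print(len(json.loads(conn.getresponse().read())), "containers")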
Dec 03 01:49:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:31 compute-0 ceph-mon[192821]: pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:49:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:32 compute-0 podman[404585]: 2025-12-03 01:49:32.877504505 +0000 UTC m=+0.127675769 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, architecture=x86_64)
Dec 03 01:49:33 compute-0 ceph-mon[192821]: pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:35 compute-0 ceph-mon[192821]: pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:35 compute-0 podman[404606]: 2025-12-03 01:49:35.877714897 +0000 UTC m=+0.120360903 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:49:35 compute-0 podman[404605]: 2025-12-03 01:49:35.916082825 +0000 UTC m=+0.165816881 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 01:49:37 compute-0 ceph-mon[192821]: pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:37 compute-0 podman[404648]: 2025-12-03 01:49:37.870136398 +0000 UTC m=+0.126893786 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
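The pg_autoscaler targets above can be reproduced from the logged numbers alone: each pool's pg target is its usage fraction times its bias times a cluster PG budget, and a budget of 300 fits every line here (plausibly the default mon_target_pg_per_osd of 100 times 3 OSDs — an assumption, since the OSD count does not appear in these lines). The "quantized to" step, which rounds to a power of two and applies per-pool minimums, is not reproduced. A sketch:

    # PG budget inferred from the log lines above; 100 PGs/OSD x 3 OSDs is an
    # assumption that happens to reproduce every logged 'pg target' exactly.
    PG_BUDGET = 100 * 3

    def pg_target(usage_fraction, bias):
        return usage_fraction * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr        -> 0.0021557249951162337
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.meta -> 0.0006104707950771635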
Dec 03 01:49:39 compute-0 ceph-mon[192821]: pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:39 compute-0 podman[404668]: 2025-12-03 01:49:39.863356622 +0000 UTC m=+0.114565030 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:49:41 compute-0 ceph-mon[192821]: pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:49:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 5902 writes, 24K keys, 5902 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 5902 writes, 991 syncs, 5.96 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                            Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
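The "writes per sync" figures in the WAL lines of the dump are simply writes divided by syncs over the same window:

    # Cumulative: 5902 writes / 991 syncs; interval: 212 writes / 106 syncs.
    print(round(5902 / 991, 2))  # 5.96
    print(round(212 / 106, 2))   # 2.0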
Dec 03 01:49:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:43 compute-0 ceph-mon[192821]: pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:45 compute-0 ceph-mon[192821]: pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:47 compute-0 ceph-mon[192821]: pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:47 compute-0 sshd-session[404690]: Received disconnect from 173.249.50.59 port 34964:11: Bye Bye [preauth]
Dec 03 01:49:47 compute-0 sshd-session[404690]: Disconnected from authenticating user root 173.249.50.59 port 34964 [preauth]
Dec 03 01:49:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:49:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 7100 writes, 29K keys, 7100 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7100 writes, 1332 syncs, 5.33 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 01:49:49 compute-0 ceph-mon[192821]: pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:49 compute-0 nova_compute[351485]: 2025-12-03 01:49:49.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:49 compute-0 nova_compute[351485]: 2025-12-03 01:49:49.591 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:50 compute-0 nova_compute[351485]: 2025-12-03 01:49:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:51 compute-0 ceph-mon[192821]: pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.595 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.635 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.636 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
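The Acquiring/acquired/released triple around "compute_resources" is oslo_concurrency's named-lock logging: nova serializes resource-tracker methods under one internal lock, and the held time is reported on release. A minimal sketch of the pattern, with an illustrative function body:

    from oslo_concurrency import lockutils

    # Entering the decorated function logs 'Acquiring'/'acquired'; returning
    # logs 'released' together with the held time, as in the lines above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section

    clean_compute_node_cache()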
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.637 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.637 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:49:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:49:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094770831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.125 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
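The resource audit shells out to the ceph CLI with exactly the arguments logged above (nova runs it through oslo_concurrency.processutils). A minimal reproduction, assuming the same client keyring and conf file are readable:

    import json
    import subprocess

    # Same command nova_compute runs in the 'Running cmd (subprocess)' line.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True,
    ).stdout
    stats = json.loads(out)
    print(stats["stats"]["total_avail_bytes"])  # cluster-wide stats block of ceph df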
Dec 03 01:49:52 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4094770831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.748 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.750 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4610MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.750 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.751 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:49:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.981 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.982 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.039 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:49:53 compute-0 sshd-session[404715]: Invalid user autrede from 80.253.31.232 port 55220
Dec 03 01:49:53 compute-0 sshd-session[404715]: Received disconnect from 80.253.31.232 port 55220:11: Bye Bye [preauth]
Dec 03 01:49:53 compute-0 sshd-session[404715]: Disconnected from invalid user autrede 80.253.31.232 port 55220 [preauth]
Dec 03 01:49:53 compute-0 ceph-mon[192821]: pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:49:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699787615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.557 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.567 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.583 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
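The inventory dict above fixes the host's schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio. Worked out with the logged values:

    # Capacity implied by the inventory data in the preceding line.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, f in inventory.items():
        print(rc, (f["total"] - f["reserved"]) * f["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1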
Dec 03 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.584 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.585 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:49:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:54 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3699787615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:49:54 compute-0 nova_compute[351485]: 2025-12-03 01:49:54.564 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:54 compute-0 podman[404739]: 2025-12-03 01:49:54.863804474 +0000 UTC m=+0.117597546 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:49:54 compute-0 podman[404741]: 2025-12-03 01:49:54.894493186 +0000 UTC m=+0.136210769 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:49:54 compute-0 podman[404740]: 2025-12-03 01:49:54.906417771 +0000 UTC m=+0.153102533 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 01:49:55 compute-0 ceph-mon[192821]: pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:55 compute-0 nova_compute[351485]: 2025-12-03 01:49:55.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:49:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 5889 writes, 24K keys, 5889 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 5889 writes, 998 syncs, 5.90 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 01:49:56 compute-0 nova_compute[351485]: 2025-12-03 01:49:56.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:56 compute-0 nova_compute[351485]: 2025-12-03 01:49:56.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:49:56 compute-0 nova_compute[351485]: 2025-12-03 01:49:56.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:49:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 01:49:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec 03 01:49:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036824462' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 03 01:49:57 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14375 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 03 01:49:57 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 03 01:49:57 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 03 01:49:57 compute-0 ceph-mon[192821]: pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1036824462' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 03 01:49:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:49:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:49:58 compute-0 ceph-mon[192821]: from='client.14375 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 03 01:49:59 compute-0 ceph-mon[192821]: pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:49:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:49:59.614 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:49:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:49:59.615 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:49:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:49:59.615 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:49:59 compute-0 podman[158098]: time="2025-12-03T01:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:49:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:49:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8115 "" "Go-http-client/1.1"
Dec 03 01:49:59 compute-0 podman[404800]: 2025-12-03 01:49:59.859873492 +0000 UTC m=+0.115523197 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:49:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
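The exporter errors above come from its ovs-appctl helper: it locates ovsdb-server and ovn-northd through their <name>.<pid>.ctl control sockets, and the dpif-netdev calls need a userspace (netdev) datapath; none of these exist on this compute node, so every probe fails. A sketch of that discovery step, assuming the conventional run directories:

# Sketch of the control-socket discovery that fails in the exporter log.
# The run directories are the conventional defaults and are assumptions.
import glob

targets = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
}
for name, pattern in targets.items():
    socks = glob.glob(pattern)
    print(name, "->", socks if socks else "no control socket files found")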
Dec 03 01:50:01 compute-0 ceph-mon[192821]: pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
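The recurring pgmap lines summarize placement-group state and cluster capacity once per map version. A small sketch that splits one of these lines into its fields; the regex is written against the exact shape logged here, and units are kept as strings:

# Sketch: pull pg counts and capacity out of the recurring pgmap line.
import re

line = ("pgmap v1057: 321 pgs: 321 active+clean; "
        "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")

m = re.search(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail", line)
print(m.groupdict())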
Dec 03 01:50:02 compute-0 sudo[404818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:02 compute-0 sudo[404818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:02 compute-0 sudo[404818]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:02 compute-0 sudo[404843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:50:02 compute-0 sudo[404843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:02 compute-0 sudo[404843]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:02 compute-0 sudo[404868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:02 compute-0 sudo[404868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:02 compute-0 sudo[404868]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:02 compute-0 sudo[404893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:50:02 compute-0 sudo[404893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:03 compute-0 sudo[404893]: pam_unix(sudo:session): session closed for user root
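The COMMAND field above shows cephadm's remote-execution pattern: a hash-suffixed copy of cephadm under /var/lib/ceph/<fsid>/ is run through sudo for subcommands such as gather-facts, which prints a JSON fact report. A sketch of the same invocation from Python; the paths and --timeout value are copied from the log, while the top-level JSON keys printed at the end are assumptions:

# Sketch: replay the gather-facts call from the sudo COMMAND line above.
import json
import subprocess

FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

proc = subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
    capture_output=True, text=True, check=True)
facts = json.loads(proc.stdout)
print(facts.get("hostname"), facts.get("kernel"))  # assumed keys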
Dec 03 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:50:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 754c2b14-b6d4-4b61-90a8-e54fa26dddc5 does not exist
Dec 03 01:50:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9b5027fb-b072-4e92-94c1-d82206deaf46 does not exist
Dec 03 01:50:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c1a962a7-4108-48bc-9be6-7ff4b206005d does not exist
Dec 03 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
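Each handle_command/audit pair above is the monitor (mon.compute-0, the leader) receiving a JSON-framed command from the mgr. The same payloads can be sent programmatically; a sketch using the python3-rados binding, assuming /etc/ceph/ceph.conf and a keyring with mon access are readable:

# Sketch: send one of the mon_command payloads from the audit log via
# python-rados. The command JSON is copied from the log.
import json
import rados

cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, json.loads(outbuf) if ret == 0 and outbuf else outs)
finally:
    cluster.shutdown()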
Dec 03 01:50:03 compute-0 sudo[404950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:03 compute-0 sudo[404950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:03 compute-0 sudo[404950]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:03 compute-0 sudo[404981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:50:03 compute-0 sudo[404981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:03 compute-0 sudo[404981]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:03 compute-0 podman[404974]: 2025-12-03 01:50:03.70433023 +0000 UTC m=+0.143738941 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9)
Dec 03 01:50:03 compute-0 sudo[405020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:03 compute-0 sudo[405020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:03 compute-0 sudo[405020]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:03 compute-0 sudo[405046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:50:03 compute-0 sudo[405046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
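This invocation is how cephadm creates the OSDs here: it re-executes itself with the pinned ceph image and runs `ceph-volume lvm batch` over the three pre-built LVs, piping a config JSON to stdin (`--config-json -`). A sketch of the equivalent argv; the stdin JSON is not logged, so it stays a placeholder, and the run line is left commented out:

# Sketch: the argv behind the sudo COMMAND line above; all values are
# copied from the log except the stdin config, which the log elides.
import subprocess

FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

cmd = ["sudo", "/bin/python3", CEPHADM,
       "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
       "--image", IMAGE, "--timeout", "895",
       "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
       "lvm", "batch", "--no-auto",
       "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
       "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"]
# subprocess.run(cmd, input="{}", text=True, check=True)  # config JSON elided in the log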
Dec 03 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.542840024 +0000 UTC m=+0.054490563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.708935001 +0000 UTC m=+0.220585470 container create 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:50:04 compute-0 systemd[1]: Started libpod-conmon-3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b.scope.
Dec 03 01:50:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.878366543 +0000 UTC m=+0.390017062 container init 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.896521373 +0000 UTC m=+0.408171842 container start 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.907364438 +0000 UTC m=+0.419014957 container attach 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 01:50:04 compute-0 exciting_khayyam[405124]: 167 167
Dec 03 01:50:04 compute-0 systemd[1]: libpod-3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b.scope: Deactivated successfully.
Dec 03 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.912504102 +0000 UTC m=+0.424154561 container died 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbb66b2985e21a9fd26b4cc2f821bfb19dbb838dda15e24af0e79430402c1be2-merged.mount: Deactivated successfully.
Dec 03 01:50:05 compute-0 podman[405109]: 2025-12-03 01:50:05.008985633 +0000 UTC m=+0.520636092 container remove 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:50:05 compute-0 systemd[1]: libpod-conmon-3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b.scope: Deactivated successfully.
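The create, init, start, attach, died, remove sequence above is one short-lived cephadm helper container (exciting_khayyam) passing through its whole libpod lifecycle; podman also publishes each transition on its events API. A sketch that follows the same stream over the socket used earlier (path and API version are again assumptions):

# Sketch: stream libpod events matching the lifecycle entries above.
# The stream is unbounded, so this just relays raw bytes until killed.
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/podman/podman.sock")  # assumption: rootful socket path
    s.sendall(b"GET /v4.9.3/libpod/events HTTP/1.0\r\nHost: d\r\n\r\n")
    while chunk := s.recv(4096):
        print(chunk.decode(errors="replace"), end="")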
Dec 03 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.271869401 +0000 UTC m=+0.094917108 container create f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.234011407 +0000 UTC m=+0.057059164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:50:05 compute-0 systemd[1]: Started libpod-conmon-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope.
Dec 03 01:50:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.450689636 +0000 UTC m=+0.273737353 container init f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.466797509 +0000 UTC m=+0.289845196 container start f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.471854571 +0000 UTC m=+0.294902288 container attach f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:50:05 compute-0 ceph-mon[192821]: pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:06 compute-0 frosty_hoover[405161]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:50:06 compute-0 frosty_hoover[405161]: --> relative data size: 1.0
Dec 03 01:50:06 compute-0 frosty_hoover[405161]: --> All data devices are unavailable
Dec 03 01:50:06 compute-0 systemd[1]: libpod-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope: Deactivated successfully.
Dec 03 01:50:06 compute-0 systemd[1]: libpod-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope: Consumed 1.294s CPU time.
Dec 03 01:50:06 compute-0 podman[405215]: 2025-12-03 01:50:06.899016157 +0000 UTC m=+0.047939788 container died f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:50:06 compute-0 podman[405189]: 2025-12-03 01:50:06.902496205 +0000 UTC m=+0.152768604 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37-merged.mount: Deactivated successfully.
Dec 03 01:50:06 compute-0 podman[405215]: 2025-12-03 01:50:06.985100876 +0000 UTC m=+0.134024457 container remove f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:50:06 compute-0 podman[405188]: 2025-12-03 01:50:06.989050887 +0000 UTC m=+0.240769127 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 03 01:50:07 compute-0 systemd[1]: libpod-conmon-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope: Deactivated successfully.
Dec 03 01:50:07 compute-0 sudo[405046]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:07 compute-0 sudo[405248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:07 compute-0 sudo[405248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:07 compute-0 sudo[405248]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:07 compute-0 sudo[405273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:50:07 compute-0 sudo[405273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:07 compute-0 sudo[405273]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:07 compute-0 sudo[405298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:07 compute-0 sudo[405298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:07 compute-0 sudo[405298]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:07 compute-0 ceph-mon[192821]: pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:07 compute-0 sudo[405323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:50:07 compute-0 sudo[405323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.133278503 +0000 UTC m=+0.071999145 container create de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.103787904 +0000 UTC m=+0.042508536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:50:08 compute-0 systemd[1]: Started libpod-conmon-de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd.scope.
Dec 03 01:50:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.272077893 +0000 UTC m=+0.210798585 container init de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.301693175 +0000 UTC m=+0.240413777 container start de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.307089727 +0000 UTC m=+0.245810399 container attach de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:50:08 compute-0 wizardly_swartz[405402]: 167 167
Dec 03 01:50:08 compute-0 systemd[1]: libpod-de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd.scope: Deactivated successfully.
Dec 03 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.314253618 +0000 UTC m=+0.252974240 container died de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:50:08 compute-0 podman[405399]: 2025-12-03 01:50:08.353910683 +0000 UTC m=+0.147216228 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, io.buildah.version=1.33.7)
Dec 03 01:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f9b2a8ca99656622e3d15e82e474df4e88606d7e02eb0d9d12712f945b1d9b-merged.mount: Deactivated successfully.
Dec 03 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.389580875 +0000 UTC m=+0.328301497 container remove de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:50:08 compute-0 systemd[1]: libpod-conmon-de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd.scope: Deactivated successfully.
Dec 03 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.690611805 +0000 UTC m=+0.097870381 container create 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.655149048 +0000 UTC m=+0.062407684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:50:08 compute-0 systemd[1]: Started libpod-conmon-5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6.scope.
Dec 03 01:50:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.859183002 +0000 UTC m=+0.266441578 container init 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.889887735 +0000 UTC m=+0.297146301 container start 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.895223375 +0000 UTC m=+0.302481991 container attach 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:50:09 compute-0 ceph-mon[192821]: pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]: {
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:     "0": [
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:         {
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "devices": [
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "/dev/loop3"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             ],
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_name": "ceph_lv0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_size": "21470642176",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "name": "ceph_lv0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "tags": {
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cluster_name": "ceph",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.crush_device_class": "",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.encrypted": "0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osd_id": "0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.type": "block",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.vdo": "0"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             },
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "type": "block",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "vg_name": "ceph_vg0"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:         }
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:     ],
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:     "1": [
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:         {
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "devices": [
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "/dev/loop4"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             ],
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_name": "ceph_lv1",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_size": "21470642176",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "name": "ceph_lv1",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "tags": {
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cluster_name": "ceph",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.crush_device_class": "",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.encrypted": "0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osd_id": "1",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.type": "block",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.vdo": "0"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             },
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "type": "block",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "vg_name": "ceph_vg1"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:         }
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:     ],
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:     "2": [
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:         {
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "devices": [
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "/dev/loop5"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             ],
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_name": "ceph_lv2",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_size": "21470642176",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "name": "ceph_lv2",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "tags": {
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.cluster_name": "ceph",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.crush_device_class": "",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.encrypted": "0",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osd_id": "2",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.type": "block",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:                 "ceph.vdo": "0"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             },
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "type": "block",
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:             "vg_name": "ceph_vg2"
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:         }
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]:     ]
Dec 03 01:50:09 compute-0 loving_varahamihira[405457]: }
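The container output above is the full `ceph-volume lvm list --format json` report: a map from OSD id to the logical volumes backing that OSD, which is how cephadm confirms the three OSDs already exist on these LVs. A short sketch that condenses such a report to one line per OSD; the embedded excerpt trims the logged document down to osd.0:

# Sketch: condense a `ceph-volume lvm list --format json` report.
import json

raw = """
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {"ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}
    }
  ]
}
"""

report = json.loads(raw)
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={lv['tags']['ceph.osd_fsid']}")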
Dec 03 01:50:09 compute-0 systemd[1]: libpod-5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6.scope: Deactivated successfully.
Dec 03 01:50:09 compute-0 podman[405441]: 2025-12-03 01:50:09.767980922 +0000 UTC m=+1.175239478 container died 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151-merged.mount: Deactivated successfully.
Dec 03 01:50:09 compute-0 podman[405441]: 2025-12-03 01:50:09.857771225 +0000 UTC m=+1.265029801 container remove 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:50:09 compute-0 systemd[1]: libpod-conmon-5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6.scope: Deactivated successfully.
Dec 03 01:50:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:09 compute-0 sudo[405323]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:10 compute-0 podman[405478]: 2025-12-03 01:50:10.002466381 +0000 UTC m=+0.090252697 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:50:10 compute-0 sudo[405485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:10 compute-0 sudo[405485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:10 compute-0 sudo[405485]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:10 compute-0 sudo[405526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:50:10 compute-0 sudo[405526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:10 compute-0 sudo[405526]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:10 compute-0 sudo[405552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:10 compute-0 sudo[405552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:10 compute-0 sudo[405552]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:10 compute-0 sudo[405577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:50:10 compute-0 sudo[405577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
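The sudo COMMAND line above shows the invocation pattern cephadm uses on this host: a copied cephadm binary under /var/lib/ceph/<fsid>/, pinned to the ceph image by digest, running a ceph-volume subcommand inside a throwaway container. A sketch of issuing the same call from Python; every path, fsid, and image value below is copied from the log line, but the subprocess wrapper itself is illustrative, not the orchestrator's actual code:

#!/usr/bin/env python3
# Illustrative re-issue of the logged cephadm ceph-volume call. All constants
# are taken verbatim from the sudo COMMAND line above; requires root/sudo and
# the cephadm copy to exist at that path.
import json
import subprocess

FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

cmd = [
    "sudo", "/bin/python3", CEPHADM,
    "--image", IMAGE,
    "--timeout", "895",
    "ceph-volume", "--fsid", FSID,
    "--", "raw", "list", "--format", "json",
]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
print(json.dumps(json.loads(out), indent=4))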
Dec 03 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.836064626 +0000 UTC m=+0.083731574 container create 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.802682528 +0000 UTC m=+0.050349506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:50:10 compute-0 systemd[1]: Started libpod-conmon-5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4.scope.
Dec 03 01:50:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.97282658 +0000 UTC m=+0.220493578 container init 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.99275397 +0000 UTC m=+0.240420918 container start 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:50:11 compute-0 podman[405642]: 2025-12-03 01:50:11.000398795 +0000 UTC m=+0.248065773 container attach 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:50:11 compute-0 epic_mclaren[405658]: 167 167
Dec 03 01:50:11 compute-0 systemd[1]: libpod-5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4.scope: Deactivated successfully.
Dec 03 01:50:11 compute-0 podman[405642]: 2025-12-03 01:50:11.006365152 +0000 UTC m=+0.254032090 container died 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-175bb76ab1b7bd3194f448bc2722cd95cc4130cf95d73d5e9d7363f2c50fc834-merged.mount: Deactivated successfully.
Dec 03 01:50:11 compute-0 podman[405642]: 2025-12-03 01:50:11.062654714 +0000 UTC m=+0.310321632 container remove 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:50:11 compute-0 systemd[1]: libpod-conmon-5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4.scope: Deactivated successfully.
Dec 03 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.343779944 +0000 UTC m=+0.084555197 container create 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.305015905 +0000 UTC m=+0.045791198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:50:11 compute-0 systemd[1]: Started libpod-conmon-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope.
Dec 03 01:50:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.491019262 +0000 UTC m=+0.231794475 container init 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.517045064 +0000 UTC m=+0.257820287 container start 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.523355181 +0000 UTC m=+0.264130394 container attach 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:50:11 compute-0 ceph-mon[192821]: pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]: {
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "osd_id": 2,
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "type": "bluestore"
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:     },
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "osd_id": 1,
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "type": "bluestore"
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:     },
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "osd_id": 0,
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:         "type": "bluestore"
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]:     }
Dec 03 01:50:12 compute-0 nervous_engelbart[405696]: }
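The block printed by nervous_engelbart is the `ceph-volume raw list --format json` result requested two steps earlier: a dict keyed by OSD uuid, each entry naming the backing device-mapper path, osd_id, and objectstore type. A short sketch turning it into an osd-to-device map, assuming the JSON was saved to raw_list.json (hypothetical filename):

#!/usr/bin/env python3
# Minimal sketch: map OSDs to devices from the `ceph-volume raw list` JSON
# logged above. raw_list.json is a hypothetical capture file; the field names
# (device, osd_id, type, ceph_fsid) match the log exactly.
import json

with open("raw_list.json") as f:
    raw = json.load(f)

for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{info['osd_id']} ({info['type']}): {info['device']}"
          f"  fsid={info['ceph_fsid']}  uuid={osd_uuid}")

Run against the logged payload this prints osd.0 through osd.2 on /dev/mapper/ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all bluestore, all in the same cluster fsid.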
Dec 03 01:50:12 compute-0 systemd[1]: libpod-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope: Deactivated successfully.
Dec 03 01:50:12 compute-0 podman[405680]: 2025-12-03 01:50:12.757120603 +0000 UTC m=+1.497895816 container died 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:50:12 compute-0 systemd[1]: libpod-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope: Consumed 1.238s CPU time.
Dec 03 01:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87-merged.mount: Deactivated successfully.
Dec 03 01:50:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:12 compute-0 podman[405680]: 2025-12-03 01:50:12.854987453 +0000 UTC m=+1.595762696 container remove 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:50:12 compute-0 systemd[1]: libpod-conmon-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope: Deactivated successfully.
Dec 03 01:50:12 compute-0 sudo[405577]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:50:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:50:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:50:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:50:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 420ec8c0-112d-40ca-ac4f-268d8e658c85 does not exist
Dec 03 01:50:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b5250900-8546-40b5-901e-9aa64fdc34a2 does not exist
Dec 03 01:50:13 compute-0 sudo[405741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:50:13 compute-0 sudo[405741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:13 compute-0 sudo[405741]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:13 compute-0 sudo[405766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:50:13 compute-0 sudo[405766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:50:13 compute-0 sudo[405766]: pam_unix(sudo:session): session closed for user root
Dec 03 01:50:13 compute-0 ceph-mon[192821]: pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:50:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:50:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:15 compute-0 ceph-mon[192821]: pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec 03 01:50:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3063780719' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 03 01:50:17 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 03 01:50:17 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 03 01:50:17 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 03 01:50:17 compute-0 ceph-mon[192821]: pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3063780719' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 03 01:50:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:18 compute-0 ceph-mon[192821]: from='client.14385 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 03 01:50:19 compute-0 ceph-mon[192821]: pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:21 compute-0 ceph-mon[192821]: pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:23 compute-0 ceph-mon[192821]: pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:26 compute-0 ceph-mon[192821]: pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:26 compute-0 podman[405792]: 2025-12-03 01:50:26.160792974 +0000 UTC m=+0.082330391 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 01:50:26 compute-0 podman[405794]: 2025-12-03 01:50:26.203155158 +0000 UTC m=+0.101730408 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:50:26 compute-0 podman[405793]: 2025-12-03 01:50:26.219810027 +0000 UTC m=+0.125325422 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:50:27 compute-0 ceph-mon[192821]: pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:50:28
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'backups', 'cephfs.cephfs.data']
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:50:29 compute-0 ceph-mon[192821]: pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:29 compute-0 sshd-session[405850]: Invalid user testuser from 103.146.202.174 port 37898
Dec 03 01:50:29 compute-0 sshd-session[405850]: Received disconnect from 103.146.202.174 port 37898:11: Bye Bye [preauth]
Dec 03 01:50:29 compute-0 sshd-session[405850]: Disconnected from invalid user testuser 103.146.202.174 port 37898 [preauth]
Dec 03 01:50:29 compute-0 podman[158098]: time="2025-12-03T01:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:50:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:50:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8111 "" "Go-http-client/1.1"
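The two access-log lines above are the podman service answering libpod REST calls (this is what podman_exporter polls; its config earlier sets CONTAINER_HOST=unix:///run/podman/podman.sock). A sketch of making the same containers/json request over that socket; the request path and API version come from the log, while the socket path is taken from the exporter config and the stdlib unix-socket shim is an assumption, not podman client code:

#!/usr/bin/env python3
# Sketch: issue the GET /v4.9.3/libpod/containers/json call seen in the
# access log, over the podman unix socket. Requires permission to read
# /run/podman/podman.sock (root on a stock install).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix-domain socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Id"][:12], c.get("Names", ["?"])[0], c.get("State"))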
Dec 03 01:50:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:30 compute-0 podman[405852]: 2025-12-03 01:50:30.904251944 +0000 UTC m=+0.158053964 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible)
Dec 03 01:50:31 compute-0 ceph-mon[192821]: pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 03 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:50:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:33 compute-0 ceph-mon[192821]: pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:34 compute-0 podman[405871]: 2025-12-03 01:50:34.870423698 +0000 UTC m=+0.123476230 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, version=9.4)
Dec 03 01:50:35 compute-0 ceph-mon[192821]: pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:37 compute-0 ceph-mon[192821]: pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:37 compute-0 podman[405891]: 2025-12-03 01:50:37.863307617 +0000 UTC m=+0.109050504 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:50:37 compute-0 podman[405890]: 2025-12-03 01:50:37.896756649 +0000 UTC m=+0.151473308 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
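The pg_autoscaler lines above are reproducible arithmetic: each logged "pg target" equals used_ratio × bias × 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times the three OSDs listed earlier; that factor is an inference checked against the numbers, not something stated in the log. A sketch verifying it against the logged values:

#!/usr/bin/env python3
# Reproduce the pg_autoscaler "pg target" values logged above.
# TARGET_PGS = 100 * 3 is an assumption (mon_target_pg_per_osd x 3 OSDs);
# ratios, biases, and logged targets are copied verbatim from the log.
POOLS = [
    # (pool, used_ratio, bias, logged pg target)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
    (".rgw.root",          2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
    ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
]
TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd * number of OSDs

for name, ratio, bias, logged in POOLS:
    computed = ratio * bias * TARGET_PGS
    print(f"{name}: computed={computed:.12g} logged={logged:.12g}")

The "quantized to" figures come from a later rounding step (power-of-two pg_num with floors such as pg_num_min), which this sketch deliberately leaves out.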
Dec 03 01:50:38 compute-0 podman[405938]: 2025-12-03 01:50:38.895806315 +0000 UTC m=+0.143446522 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:50:39 compute-0 ceph-mon[192821]: pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:40 compute-0 podman[405959]: 2025-12-03 01:50:40.877083396 +0000 UTC m=+0.128900613 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:50:41 compute-0 ceph-mon[192821]: pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:43 compute-0 ceph-mon[192821]: pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:45 compute-0 ceph-mon[192821]: pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:50:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1476334581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:50:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:50:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1476334581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:50:47 compute-0 ceph-mon[192821]: pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1476334581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:50:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1476334581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:50:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:49 compute-0 ceph-mon[192821]: pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:49 compute-0 sshd-session[405981]: Received disconnect from 146.190.144.138 port 42778:11: Bye Bye [preauth]
Dec 03 01:50:49 compute-0 sshd-session[405981]: Disconnected from authenticating user root 146.190.144.138 port 42778 [preauth]
Dec 03 01:50:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:50 compute-0 sshd-session[405984]: Invalid user foundry from 173.249.50.59 port 33226
Dec 03 01:50:51 compute-0 sshd-session[405984]: Received disconnect from 173.249.50.59 port 33226:11: Bye Bye [preauth]
Dec 03 01:50:51 compute-0 sshd-session[405984]: Disconnected from invalid user foundry 173.249.50.59 port 33226 [preauth]
Dec 03 01:50:51 compute-0 ceph-mon[192821]: pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.595 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.595 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.631 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.631 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.631 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:50:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:50:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428749513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.127 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:50:52 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3428749513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.643 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.644 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4585MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.645 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.645 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.780 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.781 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.815 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:50:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:50:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983472691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:50:53 compute-0 ceph-mon[192821]: pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:53 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1983472691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.327 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.338 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.358 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.362 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.363 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:50:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:54 compute-0 nova_compute[351485]: 2025-12-03 01:50:54.344 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:54 compute-0 nova_compute[351485]: 2025-12-03 01:50:54.344 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:55 compute-0 ceph-mon[192821]: pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:55 compute-0 nova_compute[351485]: 2025-12-03 01:50:55.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:56 compute-0 podman[406030]: 2025-12-03 01:50:56.872462184 +0000 UTC m=+0.121114674 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 03 01:50:56 compute-0 podman[406031]: 2025-12-03 01:50:56.873511534 +0000 UTC m=+0.114592460 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 01:50:56 compute-0 podman[406032]: 2025-12-03 01:50:56.923589845 +0000 UTC m=+0.159183287 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:50:57 compute-0 ceph-mon[192821]: pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:57 compute-0 nova_compute[351485]: 2025-12-03 01:50:57.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:50:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:50:58 compute-0 nova_compute[351485]: 2025-12-03 01:50:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:50:58 compute-0 nova_compute[351485]: 2025-12-03 01:50:58.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:50:59 compute-0 ceph-mon[192821]: pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:50:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:50:59.615 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:50:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:50:59.616 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:50:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:50:59.616 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:50:59 compute-0 podman[158098]: time="2025-12-03T01:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:50:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:50:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8116 "" "Go-http-client/1.1"
Dec 03 01:51:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:01 compute-0 ceph-mon[192821]: pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:51:01 compute-0 podman[406088]: 2025-12-03 01:51:01.853190542 +0000 UTC m=+0.110007340 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:51:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:03 compute-0 ceph-mon[192821]: pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:05 compute-0 sshd-session[406107]: Accepted publickey for zuul from 38.102.83.18 port 39508 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 01:51:05 compute-0 systemd-logind[800]: New session 60 of user zuul.
Dec 03 01:51:05 compute-0 systemd[1]: Started Session 60 of User zuul.
Dec 03 01:51:05 compute-0 sshd-session[406107]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 01:51:05 compute-0 podman[406109]: 2025-12-03 01:51:05.283397633 +0000 UTC m=+0.127666028 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9)
Dec 03 01:51:05 compute-0 ceph-mon[192821]: pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:06 compute-0 python3[406303]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:51:07 compute-0 ceph-mon[192821]: pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:08 compute-0 podman[406471]: 2025-12-03 01:51:08.905511251 +0000 UTC m=+0.148037042 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:51:08 compute-0 podman[406467]: 2025-12-03 01:51:08.956472177 +0000 UTC m=+0.201966141 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:51:09 compute-0 podman[406553]: 2025-12-03 01:51:09.071431526 +0000 UTC m=+0.105364990 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 01:51:09 compute-0 sudo[406595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sumoafmbusmsrebssuctjrfvfjfdmzwr ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726668.3833334-38559-277461522758552/AnsiballZ_command.py'
Dec 03 01:51:09 compute-0 sudo[406595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:51:09 compute-0 python3[406601]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:51:09 compute-0 ceph-mon[192821]: pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:09 compute-0 sudo[406595]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:10 compute-0 sudo[406752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwmhecuqeonwauzutgkctqutxwucjdlz ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726669.978958-38570-144989426430850/AnsiballZ_command.py'
Dec 03 01:51:10 compute-0 sudo[406752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:51:10 compute-0 python3[406755]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:51:11 compute-0 sshd-session[406754]: Invalid user frontend from 80.253.31.232 port 37316
Dec 03 01:51:11 compute-0 sshd-session[406754]: Received disconnect from 80.253.31.232 port 37316:11: Bye Bye [preauth]
Dec 03 01:51:11 compute-0 sshd-session[406754]: Disconnected from invalid user frontend 80.253.31.232 port 37316 [preauth]
Dec 03 01:51:11 compute-0 podman[406759]: 2025-12-03 01:51:11.419015085 +0000 UTC m=+0.149412131 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 01:51:11 compute-0 ceph-mon[192821]: pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:12 compute-0 sudo[406752]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:13 compute-0 sudo[406859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:13 compute-0 sudo[406859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:13 compute-0 sudo[406859]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:13 compute-0 ceph-mon[192821]: pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:13 compute-0 sudo[406904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:51:13 compute-0 sudo[406904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:13 compute-0 sudo[406904]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:13 compute-0 sudo[406945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:13 compute-0 sudo[406945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:13 compute-0 sudo[406945]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:13 compute-0 sudo[407001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 01:51:13 compute-0 sudo[407001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:13 compute-0 python3[407013]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 01:51:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:14 compute-0 sudo[407001]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:51:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:51:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:14 compute-0 sudo[407077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:14 compute-0 sudo[407077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:14 compute-0 sudo[407077]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:14 compute-0 sudo[407125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:51:14 compute-0 sudo[407125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:14 compute-0 sudo[407125]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:14 compute-0 sudo[407179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:14 compute-0 sudo[407179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:14 compute-0 sudo[407179]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:14 compute-0 sudo[407227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:51:14 compute-0 sudo[407227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:14 compute-0 sudo[407309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmdoaxkcjpxxyflzmaqmtzkdotzjwkjg ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726674.380837-38614-85619253094535/AnsiballZ_setup.py'
Dec 03 01:51:14 compute-0 sudo[407309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:51:15 compute-0 ceph-mon[192821]: pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:15 compute-0 python3[407316]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 01:51:15 compute-0 sudo[407227]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:15 compute-0 sudo[407341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:15 compute-0 sudo[407341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:15 compute-0 sudo[407341]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:15 compute-0 sudo[407366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:51:15 compute-0 sudo[407366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:15 compute-0 sudo[407366]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:15 compute-0 sshd-session[406153]: Received disconnect from 45.78.219.140 port 38020:11: Bye Bye [preauth]
Dec 03 01:51:15 compute-0 sshd-session[406153]: Disconnected from authenticating user root 45.78.219.140 port 38020 [preauth]
Dec 03 01:51:15 compute-0 sudo[407400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:15 compute-0 sudo[407400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:15 compute-0 sudo[407400]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:15 compute-0 sudo[407438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- inventory --format=json-pretty --filter-for-batch
Dec 03 01:51:15 compute-0 sudo[407438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.491273338 +0000 UTC m=+0.086000984 container create 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.45299552 +0000 UTC m=+0.047723186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:16 compute-0 systemd[1]: Started libpod-conmon-24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3.scope.
Dec 03 01:51:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.63828739 +0000 UTC m=+0.233015106 container init 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.658384457 +0000 UTC m=+0.253112103 container start 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.665226759 +0000 UTC m=+0.259954405 container attach 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:51:16 compute-0 nifty_hypatia[407573]: 167 167
Dec 03 01:51:16 compute-0 systemd[1]: libpod-24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3.scope: Deactivated successfully.
Dec 03 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.67165057 +0000 UTC m=+0.266378226 container died 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-70b47958293d763e373e5f3abba9f940c491c23416d758b6713d69329a1aa575-merged.mount: Deactivated successfully.
Dec 03 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.756019497 +0000 UTC m=+0.350747153 container remove 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:51:16 compute-0 systemd[1]: libpod-conmon-24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3.scope: Deactivated successfully.
Dec 03 01:51:16 compute-0 sudo[407309]: pam_unix(sudo:session): session closed for user root
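[editor's note] The block above is one complete podman lifecycle for a short-lived utility container (create, init, start, attach, died, remove, all inside roughly 120 ms); its only output is the "167 167" line, the uid/gid pair the ceph image runs as. When auditing a journal like this, pairing each container ID's first and last events gives the lifetime of these probe containers at a glance. Below is a minimal sketch of that pairing; the regex is an assumption derived from the exact line shape seen here, not a stable podman output format, and "journal.txt" stands in for a hypothetical export of this journal.

```python
import re
from datetime import datetime

# Matches the podman event lines above, e.g.
#   "... podman[407541]: 2025-12-03 01:51:16.63828739 +0000 UTC m=+0.233015106 container init 24ae28b6..."
# Pattern inferred from this journal; not a documented podman format.
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+\S+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
)

def container_lifetimes(lines):
    """Return {container_id: (first_event_ts, last_event_ts)} from journal lines."""
    seen = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        head, frac = m.group("ts").split(".")
        # podman prints a variable number of fractional digits; normalize to 6.
        ts = datetime.fromisoformat(f"{head}.{frac[:6].ljust(6, '0')}")
        cid = m.group("cid")
        first, last = seen.get(cid, (ts, ts))
        seen[cid] = (min(first, ts), max(last, ts))
    return seen

if __name__ == "__main__":
    with open("journal.txt") as fh:  # hypothetical journal export
        for cid, (first, last) in container_lifetimes(fh).items():
            print(f"{cid[:12]}  lived {(last - first).total_seconds():.3f}s")
```

For the container above this would report about 0.12 s between the create and remove events, which is typical of the cephadm-style "run a command in a throwaway container" pattern this journal shows repeatedly.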
Dec 03 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.042887139 +0000 UTC m=+0.081668981 container create 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.011372112 +0000 UTC m=+0.050154024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:17 compute-0 systemd[1]: Started libpod-conmon-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope.
Dec 03 01:51:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
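[editor's note] The four kernel lines above are the stock warning for an xfs filesystem mounted without the newer "bigtime" timestamp format: inode timestamps are stored as signed 32-bit seconds, so they run out at 0x7fffffff seconds after the Unix epoch. The "until 2038" in the message falls straight out of that constant, as this small check shows:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit value; xfs without the
# bigtime feature stores inode timestamps within that range.
limit = 0x7FFFFFFF
print(hex(limit), "=", limit, "seconds since the epoch")
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00, i.e. "supports timestamps until 2038"
```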
Dec 03 01:51:17 compute-0 ceph-mon[192821]: pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.247949437 +0000 UTC m=+0.286731349 container init 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.267768055 +0000 UTC m=+0.306549917 container start 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.275617416 +0000 UTC m=+0.314399348 container attach 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:51:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:18 compute-0 sudo[407770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prwqiiejaazmosphgnpnmqwyzthzujdy ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726677.559755-38643-184512896105732/AnsiballZ_command.py'
Dec 03 01:51:18 compute-0 sudo[407770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:51:18 compute-0 python3[407773]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:51:18 compute-0 sudo[407770]: pam_unix(sudo:session): session closed for user root
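[editor's note] The ansible task above shells out to `podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute` to verify the agent container exists; the same probe repeats for node_exporter a moment later. A minimal Python sketch of the same health check, written without a shell so that "no matching container" is an empty list rather than grep's exit code 1 (which Ansible's command module would otherwise report as a failure unless handled):

```python
import subprocess

def container_status(name_fragment: str) -> list[str]:
    """List 'name status' lines for containers whose name contains name_fragment,
    mirroring the `podman ps -a --format ... | grep ...` probe in the journal."""
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [line for line in out.splitlines() if name_fragment in line]

print(container_status("ceilometer_agent_compute"))
```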
Dec 03 01:51:19 compute-0 ceph-mon[192821]: pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:19 compute-0 sudo[409415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiiikfbvpyidohomgvwiuqpzdeugfrfa ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764726678.9534364-38660-57515282914391/AnsiballZ_command.py'
Dec 03 01:51:19 compute-0 sudo[409415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.502 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.503 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.515 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.516 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
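[editor's note] The DEBUG burst above is one complete ceilometer polling cycle: the manager warns that the [pollsters] source has more pollsters than worker threads, registers each pollster with a single-worker ThreadPoolExecutor, runs the shared local_instances discovery, caches its empty result in the per-cycle discovery cache, and then skips every meter because no VM instances exist on this compute node yet. The toy model below reproduces that register/discover/skip flow under the assumption that this simplified shape matches the logged behavior; it is not ceilometer's actual classes or API.

```python
from concurrent.futures import ThreadPoolExecutor

def local_instances():
    # On this node, discovery finds no running VMs this cycle.
    return []

def run_cycle(pollsters, workers=1):
    discovery_cache = {}  # shared across pollsters within one cycle

    def run_one(name):
        # Discovery runs once per cycle; later pollsters reuse the cached result.
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = local_instances()
        resources = discovery_cache["local_instances"]
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        print(f"Polling {name} for {len(resources)} resources")

    # Toy model: with workers > 1 the cache would need a lock.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for name in pollsters:
            pool.submit(run_one, name)  # queued; only `workers` run at once

run_cycle(["memory.usage", "network.outgoing.packets", "cpu"])
```

With more pollsters than workers, each submitted task waits its turn in the executor's queue, which is exactly what the "number of pollsters ... is bigger than the number of worker threads" warning at the top of the cycle is flagging.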
Dec 03 01:51:19 compute-0 python3[409462]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 01:51:19 compute-0 sudo[409415]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]: [
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:     {
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "available": false,
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "ceph_device": false,
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "lsm_data": {},
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "lvs": [],
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "path": "/dev/sr0",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "rejected_reasons": [
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "Has a FileSystem",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "Insufficient space (<5GB)"
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         ],
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         "sys_api": {
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "actuators": null,
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "device_nodes": "sr0",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "devname": "sr0",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "human_readable_size": "482.00 KB",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "id_bus": "ata",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "model": "QEMU DVD-ROM",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "nr_requests": "2",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "parent": "/dev/sr0",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "partitions": {},
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "path": "/dev/sr0",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "removable": "1",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "rev": "2.5+",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "ro": "0",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "rotational": "1",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "sas_address": "",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "sas_device_handle": "",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "scheduler_mode": "mq-deadline",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "sectors": 0,
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "sectorsize": "2048",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "size": 493568.0,
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "support_discard": "2048",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "type": "disk",
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:             "vendor": "QEMU"
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:         }
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]:     }
Dec 03 01:51:19 compute-0 xenodochial_napier[407636]: ]
Dec 03 01:51:19 compute-0 systemd[1]: libpod-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope: Deactivated successfully.
Dec 03 01:51:19 compute-0 podman[407617]: 2025-12-03 01:51:19.843214054 +0000 UTC m=+2.881995916 container died 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 01:51:19 compute-0 systemd[1]: libpod-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope: Consumed 2.684s CPU time.
Dec 03 01:51:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258-merged.mount: Deactivated successfully.
Dec 03 01:51:19 compute-0 podman[407617]: 2025-12-03 01:51:19.933340173 +0000 UTC m=+2.972122015 container remove 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:51:19 compute-0 systemd[1]: libpod-conmon-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope: Deactivated successfully.
Dec 03 01:51:19 compute-0 sudo[407438]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4247a20b-32ca-402c-990e-01131ac5de11 does not exist
Dec 03 01:51:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 288c4e51-ad35-491b-90bd-b2456f5c38b2 does not exist
Dec 03 01:51:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 59993959-cf3b-4c42-ba46-461371feddb1 does not exist
Dec 03 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:51:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 01:51:20 compute-0 sudo[410248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:20 compute-0 sudo[410248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:20 compute-0 sudo[410248]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:20 compute-0 sudo[410273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:51:20 compute-0 sudo[410273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:20 compute-0 sudo[410273]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:20 compute-0 sudo[410298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:20 compute-0 sudo[410298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:20 compute-0 sudo[410298]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:20 compute-0 sudo[410323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:51:20 compute-0 sudo[410323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:51:21 compute-0 ceph-mon[192821]: pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.263393516 +0000 UTC m=+0.081054425 container create acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.23230838 +0000 UTC m=+0.049969309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:21 compute-0 systemd[1]: Started libpod-conmon-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope.
Dec 03 01:51:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.429072454 +0000 UTC m=+0.246733373 container init acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.446606768 +0000 UTC m=+0.264267687 container start acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.454052468 +0000 UTC m=+0.271713387 container attach acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:51:21 compute-0 zealous_joliot[410403]: 167 167
Dec 03 01:51:21 compute-0 systemd[1]: libpod-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope: Deactivated successfully.
Dec 03 01:51:21 compute-0 conmon[410403]: conmon acc25027fcb4efc52d61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope/container/memory.events
Dec 03 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.466243931 +0000 UTC m=+0.283904850 container died acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7803730e138b5061f11ffea5e76b310e9463a9a89de75ad4755744715624562d-merged.mount: Deactivated successfully.
Dec 03 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.540292367 +0000 UTC m=+0.357953256 container remove acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:51:21 compute-0 systemd[1]: libpod-conmon-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope: Deactivated successfully.
Dec 03 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.794100337 +0000 UTC m=+0.073146572 container create 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 01:51:21 compute-0 systemd[1]: Started libpod-conmon-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope.
Dec 03 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.770808221 +0000 UTC m=+0.049854496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.93689347 +0000 UTC m=+0.215939735 container init 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.95251154 +0000 UTC m=+0.231557755 container start 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.956661127 +0000 UTC m=+0.235707402 container attach 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:51:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Dec 03 01:51:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:23 compute-0 ceph-mon[192821]: pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Dec 03 01:51:23 compute-0 wizardly_williamson[410441]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:51:23 compute-0 wizardly_williamson[410441]: --> relative data size: 1.0
Dec 03 01:51:23 compute-0 wizardly_williamson[410441]: --> All data devices are unavailable
Dec 03 01:51:23 compute-0 systemd[1]: libpod-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope: Deactivated successfully.
Dec 03 01:51:23 compute-0 systemd[1]: libpod-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope: Consumed 1.211s CPU time.
Dec 03 01:51:23 compute-0 podman[410470]: 2025-12-03 01:51:23.32965556 +0000 UTC m=+0.065616280 container died 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:51:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7-merged.mount: Deactivated successfully.
Dec 03 01:51:23 compute-0 podman[410470]: 2025-12-03 01:51:23.406310649 +0000 UTC m=+0.142271339 container remove 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:51:23 compute-0 systemd[1]: libpod-conmon-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope: Deactivated successfully.
Dec 03 01:51:23 compute-0 sudo[410323]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:23 compute-0 sudo[410485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:23 compute-0 sudo[410485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:23 compute-0 sudo[410485]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:23 compute-0 sudo[410510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:51:23 compute-0 sudo[410510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:23 compute-0 sudo[410510]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:23 compute-0 sudo[410535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:23 compute-0 sudo[410535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:23 compute-0 sudo[410535]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:24 compute-0 sudo[410560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:51:24 compute-0 sudo[410560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Dec 03 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.608488919 +0000 UTC m=+0.079079219 container create 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.576519228 +0000 UTC m=+0.047109578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:24 compute-0 systemd[1]: Started libpod-conmon-46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5.scope.
Dec 03 01:51:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.751276882 +0000 UTC m=+0.221867162 container init 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.769114424 +0000 UTC m=+0.239704724 container start 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.775972008 +0000 UTC m=+0.246562288 container attach 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:51:24 compute-0 quirky_austin[410639]: 167 167
Dec 03 01:51:24 compute-0 systemd[1]: libpod-46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5.scope: Deactivated successfully.
Dec 03 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.781228656 +0000 UTC m=+0.251818956 container died 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b75217b50172e6fb57cc9ed480fb6a4d97402f6cd95d6aec722ab555224099-merged.mount: Deactivated successfully.
Dec 03 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.846306729 +0000 UTC m=+0.316897009 container remove 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:51:24 compute-0 systemd[1]: libpod-conmon-46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5.scope: Deactivated successfully.
Dec 03 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.091168738 +0000 UTC m=+0.096873520 container create 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.055236626 +0000 UTC m=+0.060941478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:25 compute-0 systemd[1]: Started libpod-conmon-1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02.scope.
Dec 03 01:51:25 compute-0 ceph-mon[192821]: pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Dec 03 01:51:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.253405118 +0000 UTC m=+0.259109960 container init 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.290189344 +0000 UTC m=+0.295894136 container start 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.296449481 +0000 UTC m=+0.302154263 container attach 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:51:26 compute-0 sharp_elion[410677]: {
Dec 03 01:51:26 compute-0 sharp_elion[410677]:     "0": [
Dec 03 01:51:26 compute-0 sharp_elion[410677]:         {
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "devices": [
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "/dev/loop3"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             ],
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_name": "ceph_lv0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_size": "21470642176",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "name": "ceph_lv0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "tags": {
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cluster_name": "ceph",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.crush_device_class": "",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.encrypted": "0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osd_id": "0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.type": "block",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.vdo": "0"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             },
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "type": "block",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "vg_name": "ceph_vg0"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:         }
Dec 03 01:51:26 compute-0 sharp_elion[410677]:     ],
Dec 03 01:51:26 compute-0 sharp_elion[410677]:     "1": [
Dec 03 01:51:26 compute-0 sharp_elion[410677]:         {
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "devices": [
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "/dev/loop4"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             ],
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_name": "ceph_lv1",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_size": "21470642176",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "name": "ceph_lv1",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "tags": {
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cluster_name": "ceph",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.crush_device_class": "",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.encrypted": "0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osd_id": "1",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.type": "block",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.vdo": "0"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             },
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "type": "block",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "vg_name": "ceph_vg1"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:         }
Dec 03 01:51:26 compute-0 sharp_elion[410677]:     ],
Dec 03 01:51:26 compute-0 sharp_elion[410677]:     "2": [
Dec 03 01:51:26 compute-0 sharp_elion[410677]:         {
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "devices": [
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "/dev/loop5"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             ],
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_name": "ceph_lv2",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_size": "21470642176",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "name": "ceph_lv2",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "tags": {
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.cluster_name": "ceph",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.crush_device_class": "",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.encrypted": "0",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osd_id": "2",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.type": "block",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:                 "ceph.vdo": "0"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             },
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "type": "block",
Dec 03 01:51:26 compute-0 sharp_elion[410677]:             "vg_name": "ceph_vg2"
Dec 03 01:51:26 compute-0 sharp_elion[410677]:         }
Dec 03 01:51:26 compute-0 sharp_elion[410677]:     ]
Dec 03 01:51:26 compute-0 sharp_elion[410677]: }
Dec 03 01:51:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:51:26 compute-0 systemd[1]: libpod-1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02.scope: Deactivated successfully.
Dec 03 01:51:26 compute-0 podman[410661]: 2025-12-03 01:51:26.115130226 +0000 UTC m=+1.120835018 container died 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d-merged.mount: Deactivated successfully.
Dec 03 01:51:26 compute-0 podman[410661]: 2025-12-03 01:51:26.227747659 +0000 UTC m=+1.233452421 container remove 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:51:26 compute-0 systemd[1]: libpod-conmon-1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02.scope: Deactivated successfully.
Dec 03 01:51:26 compute-0 sudo[410560]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:26 compute-0 sudo[410700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:26 compute-0 sudo[410700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:26 compute-0 sudo[410700]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:26 compute-0 sudo[410725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:51:26 compute-0 sudo[410725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:26 compute-0 sudo[410725]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:26 compute-0 sudo[410750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:26 compute-0 sudo[410750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:26 compute-0 sudo[410750]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:26 compute-0 sudo[410775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:51:26 compute-0 sudo[410775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:27 compute-0 ceph-mon[192821]: pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.403990758 +0000 UTC m=+0.090353697 container create 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.374979321 +0000 UTC m=+0.061342290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:27 compute-0 systemd[1]: Started libpod-conmon-716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c.scope.
Dec 03 01:51:27 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.523292619 +0000 UTC m=+0.209655608 container init 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.542803519 +0000 UTC m=+0.229166458 container start 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.549478477 +0000 UTC m=+0.235841466 container attach 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:51:27 compute-0 naughty_lamarr[410862]: 167 167
Dec 03 01:51:27 compute-0 systemd[1]: libpod-716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c.scope: Deactivated successfully.
Dec 03 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.552456541 +0000 UTC m=+0.238819480 container died 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5944b72055c40ca0a076dc3e8c1fe3260529f02f41a0a8f3fcbb1c283248f41-merged.mount: Deactivated successfully.
Dec 03 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.61277565 +0000 UTC m=+0.299138549 container remove 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 01:51:27 compute-0 podman[410857]: 2025-12-03 01:51:27.619752967 +0000 UTC m=+0.144974826 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Dec 03 01:51:27 compute-0 podman[410856]: 2025-12-03 01:51:27.624070949 +0000 UTC m=+0.154372041 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:51:27 compute-0 podman[410858]: 2025-12-03 01:51:27.624250424 +0000 UTC m=+0.146393146 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
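The three health_status lines above are podman's timer-driven healthchecks for the EDPM-managed containers: each config_data block mounts /var/lib/openstack/healthchecks/<name> at /openstack inside the container and runs the listed test; exit 0 keeps health_status=healthy and health_failing_streak at 0. A sketch of triggering the same probe by hand, with the container name taken from the log:

    import subprocess

    # Run the container's configured healthcheck once, exactly as the
    # podman timer does; 0 means healthy, non-zero bumps the failing streak.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"]
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")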
Dec 03 01:51:27 compute-0 systemd[1]: libpod-conmon-716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c.scope: Deactivated successfully.
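The create through remove lines above trace one complete podman lifecycle for the throwaway container naughty_lamarr (create, init, start, attach, died, remove), with systemd opening and closing the matching libpod and conmon scopes around it. Its only output, "167 167", is apparently cephadm probing the ceph UID/GID baked into the image (Ceph images ship the ceph user as uid/gid 167). The same sequence can be replayed from podman's event log; a sketch, with the name and time window taken from the log and --stream=false so the command exits instead of following:

    import json, subprocess

    # List recent lifecycle events for the container seen above.
    out = subprocess.run(
        ["podman", "events", "--stream=false", "--since", "10m",
         "--filter", "container=naughty_lamarr", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        ev = json.loads(line)
        print(ev["Status"], ev["Name"])  # create, init, start, ... remove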
Dec 03 01:51:27 compute-0 podman[410940]: 2025-12-03 01:51:27.850471637 +0000 UTC m=+0.079596413 container create e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:51:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:27 compute-0 podman[410940]: 2025-12-03 01:51:27.818775214 +0000 UTC m=+0.047900030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:51:27 compute-0 systemd[1]: Started libpod-conmon-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope.
Dec 03 01:51:27 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
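The four kernel warnings above fire because the overlay mounts sit on an XFS filesystem created without the bigtime feature, whose 32-bit inode timestamps run out at 0x7fffffff seconds past the epoch; the kernel is flagging a limitation, not an error (xfs_info on the mountpoint reports bigtime=0 or 1). The cutoff itself is easy to verify:

    import datetime

    # 0x7fffffff is the classic signed 32-bit time_t limit the kernel cites.
    limit = datetime.datetime.fromtimestamp(0x7FFFFFFF, datetime.timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00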
Dec 03 01:51:28 compute-0 podman[410940]: 2025-12-03 01:51:28.03797144 +0000 UTC m=+0.267096206 container init e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 03 01:51:28 compute-0 podman[410940]: 2025-12-03 01:51:28.060409002 +0000 UTC m=+0.289533788 container start e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:51:28 compute-0 podman[410940]: 2025-12-03 01:51:28.067065039 +0000 UTC m=+0.296189785 container attach e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:51:28
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
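This balancer pass ran in upmap mode, considered the eleven pools listed, and prepared 0 of a possible 10 changes, i.e. PG placement is already even. The max-misplaced bound caps how much data a single optimization round may set in motion; with the pgmap above it works out to a handful of PGs:

    # 321 PGs total (see the pgmap lines) and max misplaced 0.05 mean an
    # optimization round may leave at most ~16 PGs misplaced at once.
    pgs, max_misplaced = 321, 0.05
    print(int(pgs * max_misplaced))  # 16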
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:51:29 compute-0 ceph-mon[192821]: pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:51:29 compute-0 objective_jackson[410956]: {
Dec 03 01:51:29 compute-0 objective_jackson[410956]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "osd_id": 2,
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "type": "bluestore"
Dec 03 01:51:29 compute-0 objective_jackson[410956]:     },
Dec 03 01:51:29 compute-0 objective_jackson[410956]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "osd_id": 1,
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "type": "bluestore"
Dec 03 01:51:29 compute-0 objective_jackson[410956]:     },
Dec 03 01:51:29 compute-0 objective_jackson[410956]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "osd_id": 0,
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:51:29 compute-0 objective_jackson[410956]:         "type": "bluestore"
Dec 03 01:51:29 compute-0 objective_jackson[410956]:     }
Dec 03 01:51:29 compute-0 objective_jackson[410956]: }
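The JSON block emitted by objective_jackson is the ceph-volume raw list answer the earlier sudo command asked for: three bluestore OSDs (ids 0-2) on LVM logical volumes, all tagged with the cluster fsid. A parsing sketch, assuming the payload has been captured to a file (raw_list.json is a hypothetical name):

    import json

    # Map each OSD to its backing device, ordered by osd_id.
    with open("raw_list.json") as fh:
        listing = json.load(fh)
    for info in sorted(listing.values(), key=lambda i: i["osd_id"]):
        print(info["osd_id"], info["device"], info["type"])
    # -> 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore
    #    1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore
    #    2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore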
Dec 03 01:51:29 compute-0 systemd[1]: libpod-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope: Deactivated successfully.
Dec 03 01:51:29 compute-0 systemd[1]: libpod-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope: Consumed 1.221s CPU time.
Dec 03 01:51:29 compute-0 podman[410940]: 2025-12-03 01:51:29.282717688 +0000 UTC m=+1.511842514 container died e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c-merged.mount: Deactivated successfully.
Dec 03 01:51:29 compute-0 podman[410940]: 2025-12-03 01:51:29.384948998 +0000 UTC m=+1.614073784 container remove e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:51:29 compute-0 systemd[1]: libpod-conmon-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope: Deactivated successfully.
Dec 03 01:51:29 compute-0 sudo[410775]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:51:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:51:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 11bc2454-f008-4729-83b0-295c03e45fcc does not exist
Dec 03 01:51:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b8446ea6-38e4-4c1a-b026-e561b2ce87d0 does not exist
Dec 03 01:51:29 compute-0 sudo[410999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:51:29 compute-0 sudo[410999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:29 compute-0 sudo[410999]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:29 compute-0 podman[158098]: time="2025-12-03T01:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:51:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:51:29 compute-0 sudo[411024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:51:29 compute-0 sudo[411024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:51:29 compute-0 sudo[411024]: pam_unix(sudo:session): session closed for user root
Dec 03 01:51:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8122 "" "Go-http-client/1.1"
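The two GET lines are the podman API service (pid 158098) answering libpod REST calls over its unix socket; they line up with the podman_exporter container above, whose config sets CONTAINER_HOST=unix:///run/podman/podman.sock and scrapes container lists and stats. A sketch of the same containers/json call over the socket, standard library only (socket path and API version taken from the log):

    import http.client, json, socket

    class UDSHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix socket (no TLS, no auth); sketch only."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UDSHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers))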
Dec 03 01:51:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:51:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
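These exporter errors are expected on a compute node: openstack_network_exporter probes every OVS/OVN daemon it knows about, but ovn-northd only runs on controller nodes, the ovsdb-server control socket is not exposed at the path the exporter checks, and the dpif-netdev/pmd-* statistics exist only for a userspace (DPDK) datapath, which this kernel-datapath host does not have. A quick check for the missing northd socket (socket path assumed, per the usual /var/run/ovn layout):

    import glob

    # Empty on a compute node: no ovn-northd here, hence the exporter's
    # "no control socket files found" errors above.
    print(glob.glob("/var/run/ovn/ovn-northd.*.ctl"))  # -> []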
Dec 03 01:51:31 compute-0 ceph-mon[192821]: pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 01:51:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Dec 03 01:51:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:32 compute-0 podman[411049]: 2025-12-03 01:51:32.904327572 +0000 UTC m=+0.152296263 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:51:33 compute-0 ceph-mon[192821]: pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Dec 03 01:51:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Dec 03 01:51:35 compute-0 ceph-mon[192821]: pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Dec 03 01:51:35 compute-0 podman[411068]: 2025-12-03 01:51:35.88150338 +0000 UTC m=+0.131142606 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, name=ubi9)
Dec 03 01:51:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Dec 03 01:51:37 compute-0 ceph-mon[192821]: pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.554275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697554323, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3504778, "memory_usage": 3561728, "flush_reason": "Manual Compaction"}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697585202, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3417041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20887, "largest_seqno": 22940, "table_properties": {"data_size": 3407676, "index_size": 5923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18587, "raw_average_key_size": 19, "raw_value_size": 3389090, "raw_average_value_size": 3640, "num_data_blocks": 268, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726470, "oldest_key_time": 1764726470, "file_creation_time": 1764726697, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 31054 microseconds, and 12858 cpu microseconds.
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.585297) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3417041 bytes OK
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.585353) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.588109) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.588127) EVENT_LOG_v1 {"time_micros": 1764726697588121, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.588147) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3496173, prev total WAL file size 3496173, number of live WAL files 2.
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.590126) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3336KB)], [50(7387KB)]
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697590222, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10981484, "oldest_snapshot_seqno": -1}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4691 keys, 9252306 bytes, temperature: kUnknown
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697681748, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9252306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9218321, "index_size": 21139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 114847, "raw_average_key_size": 24, "raw_value_size": 9130825, "raw_average_value_size": 1946, "num_data_blocks": 892, "num_entries": 4691, "num_filter_entries": 4691, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726697, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.682013) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9252306 bytes
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.683952) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.9 rd, 101.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5205, records dropped: 514 output_compression: NoCompression
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.683970) EVENT_LOG_v1 {"time_micros": 1764726697683961, "job": 26, "event": "compaction_finished", "compaction_time_micros": 91578, "compaction_time_cpu_micros": 41059, "output_level": 6, "num_output_files": 1, "total_output_size": 9252306, "num_input_records": 5205, "num_output_records": 4691, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697684794, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697686382, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.589599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
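The rocksdb burst above is the monitor's periodic manual compaction of its store.db: JOB 25 flushes a ~3.4 MB memtable to L0 table #52, then JOB 26 merges #52 with the existing L6 table #50 into #53 and deletes both inputs, dropping 514 of 5205 records along the way. The amplification figures in the JOB 26 summary follow directly from the byte counts in its event lines:

    # From the JOB 26 events above: L0 input #52 is 3417041 B, total
    # compaction input is 10981484 B, output #53 is 9252306 B.
    l0_in, total_in, out = 3417041, 10981484, 9252306
    print(round(out / l0_in, 1))               # 2.7 -> "write-amplify(2.7)"
    print(round((total_in + out) / l0_in, 1))  # 5.9 -> "read-write-amplify(5.9)"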
Dec 03 01:51:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
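Each pg_autoscaler line multiplies the pool's share of raw capacity (the 64411926528-byte figure repeated as the last effective_target_ratio field) by the pool's bias and the cluster-wide PG budget, then quantizes to a power of two; on this nearly empty cluster every pool stays put except cephfs.cephfs.meta, whose ideal count would shrink to 16. The logged targets reproduce under the assumption of 3 OSDs (ids 0-2 in the raw list above) and the default budget of 100 PGs per OSD:

    # pg_target = usage_ratio * bias * (num_osds * target_pg_per_osd)
    osds, per_osd = 3, 100
    for pool, ratio, bias in [(".mgr", 7.185749983720779e-06, 1.0),
                              ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0)]:
        print(pool, ratio * bias * osds * per_osd)
    # -> ~0.0021557 and ~0.0006105, matching the pg targets logged above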
Dec 03 01:51:39 compute-0 ceph-mon[192821]: pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:39 compute-0 podman[411088]: 2025-12-03 01:51:39.908839674 +0000 UTC m=+0.150948123 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec 03 01:51:39 compute-0 podman[411089]: 2025-12-03 01:51:39.922061147 +0000 UTC m=+0.153543577 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:51:39 compute-0 podman[411087]: 2025-12-03 01:51:39.947671789 +0000 UTC m=+0.195810218 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:51:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:41 compute-0 ceph-mon[192821]: pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:41 compute-0 podman[411148]: 2025-12-03 01:51:41.853077441 +0000 UTC m=+0.100579725 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:51:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:43 compute-0 ceph-mon[192821]: pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:45 compute-0 ceph-mon[192821]: pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:51:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/661218190' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:51:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:51:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/661218190' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:51:47 compute-0 ceph-mon[192821]: pgmap v1109: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/661218190' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:51:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/661218190' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:51:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
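The audit lines above show client.openstack polling the cluster with df and osd pool get-quota mon commands; the nova_compute lines further down show that the client side is simply a ceph CLI subprocess. A self-contained sketch of the same poll (assumes the client.openstack keyring referenced in the audit lines is readable):

    import json
    import subprocess

    # Command line copied verbatim from the nova_compute processutils log below.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])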
Dec 03 01:51:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:49 compute-0 sshd-session[411172]: Received disconnect from 103.146.202.174 port 37572:11: Bye Bye [preauth]
Dec 03 01:51:49 compute-0 sshd-session[411172]: Disconnected from authenticating user root 103.146.202.174 port 37572 [preauth]
Dec 03 01:51:49 compute-0 ceph-mon[192821]: pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:50 compute-0 nova_compute[351485]: 2025-12-03 01:51:50.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.598 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
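The _heal_instance_info_cache lines above come from oslo.service's periodic task machinery: manager methods are tagged with a decorator and run_periodic_tasks() dispatches them, logging each run. A minimal sketch of that pattern (the class and the spacing value are illustrative, not nova's actual code):

    from oslo_service import periodic_task

    class ManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # spacing is an assumed value
        def _heal_instance_info_cache(self, context):
            # Rebuilds the list of instances whose network info cache needs a
            # refresh; with no instances on this host the task logs
            # "Didn't find any instances for network info cache update."
            pass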
Dec 03 01:51:51 compute-0 ceph-mon[192821]: pgmap v1111: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.624 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:51:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:51:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392057991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.104 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.694 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.698 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4540MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.699 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.700 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:51:53 compute-0 ceph-mon[192821]: pgmap v1112: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:53 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2392057991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.079 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.080 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:51:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.173 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.283 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.284 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
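Placement derives schedulable capacity from the inventory above as (total - reserved) * allocation_ratio. Worked through with the values from the log:

    inv = {  # copied from the ProviderTree inventory line above
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1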
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.305 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.339 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.364 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:51:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:51:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3041397874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.891 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.902 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.923 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.925 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.926 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.927 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.995 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
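The Acquiring/acquired/released triplets around "compute_resources" above are oslo.concurrency's synchronized decorator logging wait and hold times (held 1.226s for the resource-update pass). The same pattern in miniature (the function name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource_sketch():
        # Runs with the named in-process lock held; entry and exit produce the
        # "acquired"/"released" debug lines with wait/held durations.
        pass

    update_available_resource_sketch()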
Dec 03 01:51:55 compute-0 ceph-mon[192821]: pgmap v1113: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:55 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3041397874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:51:55 compute-0 nova_compute[351485]: 2025-12-03 01:51:55.996 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:55 compute-0 nova_compute[351485]: 2025-12-03 01:51:55.997 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:55 compute-0 nova_compute[351485]: 2025-12-03 01:51:55.998 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:56 compute-0 ceph-mon[192821]: pgmap v1114: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:57 compute-0 sshd-session[411219]: Invalid user super from 173.249.50.59 port 59706
Dec 03 01:51:57 compute-0 sshd-session[411219]: Received disconnect from 173.249.50.59 port 59706:11: Bye Bye [preauth]
Dec 03 01:51:57 compute-0 sshd-session[411219]: Disconnected from invalid user super 173.249.50.59 port 59706 [preauth]
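The sshd-session lines here and at 01:51:49 are routine Internet brute-force noise against the node (root and an invalid user 'super'). A purely illustrative helper for extracting the username and source from such records:

    import re

    line = ('Dec 03 01:51:57 compute-0 sshd-session[411219]: '
            'Invalid user super from 173.249.50.59 port 59706')
    m = re.search(r'Invalid user (\S+) from (\S+) port (\d+)', line)
    if m:
        user, ip, port = m.groups()
        print(user, ip, port)  # super 173.249.50.59 59706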
Dec 03 01:51:57 compute-0 nova_compute[351485]: 2025-12-03 01:51:57.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:57 compute-0 podman[411222]: 2025-12-03 01:51:57.872339352 +0000 UTC m=+0.118221002 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 03 01:51:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:51:57 compute-0 podman[411221]: 2025-12-03 01:51:57.88398558 +0000 UTC m=+0.132604667 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 03 01:51:57 compute-0 podman[411223]: 2025-12-03 01:51:57.908710937 +0000 UTC m=+0.146679384 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
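The three podman[...] records above are periodic healthcheck events: podman runs each container's configured test (the 'healthcheck' entry in its config_data) and logs the resulting health_status. The same check can be invoked by hand; a sketch using a container name from the log:

    import subprocess

    # Exit status 0 means healthy, mirroring health_status=healthy above.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=False)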
Dec 03 01:51:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:51:59 compute-0 ceph-mon[192821]: pgmap v1115: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.595 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:51:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:51:59.617 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:51:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:51:59.617 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:51:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:51:59.618 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:51:59 compute-0 podman[158098]: time="2025-12-03T01:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:51:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:51:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
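The GET lines above are podman's API service logging requests from podman_exporter, which scrapes the libpod REST API over the unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). A raw-socket sketch of the same containers/json call:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__('localhost')
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().status)  # 200, as in the access-log lines above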
Dec 03 01:52:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:00 compute-0 nova_compute[351485]: 2025-12-03 01:52:00.615 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:00 compute-0 nova_compute[351485]: 2025-12-03 01:52:00.615 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:52:01 compute-0 ceph-mon[192821]: pgmap v1116: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
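These openstack_network_exporter errors recur on every scrape: the exporter finds no *.ctl control sockets for ovsdb-server or ovn-northd under its mounted run directories, and the dpif-netdev appctl calls fail because no userspace (netdev) datapath exists on this node. ovn-northd normally runs on the control plane rather than a compute, so that part at least is expected. A hedged way to check what the exporter can actually see (paths taken from its volume mounts in the config_data below):

    import glob

    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        print(pattern, glob.glob(pattern))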
Dec 03 01:52:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:03 compute-0 ceph-mon[192821]: pgmap v1117: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:03 compute-0 podman[411281]: 2025-12-03 01:52:03.906364352 +0000 UTC m=+0.157055726 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:52:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:05 compute-0 ceph-mon[192821]: pgmap v1118: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:06 compute-0 podman[411299]: 2025-12-03 01:52:06.910445678 +0000 UTC m=+0.163131237 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git)
Dec 03 01:52:07 compute-0 ceph-mon[192821]: pgmap v1119: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:09 compute-0 ceph-mon[192821]: pgmap v1120: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:10 compute-0 podman[411319]: 2025-12-03 01:52:10.890412899 +0000 UTC m=+0.127423931 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Dec 03 01:52:10 compute-0 podman[411320]: 2025-12-03 01:52:10.909308942 +0000 UTC m=+0.142308461 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 03 01:52:10 compute-0 podman[411318]: 2025-12-03 01:52:10.947854088 +0000 UTC m=+0.189373177 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 01:52:11 compute-0 ceph-mon[192821]: pgmap v1121: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:12 compute-0 podman[411378]: 2025-12-03 01:52:12.856311426 +0000 UTC m=+0.114325602 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:52:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:13 compute-0 ceph-mon[192821]: pgmap v1122: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:15 compute-0 ceph-mon[192821]: pgmap v1123: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:17 compute-0 ceph-mon[192821]: pgmap v1124: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:19 compute-0 sshd-session[411400]: Connection closed by authenticating user nobody 80.94.95.116 port 39112 [preauth]
Dec 03 01:52:19 compute-0 ceph-mon[192821]: pgmap v1125: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:19 compute-0 sshd-session[406121]: Received disconnect from 38.102.83.18 port 39508:11: disconnected by user
Dec 03 01:52:19 compute-0 sshd-session[406121]: Disconnected from user zuul 38.102.83.18 port 39508
Dec 03 01:52:19 compute-0 sshd-session[406107]: pam_unix(sshd:session): session closed for user zuul
Dec 03 01:52:19 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Dec 03 01:52:19 compute-0 systemd[1]: session-60.scope: Consumed 12.140s CPU time.
Dec 03 01:52:19 compute-0 systemd-logind[800]: Session 60 logged out. Waiting for processes to exit.
Dec 03 01:52:19 compute-0 systemd-logind[800]: Removed session 60.
Dec 03 01:52:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:21 compute-0 ceph-mon[192821]: pgmap v1126: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:23 compute-0 ceph-mon[192821]: pgmap v1127: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:25 compute-0 ceph-mon[192821]: pgmap v1128: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:27 compute-0 sshd-session[411403]: Received disconnect from 80.253.31.232 port 40532:11: Bye Bye [preauth]
Dec 03 01:52:27 compute-0 sshd-session[411403]: Disconnected from authenticating user root 80.253.31.232 port 40532 [preauth]
Dec 03 01:52:27 compute-0 ceph-mon[192821]: pgmap v1129: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:28 compute-0 podman[411407]: 2025-12-03 01:52:28.291862642 +0000 UTC m=+0.106774939 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:52:28 compute-0 podman[411406]: 2025-12-03 01:52:28.296144173 +0000 UTC m=+0.115427443 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Dec 03 01:52:28 compute-0 podman[411405]: 2025-12-03 01:52:28.320020266 +0000 UTC m=+0.142073434 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:52:28
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log']
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:52:29 compute-0 ceph-mon[192821]: pgmap v1130: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:29 compute-0 podman[158098]: time="2025-12-03T01:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:52:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:52:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
Dec 03 01:52:29 compute-0 sudo[411461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:29 compute-0 sudo[411461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:29 compute-0 sudo[411461]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:30 compute-0 sudo[411486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:52:30 compute-0 sudo[411486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:30 compute-0 sudo[411486]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:30 compute-0 sudo[411511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:30 compute-0 sudo[411511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:30 compute-0 sudo[411511]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:30 compute-0 sudo[411536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:52:30 compute-0 sudo[411536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:30 compute-0 sudo[411536]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:52:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4037af5e-c42c-45ef-b7ef-194df5cc87d8 does not exist
Dec 03 01:52:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1c656be8-b62b-4e21-a19b-6c891773f6bf does not exist
Dec 03 01:52:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7b669d7a-e871-4502-b191-c430729dca35 does not exist
Dec 03 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:52:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:52:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:52:31 compute-0 sudo[411591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:31 compute-0 sudo[411591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:31 compute-0 sudo[411591]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:31 compute-0 sudo[411616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:52:31 compute-0 sudo[411616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:31 compute-0 sudo[411616]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:31 compute-0 sudo[411641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:52:31 compute-0 ceph-mon[192821]: pgmap v1131: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:52:31 compute-0 sudo[411641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:31 compute-0 sudo[411641]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:31 compute-0 sudo[411666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:52:31 compute-0 sudo[411666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.124962014 +0000 UTC m=+0.092270970 container create 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 01:52:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.088352753 +0000 UTC m=+0.055661709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:52:32 compute-0 systemd[1]: Started libpod-conmon-38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af.scope.
Dec 03 01:52:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.285812286 +0000 UTC m=+0.253121292 container init 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.303387431 +0000 UTC m=+0.270696387 container start 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.310217214 +0000 UTC m=+0.277526210 container attach 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 01:52:32 compute-0 relaxed_bassi[411745]: 167 167
Dec 03 01:52:32 compute-0 systemd[1]: libpod-38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af.scope: Deactivated successfully.
Dec 03 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.316694016 +0000 UTC m=+0.284002982 container died 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a212d7359d29775cc07ae3d07bd617b3b6e4c036b7016dd2411e9c3b54da8f99-merged.mount: Deactivated successfully.
Dec 03 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.404115159 +0000 UTC m=+0.371424095 container remove 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:52:32 compute-0 systemd[1]: libpod-conmon-38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af.scope: Deactivated successfully.
Dec 03 01:52:32 compute-0 podman[411767]: 2025-12-03 01:52:32.649936955 +0000 UTC m=+0.054507467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:52:32 compute-0 podman[411767]: 2025-12-03 01:52:32.808175223 +0000 UTC m=+0.212745685 container create 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:52:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:32 compute-0 systemd[1]: Started libpod-conmon-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope.
Dec 03 01:52:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:33 compute-0 podman[411767]: 2025-12-03 01:52:33.000591204 +0000 UTC m=+0.405161676 container init 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec 03 01:52:33 compute-0 podman[411767]: 2025-12-03 01:52:33.033686617 +0000 UTC m=+0.438257069 container start 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 01:52:33 compute-0 podman[411767]: 2025-12-03 01:52:33.040731915 +0000 UTC m=+0.445302387 container attach 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 01:52:33 compute-0 nova_compute[351485]: 2025-12-03 01:52:33.325 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:33 compute-0 ceph-mon[192821]: pgmap v1132: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:34 compute-0 cool_mirzakhani[411783]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:52:34 compute-0 cool_mirzakhani[411783]: --> relative data size: 1.0
Dec 03 01:52:34 compute-0 cool_mirzakhani[411783]: --> All data devices are unavailable
Dec 03 01:52:34 compute-0 systemd[1]: libpod-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope: Deactivated successfully.
Dec 03 01:52:34 compute-0 podman[411767]: 2025-12-03 01:52:34.249467169 +0000 UTC m=+1.654037621 container died 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:52:34 compute-0 systemd[1]: libpod-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope: Consumed 1.160s CPU time.
Dec 03 01:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f-merged.mount: Deactivated successfully.
Dec 03 01:52:34 compute-0 podman[411767]: 2025-12-03 01:52:34.331017866 +0000 UTC m=+1.735588288 container remove 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:52:34 compute-0 systemd[1]: libpod-conmon-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope: Deactivated successfully.
Dec 03 01:52:34 compute-0 sudo[411666]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:34 compute-0 podman[411813]: 2025-12-03 01:52:34.411164365 +0000 UTC m=+0.114661272 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec 03 01:52:34 compute-0 sudo[411843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:34 compute-0 sudo[411843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:34 compute-0 sudo[411843]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:34 compute-0 sudo[411870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:52:34 compute-0 sudo[411870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:34 compute-0 sudo[411870]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:34 compute-0 sudo[411895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:34 compute-0 sudo[411895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:34 compute-0 sudo[411895]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:34 compute-0 sudo[411920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:52:34 compute-0 sudo[411920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:35 compute-0 ceph-mon[192821]: pgmap v1133: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.506633068 +0000 UTC m=+0.089721999 container create 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.472853716 +0000 UTC m=+0.055942697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:52:35 compute-0 systemd[1]: Started libpod-conmon-7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3.scope.
Dec 03 01:52:35 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.699692697 +0000 UTC m=+0.282781688 container init 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.717861399 +0000 UTC m=+0.300950320 container start 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.724515887 +0000 UTC m=+0.307604808 container attach 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:52:35 compute-0 ecstatic_ritchie[412000]: 167 167
Dec 03 01:52:35 compute-0 systemd[1]: libpod-7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3.scope: Deactivated successfully.
Dec 03 01:52:35 compute-0 podman[412005]: 2025-12-03 01:52:35.811879618 +0000 UTC m=+0.054086965 container died 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:52:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3e3e46d88c5bf1786dacf4417ef3e49574f978b4d586dc9cd0e44358ec6f27b-merged.mount: Deactivated successfully.
Dec 03 01:52:35 compute-0 podman[412005]: 2025-12-03 01:52:35.88826476 +0000 UTC m=+0.130472107 container remove 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 01:52:35 compute-0 systemd[1]: libpod-conmon-7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3.scope: Deactivated successfully.
Dec 03 01:52:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.162755363 +0000 UTC m=+0.066718980 container create 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:52:36 compute-0 systemd[1]: Started libpod-conmon-272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e.scope.
Dec 03 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.141936087 +0000 UTC m=+0.045899734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:52:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.335011707 +0000 UTC m=+0.238975394 container init 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.355241127 +0000 UTC m=+0.259204784 container start 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 03 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.36245315 +0000 UTC m=+0.266416867 container attach 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]: {
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:     "0": [
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:         {
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "devices": [
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "/dev/loop3"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             ],
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_name": "ceph_lv0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_size": "21470642176",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "name": "ceph_lv0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "tags": {
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cluster_name": "ceph",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.crush_device_class": "",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.encrypted": "0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osd_id": "0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.type": "block",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.vdo": "0"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             },
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "type": "block",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "vg_name": "ceph_vg0"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:         }
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:     ],
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:     "1": [
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:         {
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "devices": [
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "/dev/loop4"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             ],
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_name": "ceph_lv1",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_size": "21470642176",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "name": "ceph_lv1",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "tags": {
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cluster_name": "ceph",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.crush_device_class": "",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.encrypted": "0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osd_id": "1",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.type": "block",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.vdo": "0"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             },
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "type": "block",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "vg_name": "ceph_vg1"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:         }
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:     ],
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:     "2": [
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:         {
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "devices": [
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "/dev/loop5"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             ],
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_name": "ceph_lv2",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_size": "21470642176",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "name": "ceph_lv2",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "tags": {
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.cluster_name": "ceph",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.crush_device_class": "",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.encrypted": "0",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osd_id": "2",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.type": "block",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:                 "ceph.vdo": "0"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             },
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "type": "block",
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:             "vg_name": "ceph_vg2"
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:         }
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]:     ]
Dec 03 01:52:37 compute-0 nervous_blackwell[412040]: }
Dec 03 01:52:37 compute-0 systemd[1]: libpod-272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e.scope: Deactivated successfully.
Dec 03 01:52:37 compute-0 podman[412026]: 2025-12-03 01:52:37.180110405 +0000 UTC m=+1.084074052 container died 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b-merged.mount: Deactivated successfully.
Dec 03 01:52:37 compute-0 podman[412026]: 2025-12-03 01:52:37.285472714 +0000 UTC m=+1.189436331 container remove 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:52:37 compute-0 systemd[1]: libpod-conmon-272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e.scope: Deactivated successfully.
Dec 03 01:52:37 compute-0 sudo[411920]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:37 compute-0 podman[412050]: 2025-12-03 01:52:37.356239957 +0000 UTC m=+0.127203834 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.expose-services=)
Dec 03 01:52:37 compute-0 sudo[412077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:37 compute-0 sudo[412077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:37 compute-0 sudo[412077]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:37 compute-0 ceph-mon[192821]: pgmap v1134: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:37 compute-0 sudo[412103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:52:37 compute-0 sudo[412103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:37 compute-0 sudo[412103]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:37 compute-0 sudo[412128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:37 compute-0 sudo[412128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:37 compute-0 sudo[412128]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:37 compute-0 sudo[412153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:52:37 compute-0 sudo[412153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.371977795 +0000 UTC m=+0.096931032 container create 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.332377779 +0000 UTC m=+0.057331066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:52:38 compute-0 systemd[1]: Started libpod-conmon-3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645.scope.
Dec 03 01:52:38 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.526498828 +0000 UTC m=+0.251452105 container init 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.544374742 +0000 UTC m=+0.269327969 container start 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.550899066 +0000 UTC m=+0.275852353 container attach 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 01:52:38 compute-0 festive_rosalind[412232]: 167 167
Dec 03 01:52:38 compute-0 systemd[1]: libpod-3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645.scope: Deactivated successfully.
Dec 03 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.55815104 +0000 UTC m=+0.283104267 container died 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 01:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-00f3c9bb62f4be22286279384f53763777ef20e457a1da93ee2b4d832f45d808-merged.mount: Deactivated successfully.
Dec 03 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.630489558 +0000 UTC m=+0.355442785 container remove 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:52:38 compute-0 systemd[1]: libpod-conmon-3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645.scope: Deactivated successfully.
Dec 03 01:52:38 compute-0 podman[412255]: 2025-12-03 01:52:38.903382336 +0000 UTC m=+0.087180467 container create 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:52:38 compute-0 podman[412255]: 2025-12-03 01:52:38.86412925 +0000 UTC m=+0.047927431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:52:38 compute-0 systemd[1]: Started libpod-conmon-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope.
Dec 03 01:52:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:52:39 compute-0 podman[412255]: 2025-12-03 01:52:39.08560768 +0000 UTC m=+0.269405851 container init 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:52:39 compute-0 podman[412255]: 2025-12-03 01:52:39.114855784 +0000 UTC m=+0.298653925 container start 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:52:39 compute-0 podman[412255]: 2025-12-03 01:52:39.124044143 +0000 UTC m=+0.307842304 container attach 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:52:39 compute-0 ceph-mon[192821]: pgmap v1135: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:40 compute-0 loving_mahavira[412271]: {
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "osd_id": 2,
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "type": "bluestore"
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:     },
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "osd_id": 1,
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "type": "bluestore"
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:     },
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "osd_id": 0,
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:         "type": "bluestore"
Dec 03 01:52:40 compute-0 loving_mahavira[412271]:     }
Dec 03 01:52:40 compute-0 loving_mahavira[412271]: }
Dec 03 01:52:40 compute-0 systemd[1]: libpod-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope: Deactivated successfully.
Dec 03 01:52:40 compute-0 podman[412255]: 2025-12-03 01:52:40.31120628 +0000 UTC m=+1.495004411 container died 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:52:40 compute-0 systemd[1]: libpod-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope: Consumed 1.192s CPU time.
Dec 03 01:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3-merged.mount: Deactivated successfully.
Dec 03 01:52:40 compute-0 podman[412255]: 2025-12-03 01:52:40.415513038 +0000 UTC m=+1.599311139 container remove 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:52:40 compute-0 systemd[1]: libpod-conmon-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope: Deactivated successfully.
Dec 03 01:52:40 compute-0 sudo[412153]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:52:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:52:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:52:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:52:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 76d17026-e665-45a4-8b46-989a3895bad4 does not exist
Dec 03 01:52:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 622e96ae-8d1c-431e-8bc6-948d454a791f does not exist
Dec 03 01:52:40 compute-0 sudo[412315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:52:40 compute-0 sudo[412315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:40 compute-0 sudo[412315]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:40 compute-0 sudo[412340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:52:40 compute-0 sudo[412340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:52:40 compute-0 sudo[412340]: pam_unix(sudo:session): session closed for user root
Dec 03 01:52:41 compute-0 ceph-mon[192821]: pgmap v1136: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:52:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:52:41 compute-0 podman[412366]: 2025-12-03 01:52:41.890640317 +0000 UTC m=+0.132553025 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec 03 01:52:41 compute-0 podman[412367]: 2025-12-03 01:52:41.891785719 +0000 UTC m=+0.122295766 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:52:41 compute-0 podman[412365]: 2025-12-03 01:52:41.951932814 +0000 UTC m=+0.198545895 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 01:52:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:43 compute-0 ceph-mon[192821]: pgmap v1137: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:43 compute-0 podman[412427]: 2025-12-03 01:52:43.885720466 +0000 UTC m=+0.132653418 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:52:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:45 compute-0 ceph-mon[192821]: pgmap v1138: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:52:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583600602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:52:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:52:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583600602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:52:47 compute-0 ceph-mon[192821]: pgmap v1139: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1583600602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:52:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1583600602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:52:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:49 compute-0 ceph-mon[192821]: pgmap v1140: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:51 compute-0 ceph-mon[192821]: pgmap v1141: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.604 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.604 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.605 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.637 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:52:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:53 compute-0 ceph-mon[192821]: pgmap v1142: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.625 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.626 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:52:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:52:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734165423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.098 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:52:55 compute-0 ceph-mon[192821]: pgmap v1143: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:55 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3734165423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.665 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.666 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4552MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.666 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.667 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.757 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.757 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.782 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:52:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:52:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014041720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.324 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.335 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.359 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.361 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.361 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:52:56 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3014041720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:52:57 compute-0 nova_compute[351485]: 2025-12-03 01:52:57.361 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:57 compute-0 nova_compute[351485]: 2025-12-03 01:52:57.361 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:57 compute-0 nova_compute[351485]: 2025-12-03 01:52:57.361 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:57 compute-0 ceph-mon[192821]: pgmap v1144: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:52:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:52:58 compute-0 nova_compute[351485]: 2025-12-03 01:52:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:58 compute-0 podman[412495]: 2025-12-03 01:52:58.859472341 +0000 UTC m=+0.106764409 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 01:52:58 compute-0 podman[412496]: 2025-12-03 01:52:58.901079523 +0000 UTC m=+0.144402289 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec 03 01:52:58 compute-0 podman[412497]: 2025-12-03 01:52:58.901224797 +0000 UTC m=+0.137301299 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:52:59 compute-0 nova_compute[351485]: 2025-12-03 01:52:59.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:52:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:52:59.618 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:52:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:52:59.618 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:52:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:52:59.619 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:52:59 compute-0 ceph-mon[192821]: pgmap v1145: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:52:59 compute-0 podman[158098]: time="2025-12-03T01:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:52:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:52:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8114 "" "Go-http-client/1.1"
Dec 03 01:53:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:53:01 compute-0 nova_compute[351485]: 2025-12-03 01:53:01.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:01 compute-0 nova_compute[351485]: 2025-12-03 01:53:01.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:53:01 compute-0 ceph-mon[192821]: pgmap v1146: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:03 compute-0 ceph-mon[192821]: pgmap v1147: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:04 compute-0 sshd-session[412554]: Invalid user usuario2 from 173.249.50.59 port 57958
Dec 03 01:53:04 compute-0 podman[412556]: 2025-12-03 01:53:04.876719491 +0000 UTC m=+0.126906076 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:53:05 compute-0 sshd-session[412554]: Received disconnect from 173.249.50.59 port 57958:11: Bye Bye [preauth]
Dec 03 01:53:05 compute-0 sshd-session[412554]: Disconnected from invalid user usuario2 173.249.50.59 port 57958 [preauth]
Dec 03 01:53:05 compute-0 ceph-mon[192821]: pgmap v1148: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:07 compute-0 ceph-mon[192821]: pgmap v1149: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:07 compute-0 podman[412575]: 2025-12-03 01:53:07.841879629 +0000 UTC m=+0.109848126 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 01:53:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:09 compute-0 ceph-mon[192821]: pgmap v1150: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:10 compute-0 sshd-session[412594]: Invalid user super from 103.146.202.174 port 37242
Dec 03 01:53:10 compute-0 sshd-session[412594]: Received disconnect from 103.146.202.174 port 37242:11: Bye Bye [preauth]
Dec 03 01:53:10 compute-0 sshd-session[412594]: Disconnected from invalid user super 103.146.202.174 port 37242 [preauth]
Dec 03 01:53:11 compute-0 ceph-mon[192821]: pgmap v1151: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:12 compute-0 podman[412598]: 2025-12-03 01:53:12.8814684 +0000 UTC m=+0.114161647 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:53:12 compute-0 podman[412597]: 2025-12-03 01:53:12.893138369 +0000 UTC m=+0.136400394 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:53:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:12 compute-0 podman[412596]: 2025-12-03 01:53:12.928286749 +0000 UTC m=+0.183204582 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:53:13 compute-0 ceph-mon[192821]: pgmap v1152: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:14 compute-0 podman[412658]: 2025-12-03 01:53:14.813626816 +0000 UTC m=+0.103616950 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:53:15 compute-0 ceph-mon[192821]: pgmap v1153: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:17 compute-0 ceph-mon[192821]: pgmap v1154: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.914745) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797914789, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1261, "num_deletes": 505, "total_data_size": 1478612, "memory_usage": 1506448, "flush_reason": "Manual Compaction"}
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797930104, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1240902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22941, "largest_seqno": 24201, "table_properties": {"data_size": 1235671, "index_size": 2179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14687, "raw_average_key_size": 18, "raw_value_size": 1223034, "raw_average_value_size": 1570, "num_data_blocks": 98, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726698, "oldest_key_time": 1764726698, "file_creation_time": 1764726797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 15477 microseconds, and 7809 cpu microseconds.
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.930198) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1240902 bytes OK
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.930246) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.933989) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.934013) EVENT_LOG_v1 {"time_micros": 1764726797934006, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.934035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1471824, prev total WAL file size 1471824, number of live WAL files 2.
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.937222) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1211KB)], [53(9035KB)]
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797937303, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10493208, "oldest_snapshot_seqno": -1}
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4464 keys, 7334291 bytes, temperature: kUnknown
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797994060, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7334291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7304375, "index_size": 17646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 111723, "raw_average_key_size": 25, "raw_value_size": 7223409, "raw_average_value_size": 1618, "num_data_blocks": 736, "num_entries": 4464, "num_filter_entries": 4464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.994353) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7334291 bytes
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.997118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.6 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.8 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(14.4) write-amplify(5.9) OK, records in: 5470, records dropped: 1006 output_compression: NoCompression
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.997153) EVENT_LOG_v1 {"time_micros": 1764726797997134, "job": 28, "event": "compaction_finished", "compaction_time_micros": 56843, "compaction_time_cpu_micros": 35750, "output_level": 6, "num_output_files": 1, "total_output_size": 7334291, "num_input_records": 5470, "num_output_records": 4464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797997775, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726798001228, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.936159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:53:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 03 01:53:18 compute-0 ceph-mon[192821]: pgmap v1155: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 03 01:53:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.503 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.503 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.516 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
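[editor's note] The block above is one complete polling cycle: each compute pollster is registered against the thread-pool executor, runs discovery via the local_instances method, is skipped because discovery returned no instances on this hypervisor, and is then marked finished. A minimal sketch of that poll-or-skip control flow, assuming hypothetical names (discover_local_instances, run_pollster are illustrative, not the real ceilometer.polling.manager API):

```python
# Sketch of the poll-or-skip cycle seen in the log above.
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("polling")

def discover_local_instances():
    """Stand-in for AgentManager.discover('local_instances'): the
    libvirt domains on this hypervisor. Empty, as on compute-0 above."""
    return []

def run_pollster(name, get_samples):
    resources = discover_local_instances()
    if not resources:
        # Produces the "Skip pollster <name>, no resources found this
        # cycle" pattern from the log.
        LOG.debug("Skip pollster %s, no resources found this cycle", name)
        return []
    samples = get_samples(resources)
    LOG.debug("Finished processing pollster [%s].", name)
    return samples

run_pollster("cpu", lambda instances: [{"instance": i} for i in instances])
```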
Dec 03 01:53:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 03 01:53:19 compute-0 ceph-mon[192821]: osdmap e121: 3 total, 3 up, 3 in
Dec 03 01:53:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 03 01:53:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 03 01:53:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 03 01:53:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 03 01:53:20 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 03 01:53:20 compute-0 ceph-mon[192821]: osdmap e122: 3 total, 3 up, 3 in
Dec 03 01:53:20 compute-0 ceph-mon[192821]: pgmap v1158: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:21 compute-0 ceph-mon[192821]: osdmap e123: 3 total, 3 up, 3 in
Dec 03 01:53:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 8.0 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.3 MiB/s wr, 5 op/s
Dec 03 01:53:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:23 compute-0 ceph-mon[192821]: pgmap v1160: 321 pgs: 321 active+clean; 8.0 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.3 MiB/s wr, 5 op/s
Dec 03 01:53:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 16 MiB data, 156 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Dec 03 01:53:25 compute-0 ceph-mon[192821]: pgmap v1161: 321 pgs: 321 active+clean; 16 MiB data, 156 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Dec 03 01:53:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.2 MiB/s wr, 15 op/s
Dec 03 01:53:27 compute-0 ceph-mon[192821]: pgmap v1162: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.2 MiB/s wr, 15 op/s
Dec 03 01:53:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 03 01:53:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 03 01:53:27 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
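[editor's note] The ceph-mon and ceph-mgr lines above are routine osdmap epoch bumps and pgmap digests for a healthy 3-OSD cluster (321 PGs, all active+clean). The same summaries can be read on demand; a sketch, assuming the ceph CLI and an admin keyring are available (the JSON field names shown are the usual ones from recent releases and may vary):

```python
# Read the osdmap/pgmap summaries from the log above, as JSON.
import json
import subprocess

def ceph(*args):
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

osd = ceph("osd", "stat")   # e.g. {"epoch": 124, "num_osds": 3, "num_up_osds": 3, ...}
print(osd["num_osds"], osd["num_up_osds"], osd["num_in_osds"])
print(ceph("pg", "stat"))   # PG states plus client I/O rates, as in the pgmap lines
```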
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:53:28
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', 'backups', 'default.rgw.control']
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
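[editor's note] Each balancer pass logs its plan name (auto_<timestamp>), mode, and misplaced-ratio ceiling; "prepared 0/10 changes" means upmap attempted up to 10 optimization steps and found nothing to move, so the PG distribution is already even. The "max misplaced 0.050000" figure matches the default of the mgr option target_max_misplaced_ratio (an assumption here, not confirmed by the log). A sketch for inspecting the same state:

```python
# Inspect the balancer state behind the lines above. "ceph balancer
# status" prints JSON by default in recent releases.
import json
import subprocess

def run(*args):
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

status = json.loads(run("ceph", "balancer", "status"))
print(status.get("mode"), status.get("active"))
print(run("ceph", "config", "get", "mgr", "target_max_misplaced_ratio").strip())
```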
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
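[editor's note] The rbd_support handlers above periodically reload trash-purge and mirror-snapshot schedules per pool; the empty start_after= fields show no schedules are defined. A sketch for listing the same schedules from a client, assuming a Reef-era rbd CLI (older releases may lack these subcommands):

```python
# List the schedules the rbd_support handlers reload above (all empty here).
import subprocess

for pool in ("vms", "volumes", "backups", "images"):
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                    "--pool", pool], check=False)
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                    "--pool", pool], check=False)
```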
Dec 03 01:53:28 compute-0 ceph-mon[192821]: osdmap e124: 3 total, 3 up, 3 in
Dec 03 01:53:28 compute-0 ceph-mon[192821]: pgmap v1164: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Dec 03 01:53:29 compute-0 podman[158098]: time="2025-12-03T01:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:53:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:53:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
Dec 03 01:53:29 compute-0 podman[412685]: 2025-12-03 01:53:29.871900375 +0000 UTC m=+0.131803934 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:53:29 compute-0 podman[412686]: 2025-12-03 01:53:29.885310523 +0000 UTC m=+0.130381414 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 03 01:53:29 compute-0 podman[412687]: 2025-12-03 01:53:29.917479319 +0000 UTC m=+0.142365572 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
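[editor's note] The health_status=healthy events above come from podman's healthcheck timer executing each container's configured test (the 'healthcheck' entry in the embedded config_data). The same check can be re-run by hand; a sketch:

```python
# Re-run each container's configured healthcheck, as podman's timer
# does for the health_status events above; exit status 0 means healthy.
import subprocess

for name in ("ovn_metadata_agent", "ceilometer_agent_compute", "podman_exporter"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")
```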
Dec 03 01:53:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 892 KiB/s wr, 8 op/s
Dec 03 01:53:31 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:31.191 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 01:53:31 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:31.193 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 01:53:31 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:31.195 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
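[editor's note] The transaction above acknowledges the new nb_cfg by writing it into the agent's Chassis_Private row, which is how neutron tracks the metadata agent's liveness. A hand-rolled equivalent via ovn-sbctl, a sketch using the record UUID from the log line (the embedded double quotes keep the colon-bearing key intact):

```python
# Equivalent of the DbSetCommand above, issued through ovn-sbctl.
import subprocess

record = "eda9fd7d-f2b1-4121-b9ac-fc31f8426272"
subprocess.run(["ovn-sbctl", "set", "Chassis_Private", record,
                'external_ids:"neutron:ovn-metadata-sb-cfg"=2'], check=True)
```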
Dec 03 01:53:31 compute-0 ceph-mon[192821]: pgmap v1165: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 892 KiB/s wr, 8 op/s
Dec 03 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
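[editor's note] These exporter errors are expected on a compute node: openstack_network_exporter drives ovs-appctl through per-daemon control sockets, and ovn-northd and the central ovsdb-server do not run locally (they live on the control plane), so each scrape logs the failure and moves on. A sketch of the existence check it is effectively doing, assuming the usual run directories (deployments can differ):

```python
# ovs-appctl control sockets are named <daemon>.<pid>.ctl; probe for them.
import glob

for daemon, rundir in (("ovn-northd", "/var/run/ovn"),
                       ("ovsdb-server", "/var/run/openvswitch")):
    socks = glob.glob(f"{rundir}/{daemon}.*.ctl")
    print(daemon, "->", socks or "no control socket files found")
```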
Dec 03 01:53:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec 03 01:53:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:33 compute-0 ceph-mon[192821]: pgmap v1166: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec 03 01:53:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 204 B/s wr, 0 op/s
Dec 03 01:53:35 compute-0 ceph-mon[192821]: pgmap v1167: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 204 B/s wr, 0 op/s
Dec 03 01:53:35 compute-0 podman[412740]: 2025-12-03 01:53:35.883481243 +0000 UTC m=+0.133468081 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 01:53:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:37 compute-0 ceph-mon[192821]: pgmap v1168: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
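[editor's note] Each pg_autoscaler line computes a raw PG target as the pool's share of raw capacity times its bias times (target PGs per OSD x OSD count). The factor of 300 implied by the figures above is consistent with the default mon_target_pg_per_osd=100 on this 3-OSD cluster (an assumption; the log does not state the option). A worked sketch, simplified from the real module, which also folds in replica size, pg_num_min/max, and a change threshold:

```python
# Reproduce the raw "pg target" figures logged above.
def pg_target(usage_ratio, bias, target_pg_per_osd=100, num_osds=3):
    return usage_ratio * bias * target_pg_per_osd * num_osds

print(pg_target(7.185749983720779e-06, 1.0))   # .mgr        -> 0.0021557...
print(pg_target(0.00025334537995702286, 1.0))  # images      -> 0.0760036...
print(pg_target(5.087256625643029e-07, 4.0))   # cephfs.meta -> 0.0006104...
# The "quantized to" values then round to a power of two, floored by
# pg_num_min and the pool's current pg_num, which is why tiny raw
# targets still show 16 or 32 in the log.
```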
Dec 03 01:53:38 compute-0 podman[412758]: 2025-12-03 01:53:38.883670639 +0000 UTC m=+0.140418257 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=edpm)
Dec 03 01:53:39 compute-0 ceph-mon[192821]: pgmap v1169: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:40 compute-0 sudo[412778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:40 compute-0 sudo[412778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:40 compute-0 sudo[412778]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:41 compute-0 sudo[412803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:53:41 compute-0 sudo[412803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:41 compute-0 sudo[412803]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:41 compute-0 sudo[412828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:41 compute-0 sudo[412828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:41 compute-0 sudo[412828]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:41 compute-0 ceph-mon[192821]: pgmap v1170: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:41 compute-0 sudo[412853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:53:41 compute-0 sudo[412853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:41 compute-0 sudo[412853]: pam_unix(sudo:session): session closed for user root
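[editor's note] The repeating sudo triplets above are cephadm's SSH orchestration pattern: a /bin/true connectivity probe, a `which python3` interpreter lookup, then the hash-named cephadm copy itself (here `gather-facts` with a 895 s timeout). A sketch that pulls that audit trail back out of the journal:

```python
# Extract the COMMAND= fields from sudo audit records in the journal.
import re
import subprocess

out = subprocess.run(["journalctl", "-t", "sudo", "-o", "cat", "--no-pager"],
                     capture_output=True, text=True).stdout
for match in re.finditer(r"COMMAND=(.+)", out):
    cmd = match.group(1)
    if "cephadm" in cmd or cmd.startswith(("/bin/true", "/bin/which")):
        print(cmd)
```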
Dec 03 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:53:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 015178ae-2471-47b1-b3ba-26591ba635a1 does not exist
Dec 03 01:53:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9c95dc75-4c30-4d2f-b515-39406adc2265 does not exist
Dec 03 01:53:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 35839085-16f6-49bd-8a42-9bf4a4a19cd6 does not exist
Dec 03 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:42 compute-0 sudo[412909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:42 compute-0 sudo[412909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:42 compute-0 sudo[412909]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:53:42 compute-0 sudo[412934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:53:42 compute-0 sudo[412934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:42 compute-0 sudo[412934]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:42 compute-0 sudo[412959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:42 compute-0 sudo[412959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:42 compute-0 sudo[412959]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:42 compute-0 sudo[412984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:53:42 compute-0 sudo[412984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
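[editor's note] This invocation is cephadm deploying OSDs for the default_drive_group spec: the config and keyring arrive on stdin (--config-json -), and everything after the `--` runs as ceph-volume inside the pinned ceph container, with --no-systemd because cephadm manages the units itself. The short-lived podman create/start/died/remove burst that follows (the throwaway cranky_goldstine container) is that containerized run. A sketch of the argument shape only, eliding the hashed binary path and the --env/--image pins from the log; actually executing it would build OSDs on those LVs:

```python
# Argument shape of the cephadm ceph-volume call above (illustrative).
argv = [
    "cephadm", "--timeout", "895",
    "ceph-volume", "--fsid", "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
    "--config-json", "-",      # ceph.conf + keyring arrive on stdin
    "--",                      # everything after runs inside the container
    "lvm", "batch", "--no-auto",
    "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
    "--yes", "--no-systemd",   # cephadm, not ceph-volume, owns the units
]
print(" ".join(argv))
```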
Dec 03 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.192265878 +0000 UTC m=+0.082919287 container create f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.153490785 +0000 UTC m=+0.044144234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:53:43 compute-0 systemd[1]: Started libpod-conmon-f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab.scope.
Dec 03 01:53:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:53:43 compute-0 ceph-mon[192821]: pgmap v1171: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.318290489 +0000 UTC m=+0.208943908 container init f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.33536827 +0000 UTC m=+0.226021649 container start f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:53:43 compute-0 cranky_goldstine[413076]: 167 167
Dec 03 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.345262748 +0000 UTC m=+0.235916117 container attach f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:53:43 compute-0 systemd[1]: libpod-f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab.scope: Deactivated successfully.
Dec 03 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.346160044 +0000 UTC m=+0.236813443 container died f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d4dd9b41a50b79e88e40f1f26944fe2ef5cbfdb1fa8def4f73d935e38079dbd-merged.mount: Deactivated successfully.
Dec 03 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.417583396 +0000 UTC m=+0.308236765 container remove f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 01:53:43 compute-0 podman[413064]: 2025-12-03 01:53:43.423150823 +0000 UTC m=+0.163723634 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, vendor=Red Hat, Inc.)
Dec 03 01:53:43 compute-0 podman[413065]: 2025-12-03 01:53:43.43051026 +0000 UTC m=+0.156026307 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd)
Dec 03 01:53:43 compute-0 podman[413061]: 2025-12-03 01:53:43.430686215 +0000 UTC m=+0.175651300 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 01:53:43 compute-0 systemd[1]: libpod-conmon-f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab.scope: Deactivated successfully.
Dec 03 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.623122577 +0000 UTC m=+0.066903196 container create c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.597415943 +0000 UTC m=+0.041196672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:53:43 compute-0 systemd[1]: Started libpod-conmon-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope.
Dec 03 01:53:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.795409161 +0000 UTC m=+0.239189820 container init c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.81099374 +0000 UTC m=+0.254774389 container start c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.817522714 +0000 UTC m=+0.261303423 container attach c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:53:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:45 compute-0 elastic_hermann[413163]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:53:45 compute-0 elastic_hermann[413163]: --> relative data size: 1.0
Dec 03 01:53:45 compute-0 elastic_hermann[413163]: --> All data devices are unavailable
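The three "-->" lines above are the stdout of the short-lived ceph-volume container (elastic_hermann): the drive group matched 3 LVM data devices and no physical ones, and all three LVs already carry OSD metadata, so there is nothing new to deploy. A minimal sketch of the same availability test, assuming (as the inventory later in this log shows) that an LV stamped with ceph.* LVM tags counts as consumed:

    import subprocess

    # LV paths reported by "ceph-volume lvm list" later in this log.
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

    for lv in LVS:
        # "lvs -o lv_tags" prints the LVM tags ceph-volume stamps on an OSD LV
        # (ceph.osd_id=..., ceph.osd_fsid=..., cf. the lv_tags fields below).
        tags = subprocess.run(
            ["lvs", "--noheadings", "-o", "lv_tags", lv],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        state = "unavailable (existing OSD)" if "ceph.osd_id=" in tags else "available"
        print(f"{lv}: {state}")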
Dec 03 01:53:45 compute-0 systemd[1]: libpod-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope: Deactivated successfully.
Dec 03 01:53:45 compute-0 systemd[1]: libpod-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope: Consumed 1.278s CPU time.
Dec 03 01:53:45 compute-0 podman[413147]: 2025-12-03 01:53:45.156276391 +0000 UTC m=+1.600057050 container died c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79-merged.mount: Deactivated successfully.
Dec 03 01:53:45 compute-0 podman[413147]: 2025-12-03 01:53:45.228364452 +0000 UTC m=+1.672145071 container remove c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:53:45 compute-0 systemd[1]: libpod-conmon-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope: Deactivated successfully.
Dec 03 01:53:45 compute-0 sudo[412984]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:45 compute-0 podman[413195]: 2025-12-03 01:53:45.321762343 +0000 UTC m=+0.125863477 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
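Each of the health_status=healthy events above is podman's periodic healthcheck running the configured test command (for node_exporter, '/openstack/healthcheck node_exporter', with the healthcheck state bind-mounted at /openstack). The same check can be driven on demand with "podman healthcheck run"; a sketch, assuming podman on PATH and the container name from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured test and
    # exits 0 when the check reports healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "node_exporter"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")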
Dec 03 01:53:45 compute-0 ceph-mon[192821]: pgmap v1172: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:45 compute-0 sudo[413219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:45 compute-0 sudo[413219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:45 compute-0 sudo[413219]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:45 compute-0 sudo[413251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:53:45 compute-0 sudo[413251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:45 compute-0 sudo[413251]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:45 compute-0 sudo[413276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:45 compute-0 sudo[413276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:45 compute-0 sudo[413276]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:45 compute-0 sudo[413301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:53:45 compute-0 sudo[413301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
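The sudo COMMAND above is the shape of every cephadm-mediated ceph-volume call in this log: the host-side cephadm copy under /var/lib/ceph/<fsid>/ starts a throwaway container from the pinned image digest and runs "ceph-volume lvm list --format json" inside it. A sketch of the equivalent call through the cephadm CLI, assuming the fsid and image digest seen here and that stdout carries only the JSON:

    import json
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Mirrors the sudo COMMAND above: cephadm runs ceph-volume in a one-shot
    # container and relays its JSON output.
    out = subprocess.run(
        ["cephadm", "--image", IMAGE, "ceph-volume", "--fsid", FSID,
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(sorted(json.loads(out)))  # e.g. ['0', '1', '2'] on this host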
Dec 03 01:53:45 compute-0 sshd-session[413192]: Invalid user gns3 from 80.253.31.232 port 55504
Dec 03 01:53:45 compute-0 sshd-session[413192]: Received disconnect from 80.253.31.232 port 55504:11: Bye Bye [preauth]
Dec 03 01:53:45 compute-0 sshd-session[413192]: Disconnected from invalid user gns3 80.253.31.232 port 55504 [preauth]
Dec 03 01:53:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.351797493 +0000 UTC m=+0.095004398 container create b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.317377623 +0000 UTC m=+0.060584588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:53:46 compute-0 systemd[1]: Started libpod-conmon-b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b.scope.
Dec 03 01:53:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.49968684 +0000 UTC m=+0.242893745 container init b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.518254663 +0000 UTC m=+0.261461568 container start b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.525062905 +0000 UTC m=+0.268269870 container attach b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:53:46 compute-0 kind_mclean[413381]: 167 167
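The bare "167 167" printed by kind_mclean (and by infallible_cray further down) is the uid/gid pair of the ceph user inside the image, which cephadm probes so host-side files can be chowned to match. Plausibly equivalent to stat-ing a ceph-owned path in the image; a hedged sketch (the probed path is an assumption):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Assumption: /var/lib/ceph inside the image is owned by the ceph user,
    # so stat returns the "167 167" seen in the log.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)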
Dec 03 01:53:46 compute-0 systemd[1]: libpod-b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b.scope: Deactivated successfully.
Dec 03 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.532682569 +0000 UTC m=+0.275889484 container died b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64ff86d6cc5afa97825f15a84a6f40508d1df2f3cd0e40c4b8063e10d1c5622-merged.mount: Deactivated successfully.
Dec 03 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.597203507 +0000 UTC m=+0.340410392 container remove b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:53:46 compute-0 systemd[1]: libpod-conmon-b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b.scope: Deactivated successfully.
Dec 03 01:53:46 compute-0 podman[413403]: 2025-12-03 01:53:46.865779364 +0000 UTC m=+0.095317767 container create 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:53:46 compute-0 podman[413403]: 2025-12-03 01:53:46.826473386 +0000 UTC m=+0.056011879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:53:46 compute-0 systemd[1]: Started libpod-conmon-0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1.scope.
Dec 03 01:53:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.050126128 +0000 UTC m=+0.279664601 container init 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.07046494 +0000 UTC m=+0.300003373 container start 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.077778137 +0000 UTC m=+0.307316620 container attach 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:53:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:53:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498895008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:53:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:53:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498895008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:53:47 compute-0 ceph-mon[192821]: pgmap v1173: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1498895008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:53:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1498895008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:53:47 compute-0 recursing_lewin[413419]: {
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:     "0": [
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:         {
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "devices": [
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "/dev/loop3"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             ],
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_name": "ceph_lv0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_size": "21470642176",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "name": "ceph_lv0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "tags": {
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cluster_name": "ceph",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.crush_device_class": "",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.encrypted": "0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osd_id": "0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.type": "block",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.vdo": "0"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             },
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "type": "block",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "vg_name": "ceph_vg0"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:         }
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:     ],
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:     "1": [
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:         {
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "devices": [
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "/dev/loop4"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             ],
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_name": "ceph_lv1",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_size": "21470642176",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "name": "ceph_lv1",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "tags": {
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cluster_name": "ceph",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.crush_device_class": "",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.encrypted": "0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osd_id": "1",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.type": "block",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.vdo": "0"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             },
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "type": "block",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "vg_name": "ceph_vg1"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:         }
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:     ],
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:     "2": [
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:         {
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "devices": [
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "/dev/loop5"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             ],
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_name": "ceph_lv2",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_size": "21470642176",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "name": "ceph_lv2",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "tags": {
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.cluster_name": "ceph",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.crush_device_class": "",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.encrypted": "0",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osd_id": "2",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.type": "block",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:                 "ceph.vdo": "0"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             },
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "type": "block",
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:             "vg_name": "ceph_vg2"
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:         }
Dec 03 01:53:47 compute-0 recursing_lewin[413419]:     ]
Dec 03 01:53:47 compute-0 recursing_lewin[413419]: }
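The JSON emitted by recursing_lewin above is keyed by OSD id, each entry listing the logical volumes that back that OSD together with their ceph.* tags. Flattening it into an osd -> device map is direct; a sketch, assuming the output was captured to a hypothetical lvm_list.json:

    import json

    # Hypothetical capture of the recursing_lewin output above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"backing={','.join(lv['devices'])}")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 osd_fsid=551e0f4a-... backing=/dev/loop3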
Dec 03 01:53:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:47 compute-0 systemd[1]: libpod-0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1.scope: Deactivated successfully.
Dec 03 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.964412606 +0000 UTC m=+1.193951039 container died 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad-merged.mount: Deactivated successfully.
Dec 03 01:53:48 compute-0 podman[413403]: 2025-12-03 01:53:48.074097037 +0000 UTC m=+1.303635470 container remove 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:53:48 compute-0 systemd[1]: libpod-conmon-0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1.scope: Deactivated successfully.
Dec 03 01:53:48 compute-0 sudo[413301]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:48 compute-0 sudo[413441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:48 compute-0 sudo[413441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:48 compute-0 sudo[413441]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:48 compute-0 sudo[413466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:53:48 compute-0 sudo[413466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:48 compute-0 sudo[413466]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:48 compute-0 sudo[413491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:48 compute-0 sudo[413491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:48 compute-0 sudo[413491]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:48 compute-0 sudo[413516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:53:48 compute-0 sudo[413516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.316051676 +0000 UTC m=+0.094316658 container create d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:53:49 compute-0 ceph-mon[192821]: pgmap v1174: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.279337422 +0000 UTC m=+0.057602414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:53:49 compute-0 systemd[1]: Started libpod-conmon-d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902.scope.
Dec 03 01:53:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.463896972 +0000 UTC m=+0.242162004 container init d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.480118459 +0000 UTC m=+0.258383431 container start d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 01:53:49 compute-0 infallible_cray[413593]: 167 167
Dec 03 01:53:49 compute-0 systemd[1]: libpod-d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902.scope: Deactivated successfully.
Dec 03 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.494646138 +0000 UTC m=+0.272911160 container attach d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.495680227 +0000 UTC m=+0.273945229 container died d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4670849a67fa2ed137219905757d53bef962c02e53f570fd71392f2337ff4001-merged.mount: Deactivated successfully.
Dec 03 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.575929508 +0000 UTC m=+0.354194480 container remove d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:53:49 compute-0 systemd[1]: libpod-conmon-d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902.scope: Deactivated successfully.
Dec 03 01:53:49 compute-0 podman[413616]: 2025-12-03 01:53:49.882361541 +0000 UTC m=+0.094600556 container create ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:53:49 compute-0 podman[413616]: 2025-12-03 01:53:49.848203349 +0000 UTC m=+0.060442424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:53:49 compute-0 systemd[1]: Started libpod-conmon-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope.
Dec 03 01:53:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:53:50 compute-0 podman[413616]: 2025-12-03 01:53:50.032275285 +0000 UTC m=+0.244514390 container init ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:53:50 compute-0 podman[413616]: 2025-12-03 01:53:50.063020721 +0000 UTC m=+0.275259756 container start ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:53:50 compute-0 podman[413616]: 2025-12-03 01:53:50.06971873 +0000 UTC m=+0.281957755 container attach ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:53:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:50 compute-0 nova_compute[351485]: 2025-12-03 01:53:50.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]: {
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "osd_id": 2,
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "type": "bluestore"
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:     },
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "osd_id": 1,
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "type": "bluestore"
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:     },
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "osd_id": 0,
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:         "type": "bluestore"
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]:     }
Dec 03 01:53:51 compute-0 clever_chandrasekhar[413632]: }
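The JSON block printed by the clever_chandrasekhar container is a ceph-volume style device inventory: a map keyed by OSD UUID, each entry naming the backing LV, the OSD id, and the bluestore type for cluster 3765feb2-36f8-5b86-b74c-64e9221f9c4c. A minimal Python sketch of consuming that output (the filename is illustrative; in the log the JSON went to stdout and was captured by journald):

    import json

    # Hypothetical capture of the container's stdout shown above.
    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    # Keys are OSD UUIDs; each value describes one bluestore OSD.
    for osd_uuid, osd in sorted(inventory.items(),
                                key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} "
              f"(type={osd['type']}, fsid={osd['ceph_fsid']})")

Run against the JSON above, this prints osd.0 through osd.2 with their /dev/mapper/ceph_vg*-ceph_lv* devices.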
Dec 03 01:53:51 compute-0 systemd[1]: libpod-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope: Deactivated successfully.
Dec 03 01:53:51 compute-0 podman[413616]: 2025-12-03 01:53:51.344152206 +0000 UTC m=+1.556391281 container died ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:53:51 compute-0 systemd[1]: libpod-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope: Consumed 1.276s CPU time.
Dec 03 01:53:51 compute-0 ceph-mon[192821]: pgmap v1175: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe-merged.mount: Deactivated successfully.
Dec 03 01:53:51 compute-0 podman[413616]: 2025-12-03 01:53:51.458324652 +0000 UTC m=+1.670563657 container remove ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:53:51 compute-0 systemd[1]: libpod-conmon-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope: Deactivated successfully.
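The start/attach/died/remove quartet from podman, bracketed by the libpod-*.scope and overlay "merged" mount units from systemd, is the footprint of a one-shot, auto-removed container: cephadm launches the ceph image briefly to collect the device inventory, then tears it down (1.276s of CPU per systemd). The exact command line is not in the log, but ceph-volume raw list emits JSON of exactly the shape printed above; a hedged sketch of the pattern:

    import subprocess

    # One-shot, auto-removed container, as in the log. The digest is
    # taken from the podman events above; a real cephadm run also
    # bind-mounts /dev and friends so the tool can see host devices.
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    result = subprocess.run(
        ["podman", "run", "--rm", image, "ceph-volume", "raw", "list"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)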
Dec 03 01:53:51 compute-0 sudo[413516]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:53:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:53:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:53:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:53:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e60e27d3-f16e-429f-b5df-3d90e1fe752b does not exist
Dec 03 01:53:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 19f40171-6d0e-4543-a301-e30adc95a6f0 does not exist
Dec 03 01:53:51 compute-0 sudo[413675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:53:51 compute-0 sudo[413675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:51 compute-0 sudo[413675]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:51 compute-0 sudo[413700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:53:51 compute-0 sudo[413700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:53:51 compute-0 sudo[413700]: pam_unix(sudo:session): session closed for user root
Dec 03 01:53:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:53:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:53:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:53 compute-0 ceph-mon[192821]: pgmap v1176: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.601 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 01:53:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
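The Acquiring / acquired / released triplet around "compute_resources" is oslo.concurrency's standard lock tracing: resource-tracker methods serialize on one named semaphore, and the waited/held timings in the DEBUG lines come from the same wrapper. A minimal sketch of the pattern (nova reaches it through its own helper; the function body here is illustrative):

    from oslo_concurrency import lockutils

    # All callers decorated with the same lock name serialize with each
    # other and emit the Acquiring/acquired/released DEBUG lines above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section over shared tracker state

    clean_compute_node_cache()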
Dec 03 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.620 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.620 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:53:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:53:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3153023216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.101 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:53:55 compute-0 ceph-mon[192821]: pgmap v1177: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:55 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3153023216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
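Because the node is RBD-backed, the resource audit sizes its disk pool by shelling out to ceph df rather than statting a local filesystem; the 0.481s round trip shows up on the mon as client.openstack dispatching {"prefix": "df"}. A sketch of the same probe (field names as in current ceph releases; verify against your version):

    import json
    import subprocess

    # The exact command the tracker runs, per the log line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(f"cluster free: {stats['total_avail_bytes'] / 2**30:.1f} GiB")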
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.648 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.650 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4535MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.651 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.652 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.745 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.746 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.768 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:53:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:53:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429369695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.312 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.322 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.350 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
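The inventory dict in the report line above is what placement uses to size the provider: schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host offers 8 x 4.0 = 32 VCPU, 7679 - 512 = 7167 MB of RAM, and 59 x 0.9 = 53.1 GB of disk. Worked directly on the logged numbers:

    # Inventory as reported for provider 107397d2-51bc-4a03-bce4-7cd69319cf05.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {effective:g} schedulable")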
Dec 03 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.352 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.352 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:53:56 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3429369695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:53:57 compute-0 ceph-mon[192821]: pgmap v1178: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:53:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.353 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.353 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:53:59 compute-0 ceph-mon[192821]: pgmap v1179: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:53:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:59.619 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:53:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:59.619 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:53:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:59.620 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:53:59 compute-0 podman[158098]: time="2025-12-03T01:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:53:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:53:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8127 "" "Go-http-client/1.1"
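The GET requests against /v4.9.3/libpod/... are the podman system service (pid 158098) answering the prometheus-podman-exporter over the unix socket named in its config_data (CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the same query; the socket path and endpoint come from the log, everything else is illustrative:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint and API version as the access-log lines above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(f"{len(containers)} containers")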
Dec 03 01:54:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:00 compute-0 podman[413771]: 2025-12-03 01:54:00.87096081 +0000 UTC m=+0.106558243 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:54:00 compute-0 podman[413769]: 2025-12-03 01:54:00.871153166 +0000 UTC m=+0.121121214 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:54:00 compute-0 podman[413770]: 2025-12-03 01:54:00.888294239 +0000 UTC m=+0.130930400 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.license=GPLv2)
Dec 03 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:54:01 compute-0 nova_compute[351485]: 2025-12-03 01:54:01.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:54:01 compute-0 ceph-mon[192821]: pgmap v1180: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:03 compute-0 nova_compute[351485]: 2025-12-03 01:54:03.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:54:03 compute-0 nova_compute[351485]: 2025-12-03 01:54:03.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:54:03 compute-0 ceph-mon[192821]: pgmap v1181: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:05 compute-0 ceph-mon[192821]: pgmap v1182: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:06 compute-0 podman[413829]: 2025-12-03 01:54:06.901651697 +0000 UTC m=+0.148665569 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:54:07 compute-0 ceph-mon[192821]: pgmap v1183: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:09 compute-0 ceph-mon[192821]: pgmap v1184: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:09 compute-0 podman[413848]: 2025-12-03 01:54:09.874850594 +0000 UTC m=+0.122604465 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm)
Dec 03 01:54:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:11 compute-0 ceph-mon[192821]: pgmap v1185: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:13 compute-0 ceph-mon[192821]: pgmap v1186: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:13 compute-0 podman[413868]: 2025-12-03 01:54:13.881076074 +0000 UTC m=+0.124642422 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350)
Dec 03 01:54:13 compute-0 podman[413869]: 2025-12-03 01:54:13.929724125 +0000 UTC m=+0.164548077 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 01:54:13 compute-0 podman[413867]: 2025-12-03 01:54:13.947899147 +0000 UTC m=+0.195740336 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:54:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:15 compute-0 ceph-mon[192821]: pgmap v1187: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:15 compute-0 podman[413929]: 2025-12-03 01:54:15.837814522 +0000 UTC m=+0.085778877 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:54:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:17 compute-0 ceph-mon[192821]: pgmap v1188: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:18.906 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 01:54:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:18.907 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 01:54:19 compute-0 ceph-mon[192821]: pgmap v1189: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:20.911 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
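This transaction closes the metadata agent's nb_cfg acknowledgement loop: the SB_Global update at 01:54:18 raised nb_cfg to 3, the agent deliberately waited 2 seconds, then wrote neutron:ovn-metadata-sb-cfg=3 into its Chassis_Private row via an ovsdbapp DbSetCommand. A hedged sketch of that write (connection setup elided; sb_idl stands for a connected ovsdbapp southbound API object, and the call mirrors the command logged above):

    import time

    def bump_sb_cfg(sb_idl, chassis_uuid, nb_cfg):
        # Per the log, the agent delays 2 s before acking the new nb_cfg.
        time.sleep(2)
        with sb_idl.transaction(check_error=True) as txn:
            txn.add(sb_idl.db_set(
                "Chassis_Private", chassis_uuid,
                ("external_ids",
                 {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
            ))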
Dec 03 01:54:21 compute-0 ceph-mon[192821]: pgmap v1190: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:23 compute-0 ceph-mon[192821]: pgmap v1191: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:25 compute-0 ceph-mon[192821]: pgmap v1192: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:27 compute-0 ceph-mon[192821]: pgmap v1193: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:54:28
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.log']
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:54:29 compute-0 podman[158098]: time="2025-12-03T01:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:54:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 01:54:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8135 "" "Go-http-client/1.1"
Dec 03 01:54:29 compute-0 ceph-mon[192821]: pgmap v1194: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.593 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.593 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.617 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.729 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.730 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.744 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.744 351492 INFO nova.compute.claims [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Claim successful on node compute-0.ctlplane.example.com
Dec 03 01:54:31 compute-0 ceph-mon[192821]: pgmap v1195: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:31 compute-0 podman[413957]: 2025-12-03 01:54:31.864142411 +0000 UTC m=+0.102040805 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:54:31 compute-0 podman[413955]: 2025-12-03 01:54:31.870324995 +0000 UTC m=+0.128530321 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.875 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:31 compute-0 podman[413956]: 2025-12-03 01:54:31.887809498 +0000 UTC m=+0.132552615 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true)
Dec 03 01:54:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:32 compute-0 sshd-session[413953]: Invalid user sonarqube from 103.146.202.174 port 36916
Dec 03 01:54:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:54:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130994907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.384 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.401 351492 DEBUG nova.compute.provider_tree [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.441 351492 DEBUG nova.scheduler.client.report [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.468 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.469 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.513 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.514 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 01:54:32 compute-0 sshd-session[413953]: Received disconnect from 103.146.202.174 port 36916:11: Bye Bye [preauth]
Dec 03 01:54:32 compute-0 sshd-session[413953]: Disconnected from invalid user sonarqube 103.146.202.174 port 36916 [preauth]
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.541 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.587 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.723 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.726 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.726 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Creating image(s)
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.765 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.822 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1130994907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.892 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.902 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.904 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:33 compute-0 nova_compute[351485]: 2025-12-03 01:54:33.156 351492 DEBUG nova.virt.libvirt.imagebackend [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 03 01:54:33 compute-0 ceph-mon[192821]: pgmap v1196: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:33 compute-0 nova_compute[351485]: 2025-12-03 01:54:33.965 351492 WARNING oslo_policy.policy [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 03 01:54:33 compute-0 nova_compute[351485]: 2025-12-03 01:54:33.966 351492 WARNING oslo_policy.policy [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 03 01:54:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:34 compute-0 ceph-mon[192821]: pgmap v1197: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.354 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.458 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.460 351492 DEBUG nova.virt.images [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] 466cf0db-c3be-4d70-b9f3-08c056c2cad9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.463 351492 DEBUG nova.privsep.utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.464 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.756 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.765 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.860 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.863 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.922 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.933 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 03 01:54:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 03 01:54:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 03 01:54:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 773 KiB/s rd, 1 op/s
Dec 03 01:54:36 compute-0 nova_compute[351485]: 2025-12-03 01:54:36.337 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Successfully created port: d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 01:54:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 03 01:54:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 03 01:54:37 compute-0 ceph-mon[192821]: osdmap e125: 3 total, 3 up, 3 in
Dec 03 01:54:37 compute-0 ceph-mon[192821]: pgmap v1199: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 773 KiB/s rd, 1 op/s
Dec 03 01:54:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 03 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.564 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.631 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Successfully updated port: d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.717 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.717 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.718 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.749 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 01:54:37 compute-0 podman[414175]: 2025-12-03 01:54:37.882982645 +0000 UTC m=+0.130109827 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 01:54:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.976 351492 DEBUG nova.objects.instance [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.052 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.110 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.118 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.119 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.120 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:38 compute-0 ceph-mon[192821]: osdmap e126: 3 total, 3 up, 3 in
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.150 351492 DEBUG nova.compute.manager [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.151 351492 DEBUG nova.compute.manager [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing instance network info cache due to event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.161 351492 DEBUG oslo_concurrency.lockutils [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.166 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.166 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 966 KiB/s rd, 2 op/s
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.217 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.219 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.261 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.269 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.966 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 01:54:39 compute-0 ceph-mon[192821]: pgmap v1201: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 966 KiB/s rd, 2 op/s
Dec 03 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.257 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.989s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:39 compute-0 sshd-session[414308]: Invalid user temp from 146.190.144.138 port 39766
Dec 03 01:54:39 compute-0 sshd-session[414308]: Received disconnect from 146.190.144.138 port 39766:11: Bye Bye [preauth]
Dec 03 01:54:39 compute-0 sshd-session[414308]: Disconnected from invalid user temp 146.190.144.138 port 39766 [preauth]
Dec 03 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.527 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.528 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Ensure instance console log exists: /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.530 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.530 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.531 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 35 MiB data, 172 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 985 KiB/s wr, 40 op/s
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.696 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.727 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.728 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance network_info: |[{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.728 351492 DEBUG oslo_concurrency.lockutils [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.729 351492 DEBUG nova.network.neutron [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.735 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start _get_guest_xml network_info=[{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.749 351492 WARNING nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.766 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.767 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.778 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.779 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.780 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.781 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.782 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.783 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.784 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.785 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.785 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.786 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.787 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.788 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.788 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.789 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.797 351492 DEBUG nova.privsep.utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 03 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.799 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:40 compute-0 podman[414364]: 2025-12-03 01:54:40.898342976 +0000 UTC m=+0.150505146 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543)
Dec 03 01:54:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 01:54:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606851180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:54:41 compute-0 ceph-mon[192821]: pgmap v1202: 321 pgs: 321 active+clean; 35 MiB data, 172 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 985 KiB/s wr, 40 op/s
Dec 03 01:54:41 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2606851180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.286 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
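The repeated "ceph mon dump --format=json" subprocesses above (each answered by the co-located ceph-mon's handle_command/dispatch lines) are how the RBD backend resolves the current monitor addresses before writing them into the guest disk XML further down (the <host name="192.168.122.100" port="6789"/> elements). Reproducing the call and pulling out the addresses, assuming the same client id and conf path:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mons = json.loads(out)
    # Each mon entry carries its public addresses; the v1 (port 6789)
    # address is what lands in the libvirt <host> element below.
    for mon in mons["mons"]:
        print(mon["name"], mon["public_addrs"])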
Dec 03 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.287 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 01:54:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623815996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.745 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.793 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
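The "rbd image ... does not exist" line above is benign: rbd_utils probes for a leftover config-drive image before the spawn creates one. The same probe through the python-rbd bindings, assuming the vms pool and the client.openstack identity seen elsewhere in this log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening the image is the existence check; close immediately if it works.
        rbd.Image(ioctx, "9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config").close()
        print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")   # the benign message in the log
    finally:
        ioctx.close()
        cluster.shutdown()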
Dec 03 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.809 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 56 op/s
Dec 03 01:54:42 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1623815996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 01:54:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938135160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.321 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.322 351492 DEBUG nova.virt.libvirt.vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:54:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-2j005007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:54:32Z,user_data=None,user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=9182286b-5a08-4961-b4bb-c0e2f05746f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.323 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.323 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
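The two os_vif_util lines above convert nova's internal VIF dict into the os-vif VIFOpenVSwitch object that is later handed to os_vif.plug() (see the "Plugging vif" and "Successfully plugged vif" lines below). A trimmed sketch of that hand-off with values taken from this log; the exact field set is approximate, and a real call also carries the Network object and needs root plus a reachable ovsdb:

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()   # loads the 'ovs' plugin named in the converted object

    vif = vif_obj.VIFOpenVSwitch(
        id="d2a50b9b-c23e-4e96-a247-ba01de01a3f1",
        address="fa:16:3e:8f:a6:32",
        bridge_name="br-int",
        vif_name="tapd2a50b9b-c2",
        plugin="ovs",
        has_traffic_filtering=True,
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id="d2a50b9b-c23e-4e96-a247-ba01de01a3f1",
            datapath_type="system"),
    )
    inst = instance_info.InstanceInfo(
        uuid="9182286b-5a08-4961-b4bb-c0e2f05746f7",
        name="instance-00000001")

    os_vif.plug(vif, inst)   # performs the ovsdb transactions logged below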
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.325 351492 DEBUG nova.objects.instance [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.345 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] End _get_guest_xml xml=<domain type="kvm">
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <uuid>9182286b-5a08-4961-b4bb-c0e2f05746f7</uuid>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <name>instance-00000001</name>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <memory>524288</memory>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <metadata>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <nova:name>test_0</nova:name>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 01:54:40</nova:creationTime>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <nova:flavor name="m1.small">
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:memory>512</nova:memory>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:ephemeral>1</nova:ephemeral>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <nova:port uuid="d2a50b9b-c23e-4e96-a247-ba01de01a3f1">
Dec 03 01:54:42 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="192.168.0.5" ipVersion="4"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   </metadata>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <system>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <entry name="serial">9182286b-5a08-4961-b4bb-c0e2f05746f7</entry>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <entry name="uuid">9182286b-5a08-4961-b4bb-c0e2f05746f7</entry>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </system>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <os>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   </os>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <features>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <apic/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   </features>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   </clock>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   </cpu>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   <devices>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/9182286b-5a08-4961-b4bb-c0e2f05746f7_disk">
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </source>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </auth>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0">
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </source>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </auth>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <target dev="vdb" bus="virtio"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config">
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </source>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 01:54:42 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       </auth>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:8f:a6:32"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <target dev="tapd2a50b9b-c2"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </interface>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/console.log" append="off"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </serial>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <video>
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </video>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </rng>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 01:54:42 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 01:54:42 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 01:54:42 compute-0 nova_compute[351485]:   </devices>
Dec 03 01:54:42 compute-0 nova_compute[351485]: </domain>
Dec 03 01:54:42 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
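_get_guest_xml only renders the document; nova then hands this XML to libvirt to define and boot the domain, which is what produces the "Started Virtual Machine qemu-1-instance-00000001" systemd line later in this log. The equivalent direct libvirt calls, assuming the XML above saved to a file and the system URI:

    import libvirt

    xml = open("instance-00000001.xml").read()   # the <domain> document above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # boot it; systemd-machined registers the machine
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()

Note the RBD <auth> elements reference a libvirt secret (uuid 3765feb2-...); that secret must already be defined, which is why virtsecretd is started just before the machine appears below.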
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Preparing to wait for external event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
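The three lockutils lines above register a waiter for network-vif-plugged-d2a50b9b-... before the VIF is plugged, so a callback from Neutron that arrives early can never be lost; the spawn later blocks on that event. Nova's implementation lives in nova.compute.manager.InstanceEvents and is eventlet-based; the register-then-wait pattern, reduced to plain threading:

    import threading

    class InstanceEvents:
        """Register-then-wait so an event delivered early is never lost."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, name):
            with self._lock:                     # mirrors the '-events' lock above
                return self._events.setdefault(name, threading.Event())

        def deliver(self, name):
            with self._lock:
                ev = self._events.setdefault(name, threading.Event())
            ev.set()

    events = InstanceEvents()
    waiter = events.prepare("network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1")
    # ... plug the VIF and start the guest; Neutron's callback eventually
    # invokes events.deliver() with the same name ...
    waiter.wait(timeout=300)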
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.347 351492 DEBUG nova.virt.libvirt.vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:54:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-2j005007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:54:32Z,user_data=None,user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=9182286b-5a08-4961-b4bb-c0e2f05746f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.347 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.348 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.348 351492 DEBUG os_vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.387 351492 DEBUG ovsdbapp.backend.ovs_idl [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.387 351492 DEBUG ovsdbapp.backend.ovs_idl [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.387 351492 DEBUG ovsdbapp.backend.ovs_idl [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.388 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.389 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.389 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.415 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.416 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.416 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.418 351492 INFO oslo.privsep.daemon [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpftcxxza1/privsep.sock']
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.729 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 03 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 03 01:54:42 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.993 351492 DEBUG nova.network.neutron [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated VIF entry in instance network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.998 351492 DEBUG nova.network.neutron [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.017 351492 DEBUG oslo_concurrency.lockutils [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.193 351492 INFO oslo.privsep.daemon [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Spawned new privsep daemon via rootwrap
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.056 414469 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.064 414469 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.070 414469 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.070 414469 INFO oslo.privsep.daemon [-] privsep daemon running as pid 414469
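The privsep lines above show nova forking a root helper via sudo/nova-rootwrap that then drops to exactly CAP_DAC_OVERRIDE and CAP_NET_ADMIN, matching the vif_plug_ovs.privsep.vif_plug context named on the helper command line. Declaring such a context follows the oslo.privsep usage pattern, roughly:

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # A privileged context that keeps only the two capabilities the
    # daemon advertises in the log above.
    vif_plug = priv_context.PrivContext(
        "vif_plug_ovs",
        cfg_section="vif_plug_ovs_privileged",
        pypath=__name__ + ".vif_plug",
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def set_device_mtu(device, mtu):
        # Runs inside the root helper over the privsep socket,
        # not in the nova-compute process itself.
        ...

Calls decorated this way are serialized over the unix socket path seen in the helper invocation (/tmp/tmp.../privsep.sock).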
Dec 03 01:54:43 compute-0 ceph-mon[192821]: pgmap v1203: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 56 op/s
Dec 03 01:54:43 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1938135160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:54:43 compute-0 ceph-mon[192821]: osdmap e127: 3 total, 3 up, 3 in
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.581 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.581 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2a50b9b-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.582 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2a50b9b-c2, col_values=(('external_ids', {'iface-id': 'd2a50b9b-c23e-4e96-a247-ba01de01a3f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:a6:32', 'vm-uuid': '9182286b-5a08-4961-b4bb-c0e2f05746f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:54:43 compute-0 NetworkManager[48912]: <info>  [1764726883.5852] manager: (tapd2a50b9b-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.586 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.596 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.598 351492 INFO os_vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2')
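The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are what "plugging" amounts to for an OVS VIF: create the tap port on br-int and stamp it with the iface-id Neutron knows, which is what lets ovn-controller claim the port a few seconds later. The same pair of operations through ovsdbapp directly, assuming the local ovsdb at tcp:127.0.0.1:6640 as in this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:8f:a6:32",
        "vm-uuid": "9182286b-5a08-4961-b4bb-c0e2f05746f7",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapd2a50b9b-c2", may_exist=True))
        txn.add(api.db_set("Interface", "tapd2a50b9b-c2",
                           ("external_ids", external_ids)))

may_exist=True is also why the earlier AddBridgeCommand for br-int reported "Transaction caused no change": the bridge already existed, so the command is an idempotent no-op.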
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.686 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.686 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.687 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.687 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:8f:a6:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.688 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Using config drive
Dec 03 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.770 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 63 op/s
Dec 03 01:54:44 compute-0 podman[414495]: 2025-12-03 01:54:44.822731966 +0000 UTC m=+0.098815894 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 03 01:54:44 compute-0 podman[414494]: 2025-12-03 01:54:44.833569965 +0000 UTC m=+0.120074340 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64)
Dec 03 01:54:44 compute-0 podman[414493]: 2025-12-03 01:54:44.864003011 +0000 UTC m=+0.158177004 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 03 01:54:45 compute-0 ceph-mon[192821]: pgmap v1205: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 63 op/s
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.148 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Creating config drive at /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.161 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa5bt4xdf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 941 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.300 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa5bt4xdf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.362 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.372 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.660 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.661 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deleting local config drive /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config because it was imported into RBD.
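The config-drive sequence above builds the ISO locally with mkisofs and then imports it into the vms pool so the cdrom disk in the XML (…_disk.config, bus=sata) has a backing image; the local file is deleted once imported. The two commands, reproduced exactly as the log ran them (the /tmp staging directory is nova's temporary metadata tree):

    import subprocess

    uuid = "9182286b-5a08-4961-b4bb-c0e2f05746f7"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1. Build the ISO; the config-2 volume label is what cloud-init probes for.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpa5bt4xdf",
    ])

    # 2. Import it into RBD so <disk type="network" device="cdrom"> has a source.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])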
Dec 03 01:54:46 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 01:54:46 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 01:54:46 compute-0 podman[414599]: 2025-12-03 01:54:46.836102763 +0000 UTC m=+0.121584783 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:54:46 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 03 01:54:46 compute-0 kernel: tapd2a50b9b-c2: entered promiscuous mode
Dec 03 01:54:46 compute-0 NetworkManager[48912]: <info>  [1764726886.8705] manager: (tapd2a50b9b-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Dec 03 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00027|binding|INFO|Claiming lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for this chassis.
Dec 03 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00028|binding|INFO|d2a50b9b-c23e-4e96-a247-ba01de01a3f1: Claiming fa:16:3e:8f:a6:32 192.168.0.5
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.874 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.879 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.893 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:a6:32 192.168.0.5'], port_security=['fa:16:3e:8f:a6:32 192.168.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.5/24', 'neutron:device_id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d2a50b9b-c23e-4e96-a247-ba01de01a3f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.894 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis
Dec 03 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.896 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.898 288528 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp2pmbp4iw/privsep.sock']
Dec 03 01:54:46 compute-0 systemd-udevd[414657]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 01:54:46 compute-0 NetworkManager[48912]: <info>  [1764726886.9344] device (tapd2a50b9b-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 01:54:46 compute-0 NetworkManager[48912]: <info>  [1764726886.9354] device (tapd2a50b9b-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 01:54:46 compute-0 systemd-machined[138558]: New machine qemu-1-instance-00000001.
Dec 03 01:54:46 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.977 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 03 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00029|binding|INFO|Setting lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 ovn-installed in OVS
Dec 03 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00030|binding|INFO|Setting lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 up in Southbound
Dec 03 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.989 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 03 01:54:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:54:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2420847559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:54:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:54:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2420847559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:54:47 compute-0 ceph-mon[192821]: pgmap v1206: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 941 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Dec 03 01:54:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2420847559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:54:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2420847559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.598 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726887.5981455, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.599 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Started (Lifecycle Event)
Dec 03 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.647 288528 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 03 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.649 288528 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2pmbp4iw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 03 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.515 414755 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.523 414755 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.527 414755 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 03 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.528 414755 INFO oslo.privsep.daemon [-] privsep daemon running as pid 414755
Dec 03 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.653 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[9095c4c0-ecef-4c8d-ab53-aee7eae29338]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.687 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.693 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726887.5982256, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.693 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Paused (Lifecycle Event)
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.720 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.727 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.732 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.748 351492 DEBUG nova.compute.manager [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.748 351492 DEBUG oslo_concurrency.lockutils [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.749 351492 DEBUG oslo_concurrency.lockutils [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.749 351492 DEBUG oslo_concurrency.lockutils [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.749 351492 DEBUG nova.compute.manager [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Processing event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.750 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.752 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.758 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726887.7577085, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.758 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Resumed (Lifecycle Event)
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.762 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.775 351492 INFO nova.virt.libvirt.driver [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance spawned successfully.
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.775 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.781 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.794 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.805 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.806 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.807 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.808 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.808 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.809 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.816 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.867 351492 INFO nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 15.14 seconds to spawn the instance on the hypervisor.
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.868 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.936 351492 INFO nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 16.24 seconds to build instance.
Dec 03 01:54:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.954 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.206 414755 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.206 414755 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.206 414755 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 854 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Dec 03 01:54:48 compute-0 nova_compute[351485]: 2025-12-03 01:54:48.585 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.857 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f8184ea0-e0dd-4510-bb82-d843f0a535a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.859 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ba11691-21 in ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.861 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ba11691-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.861 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ae1e41-795c-4b25-8e64-ab17ebd3d79e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.865 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6d808e-3491-45c8-a123-d7995c480be4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.895 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[900b18ce-eed7-4a13-8b71-3e96d272d4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.937 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[787facbe-6165-4480-bbba-2f4ed0ba4f03]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.940 288528 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp9j9wa7zp/privsep.sock']
Dec 03 01:54:49 compute-0 ceph-mon[192821]: pgmap v1207: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 854 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Dec 03 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.696 288528 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 03 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.698 288528 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9j9wa7zp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 03 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.554 414771 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.558 414771 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.560 414771 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 03 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.561 414771 INFO oslo.privsep.daemon [-] privsep daemon running as pid 414771
Dec 03 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.702 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[68b80d36-222f-4d8b-af57-fbd9e7dde86a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.827 351492 DEBUG nova.compute.manager [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.828 351492 DEBUG oslo_concurrency.lockutils [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.828 351492 DEBUG oslo_concurrency.lockutils [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.829 351492 DEBUG oslo_concurrency.lockutils [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.830 351492 DEBUG nova.compute.manager [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] No waiting events found dispatching network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.831 351492 WARNING nova.compute.manager [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received unexpected event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for instance with vm_state active and task_state None.
Dec 03 01:54:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 881 KiB/s wr, 33 op/s
Dec 03 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.254 414771 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.254 414771 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.254 414771 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.858 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[aeffe574-299a-4e8f-a9f8-adf1274fcf4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:50 compute-0 NetworkManager[48912]: <info>  [1764726890.9057] manager: (tap7ba11691-20): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec 03 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.904 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a6455f5b-0a3d-4efc-9f2e-8d3c6efc352c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:50 compute-0 systemd-udevd[414783]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.966 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[aa297b0a-988e-432e-817d-65bc49062425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.970 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4401a42a-99f8-4d92-b077-c72af5d878f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 NetworkManager[48912]: <info>  [1764726891.0047] device (tap7ba11691-20): carrier: link connected
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.013 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e1198794-480a-4cbe-8c33-e64fafc74bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.039 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb42e5a-f75a-425d-959f-ecd51b1aee8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 19031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414801, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.061 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5924698f-9448-4987-a67b-4f9c7aa43cd4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:a4dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573048, 'tstamp': 573048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414802, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.083 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f383b8-350f-4d6b-9af7-afc5eab7e902]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 19031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414803, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.141 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcc51e9-b3d3-492a-9ee7-64d9125f428e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.232 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[07dc4ea4-7142-443f-b0b7-1ee844b552b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.234 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.234 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.235 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.238 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:51 compute-0 NetworkManager[48912]: <info>  [1764726891.2394] manager: (tap7ba11691-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Dec 03 01:54:51 compute-0 kernel: tap7ba11691-20: entered promiscuous mode
Dec 03 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.244 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.247 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.250 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:51 compute-0 ovn_controller[89134]: 2025-12-03T01:54:51Z|00031|binding|INFO|Releasing lport 8c8945aa-32be-4ced-a7fe-2b9502f30008 from this chassis (sb_readonly=0)
Dec 03 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.252 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.253 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ba11691-2711-476c-9191-cb6dfd0efa7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ba11691-2711-476c-9191-cb6dfd0efa7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.254 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3c79b873-9341-4ce4-9a9c-be3e8f59ef42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.256 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: global
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/7ba11691-2711-476c-9191-cb6dfd0efa7d.pid.haproxy
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.257 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'env', 'PROCESS_TAG=haproxy-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ba11691-2711-476c-9191-cb6dfd0efa7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 03 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:51 compute-0 ceph-mon[192821]: pgmap v1208: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 881 KiB/s wr, 33 op/s
Dec 03 01:54:51 compute-0 podman[414836]: 2025-12-03 01:54:51.838839412 +0000 UTC m=+0.119660498 container create 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 03 01:54:51 compute-0 podman[414836]: 2025-12-03 01:54:51.781798858 +0000 UTC m=+0.062620004 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 01:54:51 compute-0 systemd[1]: Started libpod-conmon-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28.scope.
Dec 03 01:54:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:54:51 compute-0 sudo[414847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2b42ad51a1eafabc174b54703a8a7fc40735ce50000101ab3bd4077ab4d5c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 01:54:52 compute-0 sudo[414847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:52 compute-0 sudo[414847]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:52 compute-0 podman[414836]: 2025-12-03 01:54:52.011047214 +0000 UTC m=+0.291868320 container init 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:54:52 compute-0 podman[414836]: 2025-12-03 01:54:52.019978738 +0000 UTC m=+0.300799804 container start 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 03 01:54:52 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : New worker (414906) forked
Dec 03 01:54:52 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : Loading success.
Dec 03 01:54:52 compute-0 sudo[414881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:54:52 compute-0 sudo[414881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:52 compute-0 sudo[414881]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:52 compute-0 sudo[414917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:52 compute-0 sudo[414917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:52 compute-0 sudo[414917]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 57 op/s
Dec 03 01:54:52 compute-0 sudo[414942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 01:54:52 compute-0 sudo[414942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:52 compute-0 nova_compute[351485]: 2025-12-03 01:54:52.733 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:53 compute-0 podman[415037]: 2025-12-03 01:54:53.197274493 +0000 UTC m=+0.117134435 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:54:53 compute-0 podman[415037]: 2025-12-03 01:54:53.327670155 +0000 UTC m=+0.247530067 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:54:53 compute-0 ceph-mon[192821]: pgmap v1209: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 57 op/s
Dec 03 01:54:53 compute-0 nova_compute[351485]: 2025-12-03 01:54:53.588 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec 03 01:54:54 compute-0 sudo[414942]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:54:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:54:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:54:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:54:54 compute-0 sudo[415193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:54 compute-0 sudo[415193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:54 compute-0 sudo[415193]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:54 compute-0 sudo[415218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:54:54 compute-0 sudo[415218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:54 compute-0 sudo[415218]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:55 compute-0 sudo[415243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:55 compute-0 sudo[415243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:55 compute-0 sudo[415243]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:55 compute-0 sudo[415268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:54:55 compute-0 sudo[415268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:54:55 compute-0 ceph-mon[192821]: pgmap v1210: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec 03 01:54:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:54:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:54:55 compute-0 sudo[415268]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.914 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.915 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.916 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.916 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:54:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 164b08fd-e4a9-433f-b7b5-13a5862c0112 does not exist
Dec 03 01:54:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 779bcd1f-91c5-404d-87c2-149b5e0097de does not exist
Dec 03 01:54:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 83a6554e-738b-4f41-bf99-6d3f208ad510 does not exist
Dec 03 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:54:56 compute-0 sudo[415324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:56 compute-0 sudo[415324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:56 compute-0 sudo[415324]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:56 compute-0 sudo[415349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:54:56 compute-0 sudo[415349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:56 compute-0 sudo[415349]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec 03 01:54:56 compute-0 sudo[415374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:56 compute-0 sudo[415374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:56 compute-0 sudo[415374]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:56 compute-0 sudo[415399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:54:56 compute-0 sudo[415399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.04075719 +0000 UTC m=+0.104441214 container create 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.004068316 +0000 UTC m=+0.067752400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:54:57 compute-0 systemd[1]: Started libpod-conmon-42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a.scope.
Dec 03 01:54:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.196891345 +0000 UTC m=+0.260575359 container init 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.209178365 +0000 UTC m=+0.272862359 container start 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.213972982 +0000 UTC m=+0.277656976 container attach 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:54:57 compute-0 interesting_bell[415478]: 167 167
Dec 03 01:54:57 compute-0 systemd[1]: libpod-42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a.scope: Deactivated successfully.
Dec 03 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.223906004 +0000 UTC m=+0.287590018 container died 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 01:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5f040f5ea34c2cbe8772172a31c6e75ffb0f30d1c51ea565c0abc777b60e8aa-merged.mount: Deactivated successfully.
Dec 03 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.526260672 +0000 UTC m=+0.589944696 container remove 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:54:57 compute-0 ceph-mon[192821]: pgmap v1211: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec 03 01:54:57 compute-0 systemd[1]: libpod-conmon-42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a.scope: Deactivated successfully.
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.738 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.751 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.776 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.778 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.780 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.782 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.833 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.835 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.835 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.836 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.837 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:57 compute-0 podman[415500]: 2025-12-03 01:54:57.850607736 +0000 UTC m=+0.098628109 container create 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 01:54:57 compute-0 podman[415500]: 2025-12-03 01:54:57.821694542 +0000 UTC m=+0.069715005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:54:57 compute-0 systemd[1]: Started libpod-conmon-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope.
Dec 03 01:54:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:54:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:54:57 compute-0 podman[415500]: 2025-12-03 01:54:57.999516455 +0000 UTC m=+0.247536858 container init 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:54:58 compute-0 podman[415500]: 2025-12-03 01:54:58.02358628 +0000 UTC m=+0.271606683 container start 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 01:54:58 compute-0 podman[415500]: 2025-12-03 01:54:58.030340972 +0000 UTC m=+0.278361365 container attach 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 01:54:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec 03 01:54:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:54:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239002983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.340 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.445 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.446 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.447 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.593 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:58 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3239002983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.950 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.953 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4068MB free_disk=59.97224044799805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.953 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.954 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:58 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:58.991 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 01:54:58 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:58.993 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.002 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.073 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.075 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.075 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.127 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:59 compute-0 busy_moser[415516]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:54:59 compute-0 busy_moser[415516]: --> relative data size: 1.0
Dec 03 01:54:59 compute-0 busy_moser[415516]: --> All data devices are unavailable
Dec 03 01:54:59 compute-0 systemd[1]: libpod-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope: Deactivated successfully.
Dec 03 01:54:59 compute-0 systemd[1]: libpod-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope: Consumed 1.089s CPU time.
Dec 03 01:54:59 compute-0 podman[415500]: 2025-12-03 01:54:59.198019644 +0000 UTC m=+1.446040047 container died 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 01:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd-merged.mount: Deactivated successfully.
Dec 03 01:54:59 compute-0 podman[415500]: 2025-12-03 01:54:59.297919268 +0000 UTC m=+1.545939631 container remove 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:54:59 compute-0 systemd[1]: libpod-conmon-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope: Deactivated successfully.
Dec 03 01:54:59 compute-0 sudo[415399]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:59 compute-0 sudo[415599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:59 compute-0 sudo[415599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:59 compute-0 sudo[415599]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:54:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294191349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:54:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:59.620 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:54:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:54:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:59.622 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:54:59 compute-0 sudo[415624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:54:59 compute-0 sudo[415624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:59 compute-0 sudo[415624]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.638 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.655 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 01:54:59 compute-0 ceph-mon[192821]: pgmap v1212: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec 03 01:54:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2294191349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.726 351492 ERROR nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [req-41a9209d-63cd-4d67-a0fe-51df9a7d13db] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 107397d2-51bc-4a03-bce4-7cd69319cf05.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-41a9209d-63cd-4d67-a0fe-51df9a7d13db"}]}
Dec 03 01:54:59 compute-0 podman[158098]: time="2025-12-03T01:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.750 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 01:54:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.776 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 01:54:59 compute-0 sudo[415651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.777 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 01:54:59 compute-0 sudo[415651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:54:59 compute-0 sudo[415651]: pam_unix(sudo:session): session closed for user root
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.797 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 01:54:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8621 "" "Go-http-client/1.1"
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.830 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.872 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:54:59 compute-0 sudo[415676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:54:59 compute-0 sudo[415676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:55:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec 03 01:55:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:55:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857368022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.399 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.409 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.455 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updated inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 03 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.456 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 03 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.456 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.480 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.481 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.527555783 +0000 UTC m=+0.087272675 container create 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.491574919 +0000 UTC m=+0.051291861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:55:00 compute-0 systemd[1]: Started libpod-conmon-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope.
Dec 03 01:55:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.65250171 +0000 UTC m=+0.212218692 container init 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:55:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2857368022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.670130932 +0000 UTC m=+0.229847844 container start 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:55:00 compute-0 systemd[1]: libpod-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope: Deactivated successfully.
Dec 03 01:55:00 compute-0 hungry_einstein[415774]: 167 167
Dec 03 01:55:00 compute-0 conmon[415774]: conmon 112bb404c19eadfad88b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope/container/memory.events
Dec 03 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.683724019 +0000 UTC m=+0.243441001 container attach 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.684909503 +0000 UTC m=+0.244626405 container died 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-33ad719a465cd63d045b6a6340cb76dcec5585153630d9c410f5ecb056492da3-merged.mount: Deactivated successfully.
Dec 03 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.740720502 +0000 UTC m=+0.300437394 container remove 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:55:00 compute-0 systemd[1]: libpod-conmon-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope: Deactivated successfully.
Dec 03 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.008634809 +0000 UTC m=+0.097421045 container create f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:00.973269052 +0000 UTC m=+0.062055378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:55:01 compute-0 systemd[1]: Started libpod-conmon-f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4.scope.
Dec 03 01:55:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
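[Note] The four kernel lines above warn that these overlay mounts only support timestamps until 2038 (0x7fffffff): on XFS filesystems made without the bigtime feature, inode timestamps are stored as signed 32-bit epoch seconds. A quick check of what that hex limit means, in Python:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit value; XFS without the bigtime
    # feature stores timestamps as 32-bit epoch seconds, which is what the
    # kernel messages above refer to.
    limit = 0x7FFFFFFF
    print(hex(limit), datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 0x7fffffff 2038-01-19 03:14:07+00:00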
Dec 03 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.194667405 +0000 UTC m=+0.283453721 container init f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.235711093 +0000 UTC m=+0.324497319 container start f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.240720236 +0000 UTC m=+0.329506472 container attach f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.275 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.277 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.277 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.278 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:55:01 compute-0 ceph-mon[192821]: pgmap v1213: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
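[Note] The ceph-mon and ceph-mgr pgmap lines throughout this section all follow one fixed shape. A minimal sketch of pulling the counters out of such a line with a regex (the field names are mine, not a Ceph API):

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v1213: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, "
            "60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s")
    print(PGMAP.search(line).groupdict())
    # The trailing rd/wr/op rates are optional; pgmap lines without them still match.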
Dec 03 01:55:01 compute-0 anacron[59208]: Job `cron.monthly' started
Dec 03 01:55:01 compute-0 anacron[59208]: Job `cron.monthly' terminated
Dec 03 01:55:01 compute-0 anacron[59208]: Normal exit (3 jobs run)
Dec 03 01:55:02 compute-0 distracted_knuth[415812]: {
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:     "0": [
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:         {
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "devices": [
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "/dev/loop3"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             ],
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_name": "ceph_lv0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_size": "21470642176",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "name": "ceph_lv0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "tags": {
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cluster_name": "ceph",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.crush_device_class": "",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.encrypted": "0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osd_id": "0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.type": "block",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.vdo": "0"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             },
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "type": "block",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "vg_name": "ceph_vg0"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:         }
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:     ],
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:     "1": [
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:         {
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "devices": [
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "/dev/loop4"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             ],
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_name": "ceph_lv1",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_size": "21470642176",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "name": "ceph_lv1",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "tags": {
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cluster_name": "ceph",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.crush_device_class": "",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.encrypted": "0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osd_id": "1",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.type": "block",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.vdo": "0"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             },
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "type": "block",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "vg_name": "ceph_vg1"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:         }
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:     ],
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:     "2": [
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:         {
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "devices": [
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "/dev/loop5"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             ],
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_name": "ceph_lv2",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_size": "21470642176",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "name": "ceph_lv2",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "tags": {
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.cluster_name": "ceph",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.crush_device_class": "",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.encrypted": "0",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osd_id": "2",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.type": "block",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:                 "ceph.vdo": "0"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             },
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "type": "block",
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:             "vg_name": "ceph_vg2"
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:         }
Dec 03 01:55:02 compute-0 distracted_knuth[415812]:     ]
Dec 03 01:55:02 compute-0 distracted_knuth[415812]: }
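[Note] The JSON block that distracted_knuth just printed has the shape of ceph-volume lvm list --format json output: top-level keys are OSD ids, each mapping to a list of LV records. A sketch of walking it, assuming lvm_list_json holds the block above as a string:

    import json

    inventory = json.loads(lvm_list_json)  # assumed: the JSON block printed above
    for osd_id, lvs in sorted(inventory.items()):
        for lv in lvs:
            # "lv_tags" is the same data flattened to "k=v,k=v,..."; it can be
            # rebuilt with: dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], lv["devices"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 551e0f4a-... ['/dev/loop3'], and so on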
Dec 03 01:55:02 compute-0 systemd[1]: libpod-f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4.scope: Deactivated successfully.
Dec 03 01:55:02 compute-0 podman[415797]: 2025-12-03 01:55:02.20297101 +0000 UTC m=+1.291757296 container died f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:55:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 49 op/s
Dec 03 01:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2-merged.mount: Deactivated successfully.
Dec 03 01:55:02 compute-0 podman[415797]: 2025-12-03 01:55:02.397149178 +0000 UTC m=+1.485935424 container remove f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:55:02 compute-0 podman[415834]: 2025-12-03 01:55:02.403785477 +0000 UTC m=+0.148731036 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 01:55:02 compute-0 systemd[1]: libpod-conmon-f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4.scope: Deactivated successfully.
Dec 03 01:55:02 compute-0 podman[415835]: 2025-12-03 01:55:02.410400515 +0000 UTC m=+0.144296719 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 01:55:02 compute-0 sudo[415676]: pam_unix(sudo:session): session closed for user root
Dec 03 01:55:02 compute-0 podman[415826]: 2025-12-03 01:55:02.482126347 +0000 UTC m=+0.230605456 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
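[Note] The podman health_status events above embed each container's edpm-ansible definition as a config_data={...} blob. It is a Python dict literal (single quotes, bare True), not JSON, so ast.literal_eval parses it once the value is cut out of the line. A sketch on a hand-copied excerpt of the podman_exporter blob:

    import ast

    # Excerpt of the config_data value from the podman_exporter health_status
    # event above; the full blob parses the same way, being a Python literal.
    blob = ("{'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', "
            "'restart': 'always', 'recreate': True, 'ports': ['9882:9882']}")
    cfg = ast.literal_eval(blob)
    print(cfg["image"], cfg["recreate"])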
Dec 03 01:55:02 compute-0 sudo[415894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:55:02 compute-0 sudo[415894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:55:02 compute-0 sudo[415894]: pam_unix(sudo:session): session closed for user root
Dec 03 01:55:02 compute-0 nova_compute[351485]: 2025-12-03 01:55:02.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:02 compute-0 sudo[415919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:55:02 compute-0 sudo[415919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:55:02 compute-0 sudo[415919]: pam_unix(sudo:session): session closed for user root
Dec 03 01:55:02 compute-0 nova_compute[351485]: 2025-12-03 01:55:02.741 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:02 compute-0 sudo[415944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:55:02 compute-0 sudo[415944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:55:02 compute-0 sudo[415944]: pam_unix(sudo:session): session closed for user root
Dec 03 01:55:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:02 compute-0 sudo[415969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:55:02 compute-0 sudo[415969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
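[Note] This sudo line shows how cephadm drives the short-lived ceph containers seen above: the mgr connects as ceph-admin and runs the host's cephadm binary, which launches ceph-volume inside a podman container. A sketch replaying the logged invocation by hand; every path and flag below is copied from the line above, nothing invented:

    import json
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    cmd = [
        "sudo", "/bin/python3",
        f"/var/lib/ceph/{FSID}/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", FSID,
        "--", "raw", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    print(sorted(json.loads(out)))  # the OSD uuids printed by cool_blackburn below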
Dec 03 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.605 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.610386956 +0000 UTC m=+0.102485948 container create 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.616 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:03 compute-0 ovn_controller[89134]: 2025-12-03T01:55:03Z|00032|binding|INFO|Releasing lport 8c8945aa-32be-4ced-a7fe-2b9502f30008 from this chassis (sb_readonly=0)
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6400] manager: (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6410] device (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6468] manager: (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6513] device (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 03 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.56910515 +0000 UTC m=+0.061204182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6590] manager: (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6624] manager: (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6653] device (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 03 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6681] device (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 03 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:03 compute-0 ovn_controller[89134]: 2025-12-03T01:55:03Z|00033|binding|INFO|Releasing lport 8c8945aa-32be-4ced-a7fe-2b9502f30008 from this chassis (sb_readonly=0)
Dec 03 01:55:03 compute-0 systemd[1]: Started libpod-conmon-739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea.scope.
Dec 03 01:55:03 compute-0 ceph-mon[192821]: pgmap v1214: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 49 op/s
Dec 03 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.705 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.771910684 +0000 UTC m=+0.264009696 container init 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.790853513 +0000 UTC m=+0.282952515 container start 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.797163173 +0000 UTC m=+0.289262155 container attach 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec 03 01:55:03 compute-0 frosty_grothendieck[416050]: 167 167
Dec 03 01:55:03 compute-0 systemd[1]: libpod-739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea.scope: Deactivated successfully.
Dec 03 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.804612015 +0000 UTC m=+0.296711027 container died 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 01:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3378d32cb104e2c03402f697ec5bf9803e37ed33db991c0951f99fb72692d531-merged.mount: Deactivated successfully.
Dec 03 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.871696665 +0000 UTC m=+0.363795657 container remove 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:55:03 compute-0 systemd[1]: libpod-conmon-739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea.scope: Deactivated successfully.
Dec 03 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.194890355 +0000 UTC m=+0.117214007 container create e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.212 351492 DEBUG nova.compute.manager [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.213 351492 DEBUG nova.compute.manager [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing instance network info cache due to event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.214 351492 DEBUG oslo_concurrency.lockutils [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.215 351492 DEBUG oslo_concurrency.lockutils [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.216 351492 DEBUG nova.network.neutron [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 01:55:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 18 op/s
Dec 03 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.160231289 +0000 UTC m=+0.082555001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:55:04 compute-0 systemd[1]: Started libpod-conmon-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope.
Dec 03 01:55:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.40300567 +0000 UTC m=+0.325329352 container init e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.42900244 +0000 UTC m=+0.351326102 container start e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.440975411 +0000 UTC m=+0.363299083 container attach e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:55:04 compute-0 sshd-session[416067]: Invalid user testuser from 80.253.31.232 port 59972
Dec 03 01:55:04 compute-0 sshd-session[416067]: Received disconnect from 80.253.31.232 port 59972:11: Bye Bye [preauth]
Dec 03 01:55:04 compute-0 sshd-session[416067]: Disconnected from invalid user testuser 80.253.31.232 port 59972 [preauth]
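[Note] These three sshd-session lines are a failed password probe for a nonexistent user from 80.253.31.232. A small, hypothetical sketch for counting such preauth probes per source address once this journal is exported to a text file (the regex and filename are mine):

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+")

    hits = Counter()
    with open("journal.txt") as fh:  # hypothetical: journalctl output saved to a file
        for line in fh:
            m = INVALID.search(line)
            if m:
                hits[m["ip"]] += 1
    print(hits.most_common(5))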
Dec 03 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.561 351492 DEBUG nova.network.neutron [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated VIF entry in instance network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.563 351492 DEBUG nova.network.neutron [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:55:05 compute-0 cool_blackburn[416089]: {
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "osd_id": 2,
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "type": "bluestore"
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:     },
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "osd_id": 1,
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "type": "bluestore"
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:     },
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "osd_id": 0,
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:         "type": "bluestore"
Dec 03 01:55:05 compute-0 cool_blackburn[416089]:     }
Dec 03 01:55:05 compute-0 cool_blackburn[416089]: }
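[Note] This block from cool_blackburn is the output of the raw list --format json call issued at 01:55:02: keyed by OSD uuid, each record naming the dm device, the OSD id, and the bluestore type. A sketch cross-checking it against the lvm listing printed earlier; lvm_list_json and raw_list_json are assumed to hold the two JSON blocks:

    import json

    raw_osds = json.loads(raw_list_json)  # assumed: the cool_blackburn block above
    lvm_osds = json.loads(lvm_list_json)  # assumed: the distracted_knuth block earlier

    for uuid, rec in raw_osds.items():
        # Every entry should carry the cluster fsid and agree with the LVM
        # tags on the osd_id <-> osd_fsid pairing.
        assert rec["ceph_fsid"] == "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
        lv = lvm_osds[str(rec["osd_id"])][0]
        assert lv["tags"]["ceph.osd_fsid"] == uuid
        print(rec["osd_id"], rec["device"], rec["type"])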
Dec 03 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.584 351492 DEBUG oslo_concurrency.lockutils [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:55:05 compute-0 systemd[1]: libpod-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope: Deactivated successfully.
Dec 03 01:55:05 compute-0 podman[416074]: 2025-12-03 01:55:05.622084185 +0000 UTC m=+1.544407857 container died e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:55:05 compute-0 systemd[1]: libpod-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope: Consumed 1.188s CPU time.
Dec 03 01:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf-merged.mount: Deactivated successfully.
Dec 03 01:55:05 compute-0 ceph-mon[192821]: pgmap v1215: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 18 op/s
Dec 03 01:55:05 compute-0 podman[416074]: 2025-12-03 01:55:05.728315919 +0000 UTC m=+1.650639541 container remove e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 01:55:05 compute-0 systemd[1]: libpod-conmon-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope: Deactivated successfully.
Dec 03 01:55:05 compute-0 sudo[415969]: pam_unix(sudo:session): session closed for user root
Dec 03 01:55:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:55:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:55:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:55:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:55:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f2561ab9-76ef-41cc-a767-7a30e9957b55 does not exist
Dec 03 01:55:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 460d5439-d1a0-44c2-a16e-1753c0db1142 does not exist
Dec 03 01:55:05 compute-0 sudo[416133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:55:05 compute-0 sudo[416133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:55:05 compute-0 sudo[416133]: pam_unix(sudo:session): session closed for user root
Dec 03 01:55:06 compute-0 sudo[416158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:55:06 compute-0 sudo[416158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:55:06 compute-0 sudo[416158]: pam_unix(sudo:session): session closed for user root
Dec 03 01:55:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 op/s
Dec 03 01:55:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:55:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:55:07 compute-0 nova_compute[351485]: 2025-12-03 01:55:07.744 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:07 compute-0 ceph-mon[192821]: pgmap v1216: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 op/s
Dec 03 01:55:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:07.996 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:55:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:08 compute-0 nova_compute[351485]: 2025-12-03 01:55:08.611 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:08 compute-0 podman[416183]: 2025-12-03 01:55:08.909468771 +0000 UTC m=+0.152980826 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
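Health events like the one above come from podman's healthcheck machinery, which runs the configured test command ('/openstack/healthcheck ipmi') inside the container and records the resulting health_status. The same check can be triggered by hand; a minimal sketch, assuming the container name from the log and that it is run on the host with sufficient privileges:

# Trigger the same kind of health check that produced the
# "health_status=healthy" events above. 'podman healthcheck run' executes
# the container's configured healthcheck test once and exits 0 if healthy.
import subprocess

result = subprocess.run(
    ['podman', 'healthcheck', 'run', 'ceilometer_agent_ipmi'],
    capture_output=True, text=True)
# Exit status 0 means healthy; non-zero would feed the failing streak.
print('healthy' if result.returncode == 0 else result.stderr.strip())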
Dec 03 01:55:09 compute-0 ceph-mon[192821]: pgmap v1217: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:11 compute-0 ceph-mon[192821]: pgmap v1218: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:11 compute-0 podman[416203]: 2025-12-03 01:55:11.921457457 +0000 UTC m=+0.176744063 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:55:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:12 compute-0 nova_compute[351485]: 2025-12-03 01:55:12.746 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:13 compute-0 nova_compute[351485]: 2025-12-03 01:55:13.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:13 compute-0 ceph-mon[192821]: pgmap v1219: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:15 compute-0 podman[416224]: 2025-12-03 01:55:15.871715473 +0000 UTC m=+0.101274735 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:55:15 compute-0 podman[416223]: 2025-12-03 01:55:15.879461484 +0000 UTC m=+0.117740773 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 01:55:15 compute-0 ceph-mon[192821]: pgmap v1220: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:15 compute-0 podman[416222]: 2025-12-03 01:55:15.908016127 +0000 UTC m=+0.154725216 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 01:55:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:17 compute-0 nova_compute[351485]: 2025-12-03 01:55:17.749 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:17 compute-0 podman[416287]: 2025-12-03 01:55:17.868009114 +0000 UTC m=+0.115822557 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:55:17 compute-0 ceph-mon[192821]: pgmap v1221: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.915512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917915634, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1328, "num_deletes": 251, "total_data_size": 1962161, "memory_usage": 1985808, "flush_reason": "Manual Compaction"}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917930327, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1920085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24202, "largest_seqno": 25529, "table_properties": {"data_size": 1913769, "index_size": 3519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13422, "raw_average_key_size": 20, "raw_value_size": 1900981, "raw_average_value_size": 2841, "num_data_blocks": 158, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726798, "oldest_key_time": 1764726798, "file_creation_time": 1764726917, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14910 microseconds, and 6887 cpu microseconds.
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.930419) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1920085 bytes OK
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.930440) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.933976) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.933991) EVENT_LOG_v1 {"time_micros": 1764726917933987, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.934009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1956178, prev total WAL file size 1956178, number of live WAL files 2.
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.935176) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1875KB)], [56(7162KB)]
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917935249, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9254376, "oldest_snapshot_seqno": -1}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4615 keys, 7483625 bytes, temperature: kUnknown
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917991934, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7483625, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7452511, "index_size": 18460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115610, "raw_average_key_size": 25, "raw_value_size": 7368578, "raw_average_value_size": 1596, "num_data_blocks": 765, "num_entries": 4615, "num_filter_entries": 4615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726917, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.992172) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7483625 bytes
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.994323) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.0 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.7) write-amplify(3.9) OK, records in: 5133, records dropped: 518 output_compression: NoCompression
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.994343) EVENT_LOG_v1 {"time_micros": 1764726917994334, "job": 30, "event": "compaction_finished", "compaction_time_micros": 56759, "compaction_time_cpu_micros": 26373, "output_level": 6, "num_output_files": 1, "total_output_size": 7483625, "num_input_records": 5133, "num_output_records": 4615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917994925, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917996552, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.935052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
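Each rocksdb EVENT_LOG_v1 line above embeds a JSON document after the marker, so flush and compaction timings can be pulled straight out of the journal. A small sketch of that idea, using one of the compaction_finished lines as sample input:

# Mine the rocksdb EVENT_LOG_v1 entries shown above: everything after the
# "EVENT_LOG_v1 " marker is a JSON document describing the event.
import json
import re

def rocksdb_events(lines):
    for line in lines:
        m = re.search(r'EVENT_LOG_v1 (\{.*\})', line)
        if m:
            yield json.loads(m.group(1))

sample = ('Dec 03 01:55:17 compute-0 ceph-mon[192821]: rocksdb: '
          'EVENT_LOG_v1 {"time_micros": 1764726917994334, "job": 30, '
          '"event": "compaction_finished", "compaction_time_micros": 56759}')
for ev in rocksdb_events([sample]):
    if ev['event'] == 'compaction_finished':
        print(ev['job'], ev['compaction_time_micros'], 'us')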
Dec 03 01:55:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:18 compute-0 nova_compute[351485]: 2025-12-03 01:55:18.618 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:18 compute-0 ceph-mon[192821]: pgmap v1222: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.503 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.505 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.514 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
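The manager warned earlier that the pollsters in [pollsters] outnumber the worker threads ([1] in this run), so the registrations above queue on the ThreadPoolExecutor and the polling cycle stretches out. The following is not ceilometer code, just a stdlib illustration of that queueing effect with hypothetical pollster names:

# Minimal illustration of the situation the manager warns about: more
# tasks than executor workers, so polls run serially and the cycle is
# longer than any single poll.
import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)          # stand-in for one pollster's work
    return name

pollsters = ['memory.usage', 'network.outgoing.packets',
             'network.incoming.bytes.delta']
with ThreadPoolExecutor(max_workers=1) as executor:   # [1] thread, as logged
    start = time.monotonic()
    for result in executor.map(poll, pollsters):
        print(result)
    # With one worker the three polls take ~0.3 s total, not ~0.1 s.
    print(f'cycle took {time.monotonic() - start:.2f}s')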
Dec 03 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.858 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 03 01:55:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.510 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1848 Content-Type: application/json Date: Wed, 03 Dec 2025 01:55:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6bc6a690-07c9-4ed7-b8a7-bf9b31bd76e4 x-openstack-request-id: req-6bc6a690-07c9-4ed7-b8a7-bf9b31bd76e4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.510 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "9182286b-5a08-4961-b4bb-c0e2f05746f7", "name": "test_0", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T01:54:29Z", "updated": "2025-12-03T01:54:47Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8f:a6:32"}, {"version": 4, "addr": "192.168.122.241", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8f:a6:32"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T01:54:47.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.510 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7 used request id req-6bc6a690-07c9-4ed7-b8a7-bf9b31bd76e4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
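The REQ/RESP pair above is novaclient's debug rendering of a plain GET against the compute API. A hedged equivalent with requests: the endpoint, server UUID, microversion header, and CA bundle path are taken from the log, while TOKEN is a placeholder (only a SHA256 digest of the real token is logged):

# Sketch of the instance-metadata lookup logged above, using 'requests'
# instead of novaclient.
import requests

NOVA = 'https://nova-internal.openstack.svc:8774'
SERVER = '9182286b-5a08-4961-b4bb-c0e2f05746f7'
TOKEN = '<keystone-token>'  # placeholder; not recoverable from the log

resp = requests.get(
    f'{NOVA}/v2.1/servers/{SERVER}',
    headers={
        'Accept': 'application/json',
        'X-Auth-Token': TOKEN,
        'X-OpenStack-Nova-API-Version': '2.1',
    },
    verify='/etc/pki/tls/certs/ca-bundle.trust.crt',  # assumed CA bundle
)
server = resp.json()['server']
print(server['name'], server['status'], server['OS-EXT-SRV-ATTR:host'])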
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T01:55:20.512958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.556 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 33.30859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T01:55:20.558585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.564 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 9182286b-5a08-4961-b4bb-c0e2f05746f7 / tapd2a50b9b-c2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.564 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T01:55:20.566476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.567 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T01:55:20.568448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.570 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.570 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T01:55:20.570008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T01:55:20.571650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.573 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T01:55:20.573156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.600 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.601 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.601 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
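The three `_stats_to_sample` lines above are one sample per disk device attached to instance 9182286b-5a08-4961-b4bb-c0e2f05746f7: two 1073741824-byte (1 GiB) devices and one 485376-byte device, the small one plausibly a config drive. A helper that pulls such (uuid, meter, volume) triples out of journal lines; the regex targets only the message shape visible in this log, nothing more general:

```python
import re

# Matches "<uuid>/<meter> volume: <n>" as printed by _stats_to_sample.
SAMPLE_RE = re.compile(
    r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
)

def parse_samples(lines):
    for line in lines:
        if m := SAMPLE_RE.search(line):
            yield m["uuid"], m["meter"], int(m["volume"])

demo = [
    "... 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 ...",
    "... 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 ...",
]
for uuid, meter, volume in parse_samples(demo):
    print(meter, volume)
```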
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T01:55:20.603734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.604 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.604 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
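The two lines above explain why the `*.rate` meters never finish on this host: LibvirtInspector provides no precomputed rate statistics, so the pollster raises PollsterPermanentError and the manager stops offering the instance to that pollster at all. A sketch of the blacklisting behaviour those messages imply; the class and method names here are illustrative, not ceilometer's actual internals:

```python
class PollsterPermanentError(Exception):
    # Carries the resources that can never yield data for this pollster.
    def __init__(self, resources):
        super().__init__(resources)
        self.fail_res_list = resources

class RateBytesPollster:
    def get_samples(self, resources):
        # Stand-in for a pollster whose inspector exposes no rate data.
        raise PollsterPermanentError(resources)

def poll_once(pollster, resources, blacklist):
    candidates = [r for r in resources if r not in blacklist]
    if not candidates:
        return []
    try:
        return list(pollster.get_samples(candidates))
    except PollsterPermanentError as err:
        blacklist.update(err.fail_res_list)  # "...anymore!": skip these forever
        return []

blacklist: set = set()
poll_once(RateBytesPollster(), ["test_0"], blacklist)
print(blacklist)  # {'test_0'}: later cycles never offer the instance again
```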
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.606 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.607 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T01:55:20.607414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.700 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 19901952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.701 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 1077248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.702 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 55470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.705 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.705 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T01:55:20.704997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1553742100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 104971917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 23637421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T01:55:20.706986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.711 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.712 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.712 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 50 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T01:55:20.711135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.715 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T01:55:20.716568) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
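The power.state sample reports volume 1 for the instance. Assuming the meter mirrors the Nova power_state enumeration (an assumption; the log itself does not say), 1 is RUNNING:

```python
# Hedged mapping of power.state volumes to labels, following Nova's
# power_state values; treat the table as an assumption, not a ceilometer API.
POWER_STATE = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATE.get(1, "UNKNOWN"))  # -> RUNNING
```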
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T01:55:20.718140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 17526784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T01:55:20.720254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 5054541085 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T01:55:20.722070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.724 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T01:55:20.723896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.725 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.726 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T01:55:20.730291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 30980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
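The cpu meter is cumulative guest CPU time in nanoseconds, so volume 30980000000 is about 30.98 s of CPU time consumed since the instance started. A utilisation percentage needs two successive readings plus the wall-clock interval and vCPU count; a small helper to that effect (illustrative, not ceilometer code):

```python
def cpu_util_percent(prev_ns: int, cur_ns: int, wall_seconds: float, vcpus: int = 1) -> float:
    # Fraction of available CPU time used between two cumulative cpu samples.
    delta_s = (cur_ns - prev_ns) / 1e9
    return 100.0 * delta_s / (wall_seconds * vcpus)

# e.g. a hypothetical later sample of 31280000000 ns taken 300 s after this one:
print(cpu_util_percent(30_980_000_000, 31_280_000_000, 300))  # ~0.1 (%)
```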
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.735 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.735 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.735 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T01:55:20.731485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T01:55:20.732506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T01:55:20.733476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T01:55:20.734770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T01:55:20.736672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T01:55:20.737925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.740 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T01:55:20.740125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.744 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.744 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T01:55:20.743725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
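[annotation] Each cycle above runs discovery, checks whether the source needs coordination (the "hashrings are the following [None]" messages mean none is configured here), polls, and closes with one "Finished processing pollster [...]" per meter. When coordination is enabled, agents partition resources over a consistent hash ring so each instance is polled by exactly one agent; ceilometer delegates that to the tooz library. A toy ring for illustration only, standard library, not ceilometer's code:

    # Illustrative consistent-hash partitioning; agent names are made up,
    # the resource id is the one polled in this log.
    import bisect, hashlib

    def _h(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, agents, vnodes=64):
            self._ring = sorted(
                (_h(f"{a}-{i}"), a) for a in agents for i in range(vnodes))
            self._keys = [k for k, _ in self._ring]

        def owner(self, resource_id: str) -> str:
            idx = bisect.bisect(self._keys, _h(resource_id)) % len(self._keys)
            return self._ring[idx][1]

    ring = HashRing(["agent-0", "agent-1", "agent-2"])
    print(ring.owner("9182286b-5a08-4961-b4bb-c0e2f05746f7"))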
Dec 03 01:55:21 compute-0 ceph-mon[192821]: pgmap v1223: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:21 compute-0 ovn_controller[89134]: 2025-12-03T01:55:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:a6:32 192.168.0.5
Dec 03 01:55:21 compute-0 ovn_controller[89134]: 2025-12-03T01:55:21Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:a6:32 192.168.0.5
Dec 03 01:55:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 54 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 545 KiB/s wr, 15 op/s
Dec 03 01:55:22 compute-0 nova_compute[351485]: 2025-12-03 01:55:22.753 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:23 compute-0 ceph-mon[192821]: pgmap v1224: 321 pgs: 321 active+clean; 54 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 545 KiB/s wr, 15 op/s
Dec 03 01:55:23 compute-0 nova_compute[351485]: 2025-12-03 01:55:23.620 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 61 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 945 KiB/s wr, 25 op/s
Dec 03 01:55:25 compute-0 ceph-mon[192821]: pgmap v1225: 321 pgs: 321 active+clean; 61 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 945 KiB/s wr, 25 op/s
Dec 03 01:55:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:27 compute-0 ceph-mon[192821]: pgmap v1226: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:27 compute-0 nova_compute[351485]: 2025-12-03 01:55:27.756 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:55:28
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'images', 'cephfs.cephfs.data', '.mgr']
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:55:28 compute-0 nova_compute[351485]: 2025-12-03 01:55:28.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:55:29 compute-0 ceph-mon[192821]: pgmap v1227: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:29 compute-0 podman[158098]: time="2025-12-03T01:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:55:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:55:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8614 "" "Go-http-client/1.1"
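[annotation] The two GET lines are the podman system service answering libpod REST calls over its unix socket; the prometheus-podman-exporter container mounts /run/podman/podman.sock for exactly this (visible in its config_data further down). The same endpoint can be queried by hand; a sketch with only the standard library, socket path and API version taken from this log:

    # Query the libpod REST API over the podman unix socket, stdlib only.
    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read(200))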
Dec 03 01:55:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:31 compute-0 ceph-mon[192821]: pgmap v1228: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
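[annotation] These ERRORs are expected on a compute-only node: the exporter probes ovn-northd and the OVS DB server through their ovs-appctl control sockets, but neither daemon runs here (ovn-northd lives on the control plane), so no *.ctl files exist; the dpif-netdev/* calls then fail because no userspace (PMD) datapath is configured. Control sockets follow the usual <rundir>/<daemon>.<pid>.ctl naming; a sketch of that lookup, with run directories as assumed defaults rather than values from this host:

    # Sketch: locate an ovs-appctl control socket the way the exporter's
    # appctl.go does conceptually. Run directories are typical defaults.
    import glob, os

    def find_ctl(daemon, rundirs=("/var/run/ovn", "/run/openvswitch")):
        for d in rundirs:
            hits = glob.glob(os.path.join(d, f"{daemon}.*.ctl"))
            if hits:
                return hits[0]
        return None  # -> "no control socket files found for <daemon>"

    print(find_ctl("ovn-northd"))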
Dec 03 01:55:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:32 compute-0 nova_compute[351485]: 2025-12-03 01:55:32.762 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:32 compute-0 podman[416315]: 2025-12-03 01:55:32.869640408 +0000 UTC m=+0.114207393 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 01:55:32 compute-0 podman[416317]: 2025-12-03 01:55:32.907284289 +0000 UTC m=+0.136042734 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 01:55:32 compute-0 podman[416316]: 2025-12-03 01:55:32.912303932 +0000 UTC m=+0.146710687 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 01:55:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:33 compute-0 ceph-mon[192821]: pgmap v1229: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 03 01:55:33 compute-0 ovn_controller[89134]: 2025-12-03T01:55:33Z|00034|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 03 01:55:33 compute-0 nova_compute[351485]: 2025-12-03 01:55:33.630 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 975 KiB/s wr, 41 op/s
Dec 03 01:55:35 compute-0 ceph-mon[192821]: pgmap v1230: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 975 KiB/s wr, 41 op/s
Dec 03 01:55:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 575 KiB/s wr, 31 op/s
Dec 03 01:55:37 compute-0 ceph-mon[192821]: pgmap v1231: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 575 KiB/s wr, 31 op/s
Dec 03 01:55:37 compute-0 nova_compute[351485]: 2025-12-03 01:55:37.762 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005514586182197044 of space, bias 1.0, pg target 0.1654375854659113 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
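[annotation] Each pg_autoscaler line applies pg_target = usage_fraction * bias * (target PGs per OSD * number of OSDs), then quantizes to a power of two subject to per-pool floors (hence most pools stay at 32, while 'cephfs.cephfs.meta' is proposed to shrink 32 -> 16). The logged numbers are consistent with 3 OSDs and the default mon_target_pg_per_osd of 100, an inference from the arithmetic rather than anything stated in the log:

    # Reproduce the raw pg targets above. 3 OSDs and
    # mon_target_pg_per_osd=100 are inferred, not logged.
    def pg_target(usage_fraction, bias, osds=3, target_per_osd=100):
        return usage_fraction * bias * osds * target_per_osd

    for pool, frac, bias in [
        (".mgr", 7.185749983720779e-06, 1.0),
        ("vms", 0.0005514586182197044, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, pg_target(frac, bias))
    # -> 0.0021557..., 0.1654375..., 0.0006104..., matching the log.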
Dec 03 01:55:38 compute-0 nova_compute[351485]: 2025-12-03 01:55:38.634 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:39 compute-0 ceph-mon[192821]: pgmap v1232: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:55:39 compute-0 podman[416372]: 2025-12-03 01:55:39.891842047 +0000 UTC m=+0.136052324 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec 03 01:55:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:55:41 compute-0 ceph-mon[192821]: pgmap v1233: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:55:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:42 compute-0 nova_compute[351485]: 2025-12-03 01:55:42.766 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:42 compute-0 podman[416392]: 2025-12-03 01:55:42.886142518 +0000 UTC m=+0.128499559 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 01:55:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:43 compute-0 ceph-mon[192821]: pgmap v1234: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:43 compute-0 nova_compute[351485]: 2025-12-03 01:55:43.638 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:45 compute-0 ceph-mon[192821]: pgmap v1235: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:46 compute-0 podman[416414]: 2025-12-03 01:55:46.894217096 +0000 UTC m=+0.137057563 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-minimal-container)
Dec 03 01:55:46 compute-0 podman[416415]: 2025-12-03 01:55:46.897221622 +0000 UTC m=+0.137518546 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:55:46 compute-0 podman[416413]: 2025-12-03 01:55:46.958805345 +0000 UTC m=+0.206318825 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:55:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:55:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894116698' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:55:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:55:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894116698' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:55:47 compute-0 ceph-mon[192821]: pgmap v1236: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2894116698' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:55:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2894116698' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:55:47 compute-0 nova_compute[351485]: 2025-12-03 01:55:47.771 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:48 compute-0 sshd-session[416411]: Invalid user guest from 45.78.219.140 port 48290
Dec 03 01:55:48 compute-0 podman[416478]: 2025-12-03 01:55:48.465207349 +0000 UTC m=+0.160932283 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
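[annotation] node_exporter's systemd collector is scoped with --collector.systemd.unit-include, visible in the config_data above: only unit names matching the regex are exported, and most other collectors are disabled outright. The filter can be checked directly; unit names below are illustrative samples, the pattern is the one from this config:

    # Apply the unit-include filter from the node_exporter flags above.
    import re

    pat = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    for unit in ('ovsdb-server.service', 'virtqemud.service',
                 'sshd.service', 'edpm_nova_compute.service'):
        print(unit, bool(pat.fullmatch(unit)))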
Dec 03 01:55:48 compute-0 sshd-session[416411]: Received disconnect from 45.78.219.140 port 48290:11: Bye Bye [preauth]
Dec 03 01:55:48 compute-0 sshd-session[416411]: Disconnected from invalid user guest 45.78.219.140 port 48290 [preauth]
Dec 03 01:55:48 compute-0 nova_compute[351485]: 2025-12-03 01:55:48.642 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:49.685 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 01:55:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:49.686 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 01:55:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:49.688 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
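[annotation] The metadata agent acknowledges the new nb_cfg (5) by writing neutron:ovn-metadata-sb-cfg back into its Chassis_Private row; the DbSetCommand in the log is ovsdbapp's transaction building that write. A minimal sketch of the equivalent call, assuming sb_idl is an already-connected ovsdbapp API object for the OVN southbound DB (the record UUID is the one from the log):

    # Sketch: what the logged DbSetCommand amounts to. sb_idl is assumed
    # to be a connected ovsdbapp backend; not a drop-in agent snippet.
    chassis = 'eda9fd7d-f2b1-4121-b9ac-fc31f8426272'  # record from the log
    sb_idl.db_set(
        'Chassis_Private', chassis,
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),
    ).execute(check_error=True)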
Dec 03 01:55:49 compute-0 nova_compute[351485]: 2025-12-03 01:55:49.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:49 compute-0 ceph-mon[192821]: pgmap v1237: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:51 compute-0 ceph-mon[192821]: pgmap v1238: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:52 compute-0 nova_compute[351485]: 2025-12-03 01:55:52.775 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:53 compute-0 nova_compute[351485]: 2025-12-03 01:55:53.647 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:53 compute-0 ceph-mon[192821]: pgmap v1239: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:55:54 compute-0 sshd-session[416503]: Invalid user autrede from 103.146.202.174 port 36578
Dec 03 01:55:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 03 01:55:54 compute-0 sshd-session[416503]: Received disconnect from 103.146.202.174 port 36578:11: Bye Bye [preauth]
Dec 03 01:55:54 compute-0 sshd-session[416503]: Disconnected from invalid user autrede 103.146.202.174 port 36578 [preauth]
Dec 03 01:55:55 compute-0 nova_compute[351485]: 2025-12-03 01:55:55.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:55 compute-0 nova_compute[351485]: 2025-12-03 01:55:55.611 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:55 compute-0 ceph-mon[192821]: pgmap v1240: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 03 01:55:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.527 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.528 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.552 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.620 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
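[annotation] The Acquiring/acquired/released triples are oslo.concurrency's standard lock tracing; nova serializes all resource-tracker work under the single lock name "compute_resources", which is why the instance claim below waits on the same lock the periodic audit takes. The primitive itself is two lines; lock name taken from the log:

    # Sketch of the oslo.concurrency pattern behind those log lines.
    from oslo_concurrency import lockutils

    with lockutils.lock('compute_resources'):
        pass  # critical section, e.g. claiming resources for an instance

    # Or as a decorator, the way nova wraps ResourceTracker methods:
    @lockutils.synchronized('compute_resources')
    def instance_claim():
        pass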
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.624 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.708 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.709 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.725 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.725 351492 INFO nova.compute.claims [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Claim successful on node compute-0.ctlplane.example.com
Dec 03 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.927 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:55:56 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 01:55:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:55:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115611715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.166 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.266 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.267 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:55:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:55:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773167511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.505 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
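[annotation] Both the periodic resource audit and the instance claim shell out to ceph df --format=json to read pool capacity; the mon audit lines above show the same request arriving as a df mon_command from client.openstack. Per the log, the subprocess wrapper is oslo.concurrency's processutils; a sketch, with the command line copied from the log and the 'total_bytes' key being the usual ceph df JSON layout rather than something shown here:

    # Sketch: the "Running cmd (subprocess)" / "returned: 0" pair above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])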
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.522 351492 DEBUG nova.compute.provider_tree [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.543 351492 DEBUG nova.scheduler.client.report [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.575 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.577 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.643 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.644 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.674 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.720 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.781 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.830 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.832 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.833 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Creating image(s)
Dec 03 01:55:57 compute-0 ceph-mon[192821]: pgmap v1241: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:55:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4115611715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:55:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3773167511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.882 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.938 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:55:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.001 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.009 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.146 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.146 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.147 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.148 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.195 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.205 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 52862152-12c7-4236-89c3-67750ecbed7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:55:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.324 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.325 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4017MB free_disk=59.9552001953125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.326 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.326 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.484 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.650 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.680 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 52862152-12c7-4236-89c3-67750ecbed7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.829 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 01:55:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:55:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797809507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.069 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.087 351492 DEBUG nova.objects.instance [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 52862152-12c7-4236-89c3-67750ecbed7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.097 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.152 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.217 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.230 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.264 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.292 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.293 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.326 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.327 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.328 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.329 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.389 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.402 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:55:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:55:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:55:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:59.622 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:55:59 compute-0 podman[158098]: time="2025-12-03T01:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:55:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:55:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8617 "" "Go-http-client/1.1"
Dec 03 01:55:59 compute-0 ceph-mon[192821]: pgmap v1242: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:55:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1797809507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.089 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:56:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 105 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.328 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.329 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.329 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.350 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.352 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Ensure instance console log exists: /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.352 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.353 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.354 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.359 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.429 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Successfully updated port: 521d2181-8f17-4f4d-a3a6-98de1e17b734 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.450 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.450 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.451 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.535 351492 DEBUG nova.compute.manager [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.536 351492 DEBUG nova.compute.manager [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing instance network info cache due to event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.536 351492 DEBUG oslo_concurrency.lockutils [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.572 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.573 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.573 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.573 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.671 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 01:56:00 compute-0 ceph-mon[192821]: pgmap v1243: 321 pgs: 321 active+clean; 105 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 03 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:56:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:56:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.220 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.245 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.246 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance network_info: |[{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.246 351492 DEBUG oslo_concurrency.lockutils [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.247 351492 DEBUG nova.network.neutron [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.252 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start _get_guest_xml network_info=[{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 01:56:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 110 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.267 351492 WARNING nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.282 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.283 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.290 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.290 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.291 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.291 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.292 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.292 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.294 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.294 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.294 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.295 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.298 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.326 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.344 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.345 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.347 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.781 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 01:56:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955108270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.814 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.816 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:56:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 01:56:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112121715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:56:03 compute-0 ceph-mon[192821]: pgmap v1244: 321 pgs: 321 active+clean; 110 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Dec 03 01:56:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/955108270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.340 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.392 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.401 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.655 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:03 compute-0 podman[416953]: 2025-12-03 01:56:03.844141222 +0000 UTC m=+0.090936890 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 03 01:56:03 compute-0 podman[416955]: 2025-12-03 01:56:03.858660355 +0000 UTC m=+0.090939530 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:56:03 compute-0 podman[416954]: 2025-12-03 01:56:03.867408564 +0000 UTC m=+0.121224072 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
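The three health_status=healthy events above are podman executing each container's configured healthcheck command (the mounted /openstack/healthcheck script). The same check can be triggered by hand; a short sketch, with the container name taken from the first event:

```python
# Run a container's configured healthcheck on demand; exit code 0 is
# what podman records as health_status=healthy.
import subprocess

name = 'ovn_metadata_agent'
rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
print(f'{name}: {"healthy" if rc == 0 else "unhealthy"}')
```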
Dec 03 01:56:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 01:56:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149362390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.923 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.924 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',id=2,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-eunmeq81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:55:57Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 03 01:56:03 compute-0 nova_compute[351485]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=52862152-12c7-4236-89c3-67750ecbed7a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.925 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.926 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.928 351492 DEBUG nova.objects.instance [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52862152-12c7-4236-89c3-67750ecbed7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.949 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] End _get_guest_xml xml=<domain type="kvm">
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <uuid>52862152-12c7-4236-89c3-67750ecbed7a</uuid>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <name>instance-00000002</name>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <memory>524288</memory>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <metadata>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <nova:name>vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l</nova:name>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 01:56:02</nova:creationTime>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <nova:flavor name="m1.small">
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:memory>512</nova:memory>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:ephemeral>1</nova:ephemeral>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <nova:port uuid="521d2181-8f17-4f4d-a3a6-98de1e17b734">
Dec 03 01:56:03 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="192.168.0.178" ipVersion="4"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   </metadata>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <system>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <entry name="serial">52862152-12c7-4236-89c3-67750ecbed7a</entry>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <entry name="uuid">52862152-12c7-4236-89c3-67750ecbed7a</entry>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </system>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <os>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   </os>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <features>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <apic/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   </features>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   </clock>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   </cpu>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   <devices>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/52862152-12c7-4236-89c3-67750ecbed7a_disk">
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </source>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </auth>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0">
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </source>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </auth>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <target dev="vdb" bus="virtio"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/52862152-12c7-4236-89c3-67750ecbed7a_disk.config">
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </source>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 01:56:03 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       </auth>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </disk>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:8e:09:91"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <target dev="tap521d2181-8f"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </interface>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/console.log" append="off"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </serial>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <video>
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </video>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </rng>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 01:56:03 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 01:56:03 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 01:56:03 compute-0 nova_compute[351485]:   </devices>
Dec 03 01:56:03 compute-0 nova_compute[351485]: </domain>
Dec 03 01:56:03 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
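The domain definition rendered above is plain libvirt XML, so it can be inspected with the standard library. A sketch that lists the RBD-backed disks from a copy of the dump saved as guest.xml (the filename is an assumption):

```python
# Extract the RBD disk mappings from the nova-generated domain XML above.
import xml.etree.ElementTree as ET

root = ET.parse('guest.xml').getroot()  # the <domain> dump saved to a file
for disk in root.findall('./devices/disk'):
    src, tgt = disk.find('source'), disk.find('target')
    if src is not None and src.get('protocol') == 'rbd':
        print(tgt.get('dev'), '->', src.get('name'))
# Expected output for the dump above:
# vda -> vms/52862152-12c7-4236-89c3-67750ecbed7a_disk
# vdb -> vms/52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0
# sda -> vms/52862152-12c7-4236-89c3-67750ecbed7a_disk.config
```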
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.950 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Preparing to wait for external event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.950 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.951 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.951 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
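The Acquiring/acquired/released triple above is oslo.concurrency's named-lock helper guarding the per-instance event registry. A minimal sketch of the same decorator, reusing the lock name from the log; the empty body is a placeholder:

```python
# In-process named lock, as used by nova's _create_or_get_event above;
# oslo.concurrency emits the acquire/release DEBUG lines automatically.
from oslo_concurrency import lockutils


@lockutils.synchronized('52862152-12c7-4236-89c3-67750ecbed7a-events')
def _create_or_get_event():
    return {}  # placeholder body; the lock is the point here


_create_or_get_event()
```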
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.952 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',id=2,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-eunmeq81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:55:57Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 03 01:56:03 compute-0 nova_compute[351485]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=52862152-12c7-4236-89c3-67750ecbed7a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.952 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.953 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.953 351492 DEBUG os_vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
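"Plugging vif VIFOpenVSwitch(...)" marks the hand-off from nova to the os-vif library. A sketch of that public entry point with field values copied from the log; actually plugging requires root privileges and a running Open vSwitch, so treat it as illustrative rather than a runnable unit test:

```python
# Build the os-vif objects logged above and hand them to os_vif.plug().
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()
net = network.Network(id='7ba11691-2711-476c-9191-cb6dfd0efa7d',
                      bridge='br-int')
ovs_vif = vif.VIFOpenVSwitch(
    id='521d2181-8f17-4f4d-a3a6-98de1e17b734',
    address='fa:16:3e:8e:09:91',
    vif_name='tap521d2181-8f',
    bridge_name='br-int',
    network=net)
inst = instance_info.InstanceInfo(
    uuid='52862152-12c7-4236-89c3-67750ecbed7a',
    name='instance-00000002')
os_vif.plug(ovs_vif, inst)  # logs "Successfully plugged vif ..." on success
```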
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.954 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.954 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.955 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.959 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.959 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap521d2181-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.959 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap521d2181-8f, col_values=(('external_ids', {'iface-id': '521d2181-8f17-4f4d-a3a6-98de1e17b734', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:09:91', 'vm-uuid': '52862152-12c7-4236-89c3-67750ecbed7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
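The AddPortCommand/DbSetCommand transaction above is ovsdbapp talking to the local ovsdb-server. A sketch of the same two-command transaction through ovsdbapp's public API; the socket path is an assumption (the conventional location), the values are copied from the log, and write access to the database is required:

```python
# Add the tap port to br-int and set the Neutron external_ids in one
# ovsdb transaction, mirroring the two commands logged above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

external_ids = {'iface-id': '521d2181-8f17-4f4d-a3a6-98de1e17b734',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:8e:09:91',
                'vm-uuid': '52862152-12c7-4236-89c3-67750ecbed7a'}
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap521d2181-8f', may_exist=True))
    txn.add(api.db_set('Interface', 'tap521d2181-8f',
                       ('external_ids', external_ids)))
```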
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.961 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:03 compute-0 NetworkManager[48912]: <info>  [1764726963.9627] manager: (tap521d2181-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.965 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.972 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.973 351492 INFO os_vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f')
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.047 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.048 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.048 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.049 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:8e:09:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.049 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Using config drive
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.100 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:56:04 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 01:56:03.924 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 03 01:56:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 03 01:56:04 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 01:56:03.952 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
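The two rsyslogd complaints refer to the oversized nova.virt.libvirt.vif DEBUG lines above (the ones carrying the full instance dump and base64 user_data): rsyslog's configured 8096-byte buffer truncates them even though the journal keeps them intact. If the full lines are wanted on the rsyslog side as well, the remedy pointed at by the linked error page is to raise the global limit; a sketch, where the 64k value is an assumption sized to these messages:

```
# /etc/rsyslog.conf -- $MaxMessageSize is a global directive and must
# appear before any input modules are loaded.
$MaxMessageSize 64k
```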
Dec 03 01:56:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2112121715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:56:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2149362390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.453 351492 DEBUG nova.network.neutron [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated VIF entry in instance network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.454 351492 DEBUG nova.network.neutron [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.476 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Creating config drive at /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.488 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlf3n0le execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.518 351492 DEBUG oslo_concurrency.lockutils [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.634 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlf3n0le" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.703 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.726 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config 52862152-12c7-4236-89c3-67750ecbed7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.010 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config 52862152-12c7-4236-89c3-67750ecbed7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.011 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deleting local config drive /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config because it was imported into RBD.
Dec 03 01:56:05 compute-0 kernel: tap521d2181-8f: entered promiscuous mode
Dec 03 01:56:05 compute-0 NetworkManager[48912]: <info>  [1764726965.1244] manager: (tap521d2181-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec 03 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00035|binding|INFO|Claiming lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 for this chassis.
Dec 03 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00036|binding|INFO|521d2181-8f17-4f4d-a3a6-98de1e17b734: Claiming fa:16:3e:8e:09:91 192.168.0.178
Dec 03 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.129 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.142 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:09:91 192.168.0.178'], port_security=['fa:16:3e:8e:09:91 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '52862152-12c7-4236-89c3-67750ecbed7a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=521d2181-8f17-4f4d-a3a6-98de1e17b734) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.145 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 521d2181-8f17-4f4d-a3a6-98de1e17b734 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.148 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00037|binding|INFO|Setting lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 ovn-installed in OVS
Dec 03 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00038|binding|INFO|Setting lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 up in Southbound
Dec 03 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.171 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.175 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.183 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5d05f82f-e0d9-474d-bd0a-14eb588fd414]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:56:05 compute-0 systemd-udevd[417086]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 01:56:05 compute-0 systemd-machined[138558]: New machine qemu-2-instance-00000002.
Dec 03 01:56:05 compute-0 NetworkManager[48912]: <info>  [1764726965.2152] device (tap521d2181-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 01:56:05 compute-0 NetworkManager[48912]: <info>  [1764726965.2168] device (tap521d2181-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 01:56:05 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.235 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcb80a8-6923-4c2e-ab5e-11f6dcd7078c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.239 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e6042ccd-b61a-4190-ba77-3d74b94823b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.271 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0cd2e0-e7f3-43dc-a985-2da3630e13ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.297 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5355c38e-331d-4e4b-94c5-65f724bf0a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 36425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417096, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.322 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[69d426ef-7bd0-4378-a228-039bffee61c0]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417100, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417100, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.324 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.327 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.329 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.330 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.331 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.331 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.332 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 01:56:05 compute-0 ceph-mon[192821]: pgmap v1245: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.154 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726966.1533508, 52862152-12c7-4236-89c3-67750ecbed7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.155 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Started (Lifecycle Event)
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.183 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.193 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726966.1536407, 52862152-12c7-4236-89c3-67750ecbed7a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.193 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Paused (Lifecycle Event)
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.222 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.230 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 01:56:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.256 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 01:56:06 compute-0 sudo[417162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:06 compute-0 sudo[417162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:06 compute-0 sudo[417162]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.341 351492 DEBUG nova.compute.manager [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.341 351492 DEBUG oslo_concurrency.lockutils [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.342 351492 DEBUG oslo_concurrency.lockutils [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.342 351492 DEBUG oslo_concurrency.lockutils [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.342 351492 DEBUG nova.compute.manager [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Processing event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.343 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.351 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726966.3509345, 52862152-12c7-4236-89c3-67750ecbed7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.352 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Resumed (Lifecycle Event)
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.357 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.366 351492 INFO nova.virt.libvirt.driver [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance spawned successfully.
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.367 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.374 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.385 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 01:56:06 compute-0 sudo[417187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.410 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.411 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.412 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:56:06 compute-0 sudo[417187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.413 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.414 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.416 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 01:56:06 compute-0 sudo[417187]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.446 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.474 351492 INFO nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 8.64 seconds to spawn the instance on the hypervisor.
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.475 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 01:56:06 compute-0 sudo[417212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:06 compute-0 sudo[417212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:06 compute-0 sudo[417212]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.542 351492 INFO nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 9.88 seconds to build instance.
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.575 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.588 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:06 compute-0 sudo[417237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:56:06 compute-0 sudo[417237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:07 compute-0 sudo[417237]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:56:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aeef6703-2b60-4598-920d-609c8ef8eaed does not exist
Dec 03 01:56:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2ebdf4ef-d338-458c-9dbe-4e56ac8a51e3 does not exist
Dec 03 01:56:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2aa095f1-2ca6-45f7-b318-7d8a7cc33b59 does not exist
Dec 03 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: pgmap v1246: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Dec 03 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:56:07 compute-0 sudo[417292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:07 compute-0 sudo[417292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:07 compute-0 sudo[417292]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:07 compute-0 sudo[417317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:56:07 compute-0 sudo[417317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:07 compute-0 sudo[417317]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:07 compute-0 nova_compute[351485]: 2025-12-03 01:56:07.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:07 compute-0 nova_compute[351485]: 2025-12-03 01:56:07.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:56:07 compute-0 sudo[417342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:07 compute-0 sudo[417342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:07 compute-0 sudo[417342]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:07 compute-0 nova_compute[351485]: 2025-12-03 01:56:07.783 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:07 compute-0 sudo[417367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:56:07 compute-0 sudo[417367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Dec 03 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.404882497 +0000 UTC m=+0.086544985 container create 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.430 351492 DEBUG nova.compute.manager [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.431 351492 DEBUG oslo_concurrency.lockutils [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.431 351492 DEBUG oslo_concurrency.lockutils [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.432 351492 DEBUG oslo_concurrency.lockutils [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.432 351492 DEBUG nova.compute.manager [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] No waiting events found dispatching network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.433 351492 WARNING nova.compute.manager [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received unexpected event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 for instance with vm_state active and task_state None.
Dec 03 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.371941549 +0000 UTC m=+0.053604127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:56:08 compute-0 systemd[1]: Started libpod-conmon-175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d.scope.
Dec 03 01:56:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.534212119 +0000 UTC m=+0.215874647 container init 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.559377645 +0000 UTC m=+0.241040113 container start 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.564662116 +0000 UTC m=+0.246324604 container attach 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:56:08 compute-0 vibrant_benz[417446]: 167 167
Dec 03 01:56:08 compute-0 systemd[1]: libpod-175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d.scope: Deactivated successfully.
Dec 03 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.569475423 +0000 UTC m=+0.251137941 container died 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 01:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbd2a1c57b8847ef4fb04911e0baf7c4d8a0b52db48ad24c3319e442a6eed06a-merged.mount: Deactivated successfully.
Dec 03 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.640070583 +0000 UTC m=+0.321733071 container remove 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 01:56:08 compute-0 systemd[1]: libpod-conmon-175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d.scope: Deactivated successfully.
Dec 03 01:56:08 compute-0 podman[417469]: 2025-12-03 01:56:08.876772551 +0000 UTC m=+0.087506992 container create 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:56:08 compute-0 podman[417469]: 2025-12-03 01:56:08.838903593 +0000 UTC m=+0.049638094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.961 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:08 compute-0 systemd[1]: Started libpod-conmon-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope.
Dec 03 01:56:09 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:09 compute-0 podman[417469]: 2025-12-03 01:56:09.057066394 +0000 UTC m=+0.267800895 container init 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:56:09 compute-0 podman[417469]: 2025-12-03 01:56:09.070901378 +0000 UTC m=+0.281635789 container start 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:56:09 compute-0 podman[417469]: 2025-12-03 01:56:09.077008982 +0000 UTC m=+0.287743473 container attach 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:56:09 compute-0 ceph-mon[192821]: pgmap v1247: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Dec 03 01:56:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Dec 03 01:56:10 compute-0 goofy_taussig[417485]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:56:10 compute-0 goofy_taussig[417485]: --> relative data size: 1.0
Dec 03 01:56:10 compute-0 goofy_taussig[417485]: --> All data devices are unavailable
Dec 03 01:56:10 compute-0 systemd[1]: libpod-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope: Deactivated successfully.
Dec 03 01:56:10 compute-0 systemd[1]: libpod-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope: Consumed 1.160s CPU time.
Dec 03 01:56:10 compute-0 conmon[417485]: conmon 6bf8d50d0b9a40bb9bee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope/container/memory.events
Dec 03 01:56:10 compute-0 podman[417469]: 2025-12-03 01:56:10.306683858 +0000 UTC m=+1.517418319 container died 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:56:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327-merged.mount: Deactivated successfully.
Dec 03 01:56:10 compute-0 podman[417469]: 2025-12-03 01:56:10.411597785 +0000 UTC m=+1.622332196 container remove 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec 03 01:56:10 compute-0 systemd[1]: libpod-conmon-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope: Deactivated successfully.
Dec 03 01:56:10 compute-0 sudo[417367]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:10 compute-0 podman[417514]: 2025-12-03 01:56:10.467810516 +0000 UTC m=+0.102619483 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:56:10 compute-0 sudo[417546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:10 compute-0 sudo[417546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:10 compute-0 sudo[417546]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:10 compute-0 sudo[417571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:56:10 compute-0 sudo[417571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:10 compute-0 sudo[417571]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:10 compute-0 sudo[417596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:10 compute-0 sudo[417596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:10 compute-0 sudo[417596]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:10 compute-0 sudo[417621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:56:10 compute-0 sudo[417621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:11 compute-0 ceph-mon[192821]: pgmap v1248: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Dec 03 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.651867563 +0000 UTC m=+0.119597595 container create 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.598231066 +0000 UTC m=+0.065961158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:56:11 compute-0 systemd[1]: Started libpod-conmon-5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789.scope.
Dec 03 01:56:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.843405946 +0000 UTC m=+0.311136028 container init 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.860706858 +0000 UTC m=+0.328436880 container start 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.866499563 +0000 UTC m=+0.334229595 container attach 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:56:11 compute-0 goofy_kalam[417699]: 167 167
Dec 03 01:56:11 compute-0 systemd[1]: libpod-5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789.scope: Deactivated successfully.
Dec 03 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.874874291 +0000 UTC m=+0.342604313 container died 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:56:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-25e97860783b8e7b3cb13c4f4a325da6bc61b5582a227c7696b8e711dc9935b3-merged.mount: Deactivated successfully.
Dec 03 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.961870928 +0000 UTC m=+0.429600970 container remove 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:56:11 compute-0 systemd[1]: libpod-conmon-5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789.scope: Deactivated successfully.
Dec 03 01:56:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 112 KiB/s wr, 74 op/s
Dec 03 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.281795466 +0000 UTC m=+0.134001656 container create 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.22258954 +0000 UTC m=+0.074795750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:56:12 compute-0 systemd[1]: Started libpod-conmon-40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1.scope.
Dec 03 01:56:12 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.440761021 +0000 UTC m=+0.292967241 container init 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.455943824 +0000 UTC m=+0.308149974 container start 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec 03 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.465984359 +0000 UTC m=+0.318190559 container attach 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:56:12 compute-0 nova_compute[351485]: 2025-12-03 01:56:12.785 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:13 compute-0 musing_booth[417738]: {
Dec 03 01:56:13 compute-0 musing_booth[417738]:     "0": [
Dec 03 01:56:13 compute-0 musing_booth[417738]:         {
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "devices": [
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "/dev/loop3"
Dec 03 01:56:13 compute-0 musing_booth[417738]:             ],
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_name": "ceph_lv0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_size": "21470642176",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "name": "ceph_lv0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "tags": {
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cluster_name": "ceph",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.crush_device_class": "",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.encrypted": "0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osd_id": "0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.type": "block",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.vdo": "0"
Dec 03 01:56:13 compute-0 musing_booth[417738]:             },
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "type": "block",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "vg_name": "ceph_vg0"
Dec 03 01:56:13 compute-0 musing_booth[417738]:         }
Dec 03 01:56:13 compute-0 musing_booth[417738]:     ],
Dec 03 01:56:13 compute-0 musing_booth[417738]:     "1": [
Dec 03 01:56:13 compute-0 musing_booth[417738]:         {
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "devices": [
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "/dev/loop4"
Dec 03 01:56:13 compute-0 musing_booth[417738]:             ],
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_name": "ceph_lv1",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_size": "21470642176",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "name": "ceph_lv1",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "tags": {
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cluster_name": "ceph",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.crush_device_class": "",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.encrypted": "0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osd_id": "1",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.type": "block",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.vdo": "0"
Dec 03 01:56:13 compute-0 musing_booth[417738]:             },
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "type": "block",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "vg_name": "ceph_vg1"
Dec 03 01:56:13 compute-0 musing_booth[417738]:         }
Dec 03 01:56:13 compute-0 musing_booth[417738]:     ],
Dec 03 01:56:13 compute-0 musing_booth[417738]:     "2": [
Dec 03 01:56:13 compute-0 musing_booth[417738]:         {
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "devices": [
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "/dev/loop5"
Dec 03 01:56:13 compute-0 musing_booth[417738]:             ],
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_name": "ceph_lv2",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_size": "21470642176",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "name": "ceph_lv2",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "tags": {
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.cluster_name": "ceph",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.crush_device_class": "",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.encrypted": "0",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osd_id": "2",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.type": "block",
Dec 03 01:56:13 compute-0 musing_booth[417738]:                 "ceph.vdo": "0"
Dec 03 01:56:13 compute-0 musing_booth[417738]:             },
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "type": "block",
Dec 03 01:56:13 compute-0 musing_booth[417738]:             "vg_name": "ceph_vg2"
Dec 03 01:56:13 compute-0 musing_booth[417738]:         }
Dec 03 01:56:13 compute-0 musing_booth[417738]:     ]
Dec 03 01:56:13 compute-0 musing_booth[417738]: }
Dec 03 01:56:13 compute-0 systemd[1]: libpod-40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1.scope: Deactivated successfully.
Dec 03 01:56:13 compute-0 podman[417722]: 2025-12-03 01:56:13.321761312 +0000 UTC m=+1.173967472 container died 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 01:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720-merged.mount: Deactivated successfully.
Dec 03 01:56:13 compute-0 ceph-mon[192821]: pgmap v1249: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 112 KiB/s wr, 74 op/s
Dec 03 01:56:13 compute-0 podman[417722]: 2025-12-03 01:56:13.424888858 +0000 UTC m=+1.277095018 container remove 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:56:13 compute-0 systemd[1]: libpod-conmon-40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1.scope: Deactivated successfully.
Dec 03 01:56:13 compute-0 sudo[417621]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:13 compute-0 podman[417748]: 2025-12-03 01:56:13.502190959 +0000 UTC m=+0.135417496 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Dec 03 01:56:13 compute-0 sudo[417776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:13 compute-0 sudo[417776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:13 compute-0 sudo[417776]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:13 compute-0 sudo[417802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:56:13 compute-0 sudo[417802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:13 compute-0 sudo[417802]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:13 compute-0 sudo[417827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:13 compute-0 sudo[417827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:13 compute-0 sudo[417827]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:13 compute-0 sudo[417852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:56:13 compute-0 sudo[417852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:13 compute-0 nova_compute[351485]: 2025-12-03 01:56:13.966 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 63 op/s
Dec 03 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.575442821 +0000 UTC m=+0.093724759 container create 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.536512403 +0000 UTC m=+0.054794341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:56:14 compute-0 systemd[1]: Started libpod-conmon-212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d.scope.
Dec 03 01:56:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.751887505 +0000 UTC m=+0.270169453 container init 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.776363702 +0000 UTC m=+0.294645610 container start 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.785397249 +0000 UTC m=+0.303679267 container attach 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:56:14 compute-0 fervent_kirch[417932]: 167 167
Dec 03 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.793141959 +0000 UTC m=+0.311423907 container died 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 01:56:14 compute-0 systemd[1]: libpod-212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d.scope: Deactivated successfully.
Dec 03 01:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d1cbfb2a6c878fca2591e913a9eb563d4235cbfb5a38574ed1603d1b5d82430-merged.mount: Deactivated successfully.
Dec 03 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.856808242 +0000 UTC m=+0.375090140 container remove 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:56:14 compute-0 systemd[1]: libpod-conmon-212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d.scope: Deactivated successfully.
Dec 03 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.111367949 +0000 UTC m=+0.088873142 container create 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.079958614 +0000 UTC m=+0.057463877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:56:15 compute-0 systemd[1]: Started libpod-conmon-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope.
Dec 03 01:56:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.284151986 +0000 UTC m=+0.261657209 container init 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.301771808 +0000 UTC m=+0.279277001 container start 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.307584323 +0000 UTC m=+0.285089606 container attach 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:56:15 compute-0 ceph-mon[192821]: pgmap v1250: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 63 op/s
Dec 03 01:56:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 03 01:56:16 compute-0 reverent_euclid[417973]: {
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "osd_id": 2,
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "type": "bluestore"
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:     },
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "osd_id": 1,
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "type": "bluestore"
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:     },
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "osd_id": 0,
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:         "type": "bluestore"
Dec 03 01:56:16 compute-0 reverent_euclid[417973]:     }
Dec 03 01:56:16 compute-0 reverent_euclid[417973]: }
Dec 03 01:56:16 compute-0 systemd[1]: libpod-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope: Deactivated successfully.
Dec 03 01:56:16 compute-0 systemd[1]: libpod-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope: Consumed 1.205s CPU time.
Dec 03 01:56:16 compute-0 podman[418006]: 2025-12-03 01:56:16.637788651 +0000 UTC m=+0.048694647 container died 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a-merged.mount: Deactivated successfully.
Dec 03 01:56:16 compute-0 podman[418006]: 2025-12-03 01:56:16.786630289 +0000 UTC m=+0.197536245 container remove 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:56:16 compute-0 systemd[1]: libpod-conmon-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope: Deactivated successfully.
Dec 03 01:56:16 compute-0 sudo[417852]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:56:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:56:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:56:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:56:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4b617dac-bd3e-4cf2-acac-0194b62fc9a7 does not exist
Dec 03 01:56:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a0da2ffd-b30b-4cc4-8395-a8fb9c488253 does not exist
Dec 03 01:56:17 compute-0 sudo[418019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:56:17 compute-0 sudo[418019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:17 compute-0 sudo[418019]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:17 compute-0 sudo[418064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:56:17 compute-0 sudo[418064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:56:17 compute-0 sudo[418064]: pam_unix(sudo:session): session closed for user root
Dec 03 01:56:17 compute-0 podman[418044]: 2025-12-03 01:56:17.241742285 +0000 UTC m=+0.147243163 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc.)
Dec 03 01:56:17 compute-0 podman[418045]: 2025-12-03 01:56:17.242081665 +0000 UTC m=+0.145273717 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 01:56:17 compute-0 podman[418043]: 2025-12-03 01:56:17.280338534 +0000 UTC m=+0.185681307 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
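[Annotation] The three podman events above are periodic container health-check results: each line carries the container ID, image, name, the health_status verdict, the failing streak, and the full config_data the container was created from. A minimal sketch of pulling the interesting fields out of such a line; the regex and field order are assumptions read off the entries above, not a podman-defined schema:

    import re

    # Abbreviated health_status payload, shaped like the podman lines above.
    line = ("container health_status 945f216527938e724ddbf52beec3ea46 "
            "(image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, "
            "name=openstack_network_exporter, health_status=healthy, health_failing_streak=0)")

    # Extract container name, health verdict, and failing streak.
    m = re.search(r"name=([\w-]+), health_status=(\w+), health_failing_streak=(\d+)", line)
    if m:
        name, status, streak = m.group(1), m.group(2), int(m.group(3))
        print(f"{name}: {status} (failing streak {streak})")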
Dec 03 01:56:17 compute-0 nova_compute[351485]: 2025-12-03 01:56:17.789 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:17 compute-0 ceph-mon[192821]: pgmap v1251: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 03 01:56:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:56:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:56:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 50 op/s
Dec 03 01:56:18 compute-0 podman[418128]: 2025-12-03 01:56:18.875473444 +0000 UTC m=+0.132955066 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:56:18 compute-0 nova_compute[351485]: 2025-12-03 01:56:18.972 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:19 compute-0 ceph-mon[192821]: pgmap v1252: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 50 op/s
Dec 03 01:56:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 50 op/s
Dec 03 01:56:21 compute-0 sshd-session[418154]: Invalid user cc from 80.253.31.232 port 48518
Dec 03 01:56:21 compute-0 sshd-session[418154]: Received disconnect from 80.253.31.232 port 48518:11: Bye Bye [preauth]
Dec 03 01:56:21 compute-0 sshd-session[418154]: Disconnected from invalid user cc 80.253.31.232 port 48518 [preauth]
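[Annotation] The three sshd-session lines above are a single failed probe: an unknown user ("cc") from 80.253.31.232 was rejected before authentication. A hedged sketch for tallying such probes per source address when this journal is exported to a text file (the filename is assumed; the regex targets only the "Invalid user" form shown here):

    import re
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")
    hits = Counter()
    with open("journal.txt") as fh:  # assumed plain-text export of this journal
        for line in fh:
            m = pattern.search(line)
            if m:
                _user, ip, _port = m.groups()
                hits[ip] += 1
    for ip, n in hits.most_common(10):
        print(ip, n)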
Dec 03 01:56:21 compute-0 ceph-mon[192821]: pgmap v1253: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 50 op/s
Dec 03 01:56:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 502 KiB/s rd, 16 op/s
Dec 03 01:56:22 compute-0 nova_compute[351485]: 2025-12-03 01:56:22.792 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:23 compute-0 sshd-session[416505]: ssh_dispatch_run_fatal: Connection from 14.103.201.7 port 54708: Connection timed out [preauth]
Dec 03 01:56:23 compute-0 ceph-mon[192821]: pgmap v1254: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 502 KiB/s rd, 16 op/s
Dec 03 01:56:23 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 03 01:56:23 compute-0 nova_compute[351485]: 2025-12-03 01:56:23.976 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 11 op/s
Dec 03 01:56:24 compute-0 ceph-mon[192821]: pgmap v1255: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 11 op/s
Dec 03 01:56:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:27 compute-0 ceph-mon[192821]: pgmap v1256: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:27 compute-0 nova_compute[351485]: 2025-12-03 01:56:27.794 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:56:28
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta']
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
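[Annotation] This balancer pass ran in upmap mode with a max-misplaced ratio of 0.05 and prepared 0 of a possible 10 changes, i.e. the PG distribution across the listed pools needed no upmap adjustments. Roughly speaking, that ratio means the balancer will not push more than about 5% of PGs (around 16 of the 321 here) into a misplaced state at once; a quick check of that arithmetic:

    # max misplaced 0.05 of 321 PGs (figures taken from the log above)
    pgs, max_misplaced = 321, 0.05
    print(int(pgs * max_misplaced))  # -> 16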
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:56:28 compute-0 nova_compute[351485]: 2025-12-03 01:56:28.979 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:29 compute-0 ceph-mon[192821]: pgmap v1257: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:29 compute-0 podman[158098]: time="2025-12-03T01:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:56:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:56:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
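[Annotation] The two GET lines are podman's REST API access log: a Go HTTP client, most likely the podman_exporter container configured further down with CONTAINER_HOST=unix:///run/podman/podman.sock, is polling the libpod API for container lists and stats. A minimal stdlib sketch of issuing the same list call over the socket (socket path and API version copied from these lines; needs access to the podman socket):

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")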
Dec 03 01:56:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:31 compute-0 ceph-mon[192821]: pgmap v1258: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
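[Annotation] These exporter errors are expected on a compute node: openstack_network_exporter probes for daemon control sockets under the run directories it has mounted, and ovn-northd normally runs on the control plane rather than on compute hosts, so no ovn-northd socket exists here; the dpif-netdev calls fail similarly, likely because this host's OVS uses the kernel datapath, leaving no userspace PMD datapath to query. A hedged sketch of the kind of check involved (the glob patterns are assumptions based on the usual /run layout, not taken from the exporter's source):

    import glob

    # ovn-northd and ovsdb-server publish *.ctl control sockets in their run dirs.
    for pattern in ("/run/ovn/ovn-northd.*.ctl", "/run/openvswitch/ovsdb-server.*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found or "no control socket files found")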
Dec 03 01:56:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:32 compute-0 nova_compute[351485]: 2025-12-03 01:56:32.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:33 compute-0 ceph-mon[192821]: pgmap v1259: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:33 compute-0 nova_compute[351485]: 2025-12-03 01:56:33.981 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:34 compute-0 podman[418156]: 2025-12-03 01:56:34.864429231 +0000 UTC m=+0.112768521 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 03 01:56:34 compute-0 podman[418158]: 2025-12-03 01:56:34.872618434 +0000 UTC m=+0.116614551 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 01:56:34 compute-0 podman[418157]: 2025-12-03 01:56:34.884211624 +0000 UTC m=+0.133482781 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec 03 01:56:35 compute-0 ovn_controller[89134]: 2025-12-03T01:56:35Z|00039|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 03 01:56:35 compute-0 ceph-mon[192821]: pgmap v1260: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:37 compute-0 ceph-mon[192821]: pgmap v1261: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec 03 01:56:37 compute-0 nova_compute[351485]: 2025-12-03 01:56:37.799 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008203201308849384 of space, bias 1.0, pg target 0.24609603926548154 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
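[Annotation] Each pool's "pg target" above is reproducible from the logged numbers: it is the pool's share of raw capacity times its bias times a per-cluster PG budget, which here works out to 300 (plausibly mon_target_pg_per_osd=100 across 3 OSDs; that split is an assumption, though the 64411926528-byte total does match the 60 GiB cluster). The result is then quantized to a power of two. A worked check against three of the lines above:

    # pg_target = capacity_ratio * bias * 300, figures copied from the log
    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.0008203201308849384, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * 300)
    # -> 0.00215572..., 0.24609603..., 0.00061047..., matching the logged
    #    targets (to within float rounding) before quantization.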
Dec 03 01:56:38 compute-0 nova_compute[351485]: 2025-12-03 01:56:38.985 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:39 compute-0 ceph-mon[192821]: pgmap v1262: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:56:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:56:40 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 03 01:56:40 compute-0 podman[418214]: 2025-12-03 01:56:40.900464445 +0000 UTC m=+0.145351949 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 01:56:41 compute-0 ovn_controller[89134]: 2025-12-03T01:56:41Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:09:91 192.168.0.178
Dec 03 01:56:41 compute-0 ovn_controller[89134]: 2025-12-03T01:56:41Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:09:91 192.168.0.178
Dec 03 01:56:41 compute-0 ceph-mon[192821]: pgmap v1263: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:56:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 115 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 381 KiB/s wr, 13 op/s
Dec 03 01:56:42 compute-0 nova_compute[351485]: 2025-12-03 01:56:42.804 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:43 compute-0 ceph-mon[192821]: pgmap v1264: 321 pgs: 321 active+clean; 115 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 381 KiB/s wr, 13 op/s
Dec 03 01:56:43 compute-0 podman[418234]: 2025-12-03 01:56:43.88221072 +0000 UTC m=+0.125075882 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, config_id=edpm, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 01:56:43 compute-0 nova_compute[351485]: 2025-12-03 01:56:43.989 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 122 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 795 KiB/s wr, 28 op/s
Dec 03 01:56:45 compute-0 ceph-mon[192821]: pgmap v1265: 321 pgs: 321 active+clean; 122 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 795 KiB/s wr, 28 op/s
Dec 03 01:56:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 01:56:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:56:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/322596143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:56:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:56:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/322596143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:56:47 compute-0 ceph-mon[192821]: pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 01:56:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/322596143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:56:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/322596143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:56:47 compute-0 nova_compute[351485]: 2025-12-03 01:56:47.809 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:47 compute-0 podman[418254]: 2025-12-03 01:56:47.880511995 +0000 UTC m=+0.128234872 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Dec 03 01:56:47 compute-0 podman[418255]: 2025-12-03 01:56:47.881496933 +0000 UTC m=+0.125860044 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:56:47 compute-0 podman[418253]: 2025-12-03 01:56:47.929320164 +0000 UTC m=+0.182441355 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:56:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 01:56:48 compute-0 nova_compute[351485]: 2025-12-03 01:56:48.993 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:49 compute-0 ceph-mon[192821]: pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 01:56:49 compute-0 podman[418317]: 2025-12-03 01:56:49.848928592 +0000 UTC m=+0.104557078 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:56:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 01:56:51 compute-0 ceph-mon[192821]: pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 01:56:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 01:56:52 compute-0 nova_compute[351485]: 2025-12-03 01:56:52.815 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:53 compute-0 ceph-mon[192821]: pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 01:56:53 compute-0 nova_compute[351485]: 2025-12-03 01:56:53.998 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Dec 03 01:56:55 compute-0 ceph-mon[192821]: pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Dec 03 01:56:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 725 KiB/s wr, 29 op/s
Dec 03 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:56:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:56:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757362410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.105 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
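[Annotation] The resource audit shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (about 0.5 s here) to size the RBD-backed disk capacity. A minimal sketch of the same call and of reading the cluster totals from its JSON (the key names are the standard `ceph df` output fields):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])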
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.228 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.229 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.229 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.242 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.243 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.244 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:56:57 compute-0 ceph-mon[192821]: pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 725 KiB/s wr, 29 op/s
Dec 03 01:56:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/757362410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.818 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.853 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.856 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3767MB free_disk=59.92203140258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.857 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.859 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:56:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.140 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
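[Annotation] The final resource view is consistent with the two instances reported just above (each holding DISK_GB=2, MEMORY_MB=512, VCPU=1): used_disk is 2 x 2 GB, free_vcpus is 8 - 2, and used_ram of 1536 MB is the two guests plus the 512 MB host reservation that appears in the inventory a few lines below. The arithmetic, for the record:

    # Figures from the resource tracker lines above (reserved RAM 512 MB, see below)
    instances = [{"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1}] * 2
    used_ram   = 512 + sum(i["MEMORY_MB"] for i in instances)  # -> 1536
    used_disk  = sum(i["DISK_GB"] for i in instances)          # -> 4
    free_vcpus = 8 - sum(i["VCPU"] for i in instances)         # -> 6
    print(used_ram, used_disk, free_vcpus)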
Dec 03 01:56:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.343 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:56:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:56:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259349955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.899 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.914 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.937 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
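[Annotation] Placement turns this inventory into schedulable capacity as (total - reserved) * allocation_ratio (my reading of the standard placement formula, so treat as hedged): VCPU (8-0)*4.0 = 32, MEMORY_MB (7679-512)*1.0 = 7167, DISK_GB (59-1)*0.9 = 52.2. A quick evaluation of the inventory dict logged above:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2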
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.970 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.971 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.972 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.972 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.992 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 01:56:59 compute-0 nova_compute[351485]: 2025-12-03 01:56:59.003 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:56:59 compute-0 ceph-mon[192821]: pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:56:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2259349955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:56:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:56:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:59.622 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:56:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:59.623 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
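The acquiring/acquired/released triple above is oslo.concurrency's named-lock wrapper logging its waited/held durations at DEBUG. A minimal usage sketch of the same pattern (illustrative function name, not neutron's actual code):

    import time

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body runs while the named in-process lock is held; oslo emits
        # the "acquired ... waited" and "released ... held" DEBUG lines.
        time.sleep(0.001)

    check_child_processes()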
Dec 03 01:56:59 compute-0 podman[158098]: time="2025-12-03T01:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:56:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:56:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8633 "" "Go-http-client/1.1"
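The two GETs above hit the libpod REST API over podman's unix socket. A stdlib-only sketch of the same listing call; the socket path is assumed from the podman_exporter CONTAINER_HOST setting logged later in this section:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket (host name is a placeholder)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")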
Dec 03 01:57:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:57:00 compute-0 nova_compute[351485]: 2025-12-03 01:57:00.992 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:00 compute-0 nova_compute[351485]: 2025-12-03 01:57:00.993 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:57:00 compute-0 nova_compute[351485]: 2025-12-03 01:57:00.994 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
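All four exporter errors above reduce to "no control socket found". A quick sketch to confirm which control sockets actually exist, assuming the default ovs/ovn runtime directories:

    import glob

    # Paths are the conventional defaults, not taken from this host's config.
    for pat in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pat, glob.glob(pat) or "none found")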
Dec 03 01:57:01 compute-0 ceph-mon[192821]: pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.949 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.950 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.951 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.952 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:57:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Dec 03 01:57:02 compute-0 nova_compute[351485]: 2025-12-03 01:57:02.821 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:03 compute-0 ceph-mon[192821]: pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.007 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.130 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
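The instance_info_cache payload above is JSON. A sketch that walks a trimmed copy of it to list the fixed and floating IPs per port:

    import json

    # Trimmed copy of the network_info blob logged above.
    cached = (
        '[{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1",'
        ' "network": {"subnets": [{"ips": [{"address": "192.168.0.5",'
        ' "floating_ips": [{"address": "192.168.122.241"}]}]}]}}]'
    )
    for vif in json.loads(cached):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"],
                      [f["address"] for f in ip.get("floating_ips", [])])
    # -> d2a50b9b-... 192.168.0.5 ['192.168.122.241']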
Dec 03 01:57:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.360 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.361 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.363 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.363 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.364 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.364 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:05 compute-0 ceph-mon[192821]: pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:05 compute-0 podman[418387]: 2025-12-03 01:57:05.842893967 +0000 UTC m=+0.086823443 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 01:57:05 compute-0 podman[418386]: 2025-12-03 01:57:05.855675121 +0000 UTC m=+0.105536576 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 01:57:05 compute-0 podman[418388]: 2025-12-03 01:57:05.864689167 +0000 UTC m=+0.100507742 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
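The health_status=healthy events above come from podman's periodic healthcheck timers. The same check can be triggered on demand; a sketch assuming the podman CLI and the container name from the event:

    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
    ).returncode
    # Exit status 0 means the configured healthcheck command passed.
    print("healthy" if rc == 0 else "unhealthy, rc=%d" % rc)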
Dec 03 01:57:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:07 compute-0 ceph-mon[192821]: pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:07 compute-0 nova_compute[351485]: 2025-12-03 01:57:07.825 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:08 compute-0 nova_compute[351485]: 2025-12-03 01:57:08.942 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:09 compute-0 nova_compute[351485]: 2025-12-03 01:57:09.010 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:09 compute-0 nova_compute[351485]: 2025-12-03 01:57:09.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:09 compute-0 nova_compute[351485]: 2025-12-03 01:57:09.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:57:09 compute-0 ceph-mon[192821]: pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:11 compute-0 nova_compute[351485]: 2025-12-03 01:57:11.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:11 compute-0 ceph-mon[192821]: pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:11 compute-0 podman[418446]: 2025-12-03 01:57:11.874405921 +0000 UTC m=+0.129968001 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 03 01:57:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:12 compute-0 sshd-session[418444]: Invalid user redmine from 103.146.202.174 port 36244
Dec 03 01:57:12 compute-0 nova_compute[351485]: 2025-12-03 01:57:12.828 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:13 compute-0 sshd-session[418444]: Received disconnect from 103.146.202.174 port 36244:11: Bye Bye [preauth]
Dec 03 01:57:13 compute-0 sshd-session[418444]: Disconnected from invalid user redmine 103.146.202.174 port 36244 [preauth]
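The three sshd lines above are a typical failed brute-force probe (invalid user, then preauth disconnect). A fail2ban-style sketch that tallies such probes per source address from an exported journal; the file name is hypothetical (e.g. `journalctl > journal-export.txt`):

    import re
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\d{1,3}(?:\.\d{1,3}){3})")
    hits = Counter()
    with open("journal-export.txt") as fh:  # hypothetical journal dump
        for line in fh:
            m = pattern.search(line)
            if m:
                hits[m.group(2)] += 1
    print(hits.most_common(5))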
Dec 03 01:57:13 compute-0 nova_compute[351485]: 2025-12-03 01:57:13.597 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:13 compute-0 nova_compute[351485]: 2025-12-03 01:57:13.598 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 01:57:13 compute-0 ceph-mon[192821]: pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:14 compute-0 nova_compute[351485]: 2025-12-03 01:57:14.014 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 01:57:14 compute-0 podman[418466]: 2025-12-03 01:57:14.879372066 +0000 UTC m=+0.151396711 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, name=ubi9, release-0.7.12=, vcs-type=git, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:57:15 compute-0 ceph-mon[192821]: pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 01:57:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:57:17 compute-0 sudo[418487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:17 compute-0 sudo[418487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:17 compute-0 sudo[418487]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:17 compute-0 sudo[418512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:57:17 compute-0 sudo[418512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:17 compute-0 sudo[418512]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:17 compute-0 sudo[418537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:17 compute-0 sudo[418537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:17 compute-0 sudo[418537]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:17 compute-0 ceph-mon[192821]: pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:57:17 compute-0 sudo[418562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:57:17 compute-0 sudo[418562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:17 compute-0 nova_compute[351485]: 2025-12-03 01:57:17.831 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:57:18 compute-0 sudo[418562]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:57:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d1c20a02-8447-4aa7-8f93-24cb19a11764 does not exist
Dec 03 01:57:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 00796445-046d-4a60-bafa-0db5c34e18ec does not exist
Dec 03 01:57:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 752a43dc-9242-4610-87fd-3c67a2981d4e does not exist
Dec 03 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:57:18 compute-0 sudo[418616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:18 compute-0 sudo[418616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:18 compute-0 sudo[418616]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:57:18 compute-0 sudo[418659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:57:18 compute-0 sudo[418659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:18 compute-0 sudo[418659]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:18 compute-0 podman[418642]: 2025-12-03 01:57:18.8098643 +0000 UTC m=+0.108144820 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 01:57:18 compute-0 podman[418641]: 2025-12-03 01:57:18.810282802 +0000 UTC m=+0.103368124 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., vcs-type=git)
Dec 03 01:57:18 compute-0 podman[418640]: 2025-12-03 01:57:18.851128085 +0000 UTC m=+0.155217200 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:57:18 compute-0 sudo[418726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:18 compute-0 sudo[418726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:18 compute-0 sudo[418726]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:18 compute-0 sudo[418757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:57:18 compute-0 sudo[418757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
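The cephadm call two lines above drives `ceph-volume lvm batch` across three pre-created LVs. A string-parsing sketch (not cephadm code) for pulling the FSID and target devices out of such a command line:

    import shlex

    # Abbreviated copy of the logged sudo COMMAND; "..." marks truncation.
    cmd = (
        "/bin/python3 /var/lib/ceph/.../cephadm.31206ab2... --timeout 895 "
        "ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c "
        "--config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 "
        "/dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd"
    )
    argv = shlex.split(cmd)
    fsid = argv[argv.index("--fsid") + 1]
    lvs = [a for a in argv if a.startswith("/dev/")]
    print(fsid, lvs)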
Dec 03 01:57:19 compute-0 nova_compute[351485]: 2025-12-03 01:57:19.017 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.504 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling run can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.504 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.510 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 52862152-12c7-4236-89c3-67750ecbed7a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.512 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/52862152-12c7-4236-89c3-67750ecbed7a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
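The novaclient REQ logged at 01:57:19.512 above is an ordinary authenticated GET. Reconstructed with requests as a sketch only: the token is redacted in the log and the endpoint is cluster-internal, so this is not replayable as written:

    import requests

    resp = requests.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "52862152-12c7-4236-89c3-67750ecbed7a",
        headers={
            "Accept": "application/json",
            "User-Agent": "python-novaclient",
            "X-Auth-Token": "<redacted in the log>",
            "X-OpenStack-Nova-API-Version": "2.1",
        },
        timeout=10,
    )
    print(resp.status_code)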
Dec 03 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.541363215 +0000 UTC m=+0.089861760 container create 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.500853501 +0000 UTC m=+0.049352086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:57:19 compute-0 systemd[1]: Started libpod-conmon-039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1.scope.
Dec 03 01:57:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.70001226 +0000 UTC m=+0.248510815 container init 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.717497068 +0000 UTC m=+0.265995583 container start 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.722682515 +0000 UTC m=+0.271181070 container attach 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 03 01:57:19 compute-0 peaceful_shtern[418835]: 167 167
Dec 03 01:57:19 compute-0 systemd[1]: libpod-039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1.scope: Deactivated successfully.
Dec 03 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.732970098 +0000 UTC m=+0.281468613 container died 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:57:19 compute-0 ceph-mon[192821]: pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2769030cee8bd57b48611438f886015a6956f0c98beb52114e5037c92899095f-merged.mount: Deactivated successfully.
Dec 03 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.801704425 +0000 UTC m=+0.350202950 container remove 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:57:19 compute-0 systemd[1]: libpod-conmon-039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1.scope: Deactivated successfully.
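[Editor's note] The create, init, start, attach, died, remove sequence above, together with the bare "167 167" the container prints, is a one-shot podman invocation against the Ceph image that exits as soon as it has produced its result; 167 is the uid and gid the Ceph images use for the ceph user. A sketch reproducing such a probe; the stat command and path are assumptions, since the log records only the container lifecycle and its output:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def probe_ceph_uid_gid(image=IMAGE):
        # One-shot container: created, started, attached, then removed,
        # the same event sequence podman journals above.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat", image,
             "-c", "%u %g", "/var/lib/ceph"],  # assumed probe command
            capture_output=True, text=True, check=True,
        ).stdout.split()
        return int(out[0]), int(out[1])

    # Expected to return (167, 167), matching the container output above.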
Dec 03 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.064876307 +0000 UTC m=+0.106116892 container create 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.034495242 +0000 UTC m=+0.075735927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:57:20 compute-0 systemd[1]: Started libpod-conmon-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope.
Dec 03 01:57:20 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
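[Editor's note] The "(0x7fffffff)" in the xfs messages above is the 32-bit signed time_t maximum; inode timestamps capped there roll over in January 2038. A quick check:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds after the Unix epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00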
Dec 03 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.225422668 +0000 UTC m=+0.266663293 container init 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.241032672 +0000 UTC m=+0.282273287 container start 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.251753487 +0000 UTC m=+0.292994092 container attach 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.252 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 03 Dec 2025 01:57:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5bfa588f-504a-43fd-9f33-4984925b3cd4 x-openstack-request-id: req-5bfa588f-504a-43fd-9f33-4984925b3cd4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.253 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "52862152-12c7-4236-89c3-67750ecbed7a", "name": "vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {"metering.server_group": "0f6ab671-23df-4a6d-9613-02f9fb5fb294"}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T01:55:54Z", "updated": "2025-12-03T01:56:06Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.178", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8e:09:91"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8e:09:91"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/52862152-12c7-4236-89c3-67750ecbed7a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/52862152-12c7-4236-89c3-67750ecbed7a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T01:56:06.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.253 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/52862152-12c7-4236-89c3-67750ecbed7a used request id req-5bfa588f-504a-43fd-9f33-4984925b3cd4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.254 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 01:57:20 compute-0 podman[418872]: 2025-12-03 01:57:20.255077842 +0000 UTC m=+0.124317210 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
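[Editor's note] The node_exporter health_status record above embeds the container's config_data, whose healthcheck entry runs "/openstack/healthcheck node_exporter" inside the container. That check can be reproduced by hand; everything beyond the command string taken from config_data is an assumption:

    import subprocess

    def node_exporter_healthy():
        # Run the container's own healthcheck command via podman exec.
        result = subprocess.run(
            ["podman", "exec", "node_exporter",
             "/openstack/healthcheck", "node_exporter"],
            capture_output=True, text=True,
        )
        return result.returncode == 0  # healthy on exit status 0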
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
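[Editor's note] Comparing the RESP BODY at 01:57:20.253 with the "instance data" records above shows the discovery step flattening nova's server JSON into the dict the pollsters consume (status lower-cased, the flavor enriched from a separate lookup, vm_state translated to "running"). A sketch of the verbatim part of that mapping, inferred only from these two records:

    def to_instance_data(server):
        # Fields copied straight through from the nova "server" document;
        # flavor details and the vm_state translation happen elsewhere.
        return {
            "id": server["id"],
            "name": server["name"],
            "tenant_id": server["tenant_id"],
            "user_id": server["user_id"],
            "hostId": server["hostId"],
            "status": server["status"].lower(),
            "metadata": server["metadata"],
            "image": {"id": server["image"]["id"]},
            "OS-EXT-SRV-ATTR:instance_name": server["OS-EXT-SRV-ATTR:instance_name"],
            "OS-EXT-SRV-ATTR:host": server["OS-EXT-SRV-ATTR:host"],
        }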
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T01:57:20.257851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.291 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.321 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
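[Editor's note] The memory.usage volumes above (49.078125 and 49.0390625 MiB) come from the libvirt inspector. A minimal sketch of how such a figure can be derived from libvirt balloon statistics, assuming the usual available-minus-unused formula; the log itself does not show the arithmetic:

    import libvirt  # libvirt-python

    def memory_usage_mib(domain_name):
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.lookupByName(domain_name)
            stats = dom.memoryStats()  # values are in KiB
            if "available" in stats and "unused" in stats:
                used_kib = stats["available"] - stats["unused"]
            else:
                used_kib = stats["rss"]  # fallback without balloon stats
            return used_kib / 1024.0  # KiB -> MiB
        finally:
            conn.close()

    # e.g. memory_usage_mib("instance-00000002") -> 49.078125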
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T01:57:20.322808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.327 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 52862152-12c7-4236-89c3-67750ecbed7a / tap521d2181-8f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.327 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.330 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
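[Editor's note] The recurring "Checking if we need coordination ... hashrings [None]" pairs record the partitioning decision: when polling is spread across several agents, a hash ring decides which agent owns each resource, and with a coordination group of [None], as here, every agent polls everything. A plain stand-in for that ownership test, not ceilometer's actual tooz-based implementation:

    import hashlib

    AGENTS = ["compute-0", "compute-1"]  # illustrative ring members

    def owning_agent(resource_id):
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return AGENTS[digest % len(AGENTS)]

    # With coordination enabled, an agent would poll a resource only if
    # owning_agent(resource_id) named it; here that test is skipped.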
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 1788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T01:57:20.332194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T01:57:20.333670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.335 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.335 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T01:57:20.335050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T01:57:20.337100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T01:57:20.338644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.371 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.371 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.372 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.398 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.398 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>]
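[Editor's note] The ERROR above is not an instance failure but ceilometer's permanent-blacklist mechanism: the libvirt inspector provides no data for rate meters, the pollster raises PollsterPermanentError, and the manager stops polling those resources for that source from then on. A sketch of the pattern; only the exception name is taken from the log, the rest is illustrative:

    class PollsterPermanentError(Exception):
        # Raised for resources a pollster can never serve.
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll_once(pollster, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(todo))
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling ... anymore!"
            blacklist.extend(err.fail_res_list)
            return []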
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T01:57:20.400056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T01:57:20.401331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.467 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.468 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.468 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.558 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.559 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.560 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.562 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.563 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T01:57:20.562294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T01:57:20.565448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.566 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.566 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.567 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.568 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.568 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.571 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.571 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.572 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.572 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.572 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T01:57:20.570407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.575 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T01:57:20.575023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.576 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
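The power.state samples report each instance's libvirt domain state as an integer; both instances return 1, which is libvirt's VIR_DOMAIN_RUNNING. A small decoding helper: the value-to-name table below is libvirt's virDomainState enum, and that ceilometer forwards the value unchanged is an inference from these lines, not something they state:

    # libvirt virDomainState values; power.state volume 1 == running.
    LIBVIRT_POWER_STATES = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }

    def power_state_name(volume):
        return LIBVIRT_POWER_STATES.get(volume, "unknown")

    assert power_state_name(1) == "running"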
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T01:57:20.577714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.578 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.579 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.579 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.579 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.580 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41713664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.583 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.584 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T01:57:20.582661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.585 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.585 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.586 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6674812043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.589 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.590 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.591 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T01:57:20.588465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.592 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.592 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.595 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.595 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.596 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.596 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.597 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.597 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T01:57:20.594853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T01:57:20.600172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 36190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 34570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
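The cpu meter is cumulative guest CPU time in nanoseconds, so the raw volumes above (36190000000 ns is roughly 36.2 s) only become a utilisation figure once two consecutive samples are differenced over the polling interval. A hedged sketch of that rate calculation; the 300 s interval and single vCPU are assumptions for the example, not values from these lines:

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus=1):
        """Percent CPU utilisation between two cumulative cpu samples."""
        return (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    # 3e9 ns of extra CPU time over a 300 s interval on one vCPU is 1%:
    assert round(cpu_util_percent(33190000000, 36190000000, 300), 6) == 1.0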
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T01:57:20.601979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T01:57:20.603712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T01:57:20.605635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.606 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 4686 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.606 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T01:57:20.608127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.609 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.609 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.610 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.610 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
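The disk.device.usage and disk.device.allocation volumes are byte counts per device: 1073741824 B is exactly 1 GiB for each instance's two 1 GiB volumes, while the much smaller third device (583680 and 485376 B) is plausibly a config drive, though these lines do not say so. A formatting helper for eyeballing such values:

    def human_bytes(n):
        """Render a byte count like the 1073741824 B samples as '1.00 GiB'."""
        for unit in ("B", "KiB", "MiB", "GiB"):
            if n < 1024:
                return f"{n:.2f} {unit}"
            n /= 1024
        return f"{n:.2f} TiB"

    assert human_bytes(1073741824) == "1.00 GiB"
    assert human_bytes(583680) == "570.00 KiB"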
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T01:57:20.612434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T01:57:20.614362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T01:57:20.616165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.618 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.618 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.618 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>]
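This ERROR is ceilometer's permanent-failure blacklisting rather than a crash: the LibvirtInspector has no data source for an instantaneous outgoing-bytes rate, so the pollster raises PollsterPermanentError and the manager stops polling those resources on this source, which is why the message appears once instead of every cycle. A simplified sketch of the pattern; the real exception lives in ceilometer.polling.plugin_base, and the manager loop below is a stand-in, not ceilometer's code:

    class PollsterPermanentError(Exception):
        """Raised when a resource can never be polled by this pollster."""

        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def run_pollster(pollster, resources, blacklist):
        to_poll = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(to_poll))
        except PollsterPermanentError as err:
            # Never retry these resources on this source again.
            blacklist.extend(err.fail_res_list)
            return []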
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T01:57:20.618062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.621 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.621 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
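Every _stats_to_sample DEBUG line above has the same shape, <instance-uuid>/<meter> volume: <value>, with one line per device for the per-device meters, so a short script can turn a captured journal into per-instance numbers for triage. A sketch matching the line shape seen here; this format is a debug artifact of this build, not a stable interface:

    import re
    from collections import defaultdict

    SAMPLE_RE = re.compile(
        r"([0-9a-f-]{36})/([\w.]+) volume: (\d+) _stats_to_sample"
    )

    def collect_samples(lines):
        """Map (instance_uuid, meter) -> list of sampled values."""
        samples = defaultdict(list)
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                uuid, meter, value = m.groups()
                samples[(uuid, meter)].append(int(value))
        return samples

    # For this cycle, collect_samples(journal_lines) would yield e.g.
    # samples[("52862152-12c7-4236-89c3-67750ecbed7a",
    #          "disk.device.read.requests")] == [844, 173, 124]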
Dec 03 01:57:21 compute-0 bold_ramanujan[418881]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:57:21 compute-0 bold_ramanujan[418881]: --> relative data size: 1.0
Dec 03 01:57:21 compute-0 bold_ramanujan[418881]: --> All data devices are unavailable
Dec 03 01:57:21 compute-0 systemd[1]: libpod-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope: Deactivated successfully.
Dec 03 01:57:21 compute-0 systemd[1]: libpod-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope: Consumed 1.210s CPU time.
Dec 03 01:57:21 compute-0 podman[418858]: 2025-12-03 01:57:21.547350401 +0000 UTC m=+1.588591016 container died 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3-merged.mount: Deactivated successfully.
Dec 03 01:57:21 compute-0 podman[418858]: 2025-12-03 01:57:21.633844423 +0000 UTC m=+1.675085008 container remove 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 01:57:21 compute-0 systemd[1]: libpod-conmon-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope: Deactivated successfully.
Dec 03 01:57:21 compute-0 sudo[418757]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:21 compute-0 ceph-mon[192821]: pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.4 KiB/s wr, 0 op/s
Dec 03 01:57:21 compute-0 sudo[418935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:21 compute-0 sudo[418935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:21 compute-0 sudo[418935]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:21 compute-0 sudo[418960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:57:22 compute-0 sudo[418960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:22 compute-0 sudo[418960]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:22 compute-0 sudo[418985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:22 compute-0 sudo[418985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:22 compute-0 sudo[418985]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:22 compute-0 sudo[419010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
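The sudo sequence above is cephadm's remote-execution pattern: probe sudo with /bin/true, locate python3, then run the staged cephadm script under python3 so that ceph-volume executes inside a throwaway container (the short-lived podman containers bold_ramanujan and musing_ritchie in this log). The same inventory call could be replayed from Python; the image digest and fsid below are copied verbatim from the command line above, and running it needs the same host and root privileges, so treat this as an illustration only:

    import json
    import subprocess

    cmd = [
        "cephadm",
        "--image",
        "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume",
        "--fsid", "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
        "--", "lvm", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    osds = json.loads(out)  # {osd_id: [LV records], ...} from ceph-volume
    print(sorted(osds))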
Dec 03 01:57:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 0 op/s
Dec 03 01:57:22 compute-0 sudo[419010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:22 compute-0 nova_compute[351485]: 2025-12-03 01:57:22.834 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:22 compute-0 podman[419073]: 2025-12-03 01:57:22.911322521 +0000 UTC m=+0.073357619 container create 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 01:57:22 compute-0 podman[419073]: 2025-12-03 01:57:22.879101564 +0000 UTC m=+0.041136712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:57:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:22 compute-0 systemd[1]: Started libpod-conmon-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope.
Dec 03 01:57:23 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.078064318 +0000 UTC m=+0.240099446 container init 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.096102492 +0000 UTC m=+0.258137590 container start 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Dec 03 01:57:23 compute-0 musing_ritchie[419089]: 167 167
Dec 03 01:57:23 compute-0 systemd[1]: libpod-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope: Deactivated successfully.
Dec 03 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.108067702 +0000 UTC m=+0.270102860 container attach 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:57:23 compute-0 conmon[419089]: conmon 57948232751d62aa8fde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope/container/memory.events
Dec 03 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.109895324 +0000 UTC m=+0.271930392 container died 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca127befe674676f895259f419dcb1679fa04c130d53d4c183d092f145e679d-merged.mount: Deactivated successfully.
Dec 03 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.176476689 +0000 UTC m=+0.338511757 container remove 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 01:57:23 compute-0 systemd[1]: libpod-conmon-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope: Deactivated successfully.
Dec 03 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.451035545 +0000 UTC m=+0.096250691 container create fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.402817872 +0000 UTC m=+0.048033058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:57:23 compute-0 systemd[1]: Started libpod-conmon-fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d.scope.
Dec 03 01:57:23 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.625955635 +0000 UTC m=+0.271170841 container init fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.6458165 +0000 UTC m=+0.291031646 container start fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.654313552 +0000 UTC m=+0.299528758 container attach fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:57:23 compute-0 ceph-mon[192821]: pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 0 op/s
Dec 03 01:57:24 compute-0 nova_compute[351485]: 2025-12-03 01:57:24.021 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 0 op/s
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]: {
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:     "0": [
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:         {
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "devices": [
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "/dev/loop3"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             ],
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_name": "ceph_lv0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_size": "21470642176",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "name": "ceph_lv0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "tags": {
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cluster_name": "ceph",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.crush_device_class": "",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.encrypted": "0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osd_id": "0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.type": "block",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.vdo": "0"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             },
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "type": "block",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "vg_name": "ceph_vg0"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:         }
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:     ],
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:     "1": [
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:         {
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "devices": [
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "/dev/loop4"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             ],
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_name": "ceph_lv1",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_size": "21470642176",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "name": "ceph_lv1",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "tags": {
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cluster_name": "ceph",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.crush_device_class": "",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.encrypted": "0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osd_id": "1",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.type": "block",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.vdo": "0"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             },
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "type": "block",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "vg_name": "ceph_vg1"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:         }
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:     ],
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:     "2": [
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:         {
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "devices": [
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "/dev/loop5"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             ],
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_name": "ceph_lv2",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_size": "21470642176",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "name": "ceph_lv2",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "tags": {
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.cluster_name": "ceph",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.crush_device_class": "",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.encrypted": "0",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osd_id": "2",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.type": "block",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:                 "ceph.vdo": "0"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             },
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "type": "block",
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:             "vg_name": "ceph_vg2"
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:         }
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]:     ]
Dec 03 01:57:24 compute-0 blissful_nightingale[419129]: }
Dec 03 01:57:24 compute-0 systemd[1]: libpod-fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d.scope: Deactivated successfully.
Dec 03 01:57:24 compute-0 podman[419113]: 2025-12-03 01:57:24.492424491 +0000 UTC m=+1.137639637 container died fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8-merged.mount: Deactivated successfully.
Dec 03 01:57:24 compute-0 podman[419113]: 2025-12-03 01:57:24.573281974 +0000 UTC m=+1.218497100 container remove fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:57:24 compute-0 systemd[1]: libpod-conmon-fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d.scope: Deactivated successfully.
Dec 03 01:57:24 compute-0 sudo[419010]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:24 compute-0 sudo[419149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:24 compute-0 sudo[419149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:24 compute-0 sudo[419149]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:24 compute-0 sudo[419174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:57:24 compute-0 sudo[419174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:24 compute-0 sudo[419174]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:24 compute-0 sudo[419199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:24 compute-0 sudo[419199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:24 compute-0 sudo[419199]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:25 compute-0 sudo[419224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:57:25 compute-0 sudo[419224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.703863259 +0000 UTC m=+0.090844357 container create 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.662830221 +0000 UTC m=+0.049811349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:57:25 compute-0 ceph-mon[192821]: pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 0 op/s
Dec 03 01:57:25 compute-0 systemd[1]: Started libpod-conmon-0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b.scope.
Dec 03 01:57:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.857506433 +0000 UTC m=+0.244487571 container init 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.872082798 +0000 UTC m=+0.259063916 container start 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.877853422 +0000 UTC m=+0.264834550 container attach 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 01:57:25 compute-0 determined_mcclintock[419301]: 167 167
Dec 03 01:57:25 compute-0 systemd[1]: libpod-0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b.scope: Deactivated successfully.
Dec 03 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.885266173 +0000 UTC m=+0.272247291 container died 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0592ad9285dda35fa8c758e610542d6b3f18746678fe81edb59d8b6d53247249-merged.mount: Deactivated successfully.
Dec 03 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.978845087 +0000 UTC m=+0.365826205 container remove 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 01:57:25 compute-0 systemd[1]: libpod-conmon-0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b.scope: Deactivated successfully.
Dec 03 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.294614167 +0000 UTC m=+0.103357664 container create fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:57:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.2 KiB/s wr, 1 op/s
Dec 03 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.244794849 +0000 UTC m=+0.053538396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:57:26 compute-0 systemd[1]: Started libpod-conmon-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope.
Dec 03 01:57:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.502920337 +0000 UTC m=+0.311663844 container init fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.520108646 +0000 UTC m=+0.328852173 container start fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.526419736 +0000 UTC m=+0.335163223 container attach fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 01:57:27 compute-0 hardcore_brown[419342]: {
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "osd_id": 2,
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "type": "bluestore"
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:     },
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "osd_id": 1,
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "type": "bluestore"
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:     },
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "osd_id": 0,
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:         "type": "bluestore"
Dec 03 01:57:27 compute-0 hardcore_brown[419342]:     }
Dec 03 01:57:27 compute-0 hardcore_brown[419342]: }
Dec 03 01:57:27 compute-0 systemd[1]: libpod-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope: Deactivated successfully.
Dec 03 01:57:27 compute-0 systemd[1]: libpod-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope: Consumed 1.210s CPU time.
Dec 03 01:57:27 compute-0 podman[419326]: 2025-12-03 01:57:27.732242093 +0000 UTC m=+1.540985610 container died fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 01:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b-merged.mount: Deactivated successfully.
Dec 03 01:57:27 compute-0 ceph-mon[192821]: pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.2 KiB/s wr, 1 op/s
Dec 03 01:57:27 compute-0 nova_compute[351485]: 2025-12-03 01:57:27.836 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:27 compute-0 podman[419326]: 2025-12-03 01:57:27.850903331 +0000 UTC m=+1.659646858 container remove fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:57:27 compute-0 systemd[1]: libpod-conmon-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope: Deactivated successfully.
Dec 03 01:57:27 compute-0 sudo[419224]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:57:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:57:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:57:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:57:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 381693e6-6dd5-441a-9813-ea855e24610a does not exist
Dec 03 01:57:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b1161126-419c-4642-b684-8bd36b03048e does not exist
Dec 03 01:57:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:28 compute-0 sudo[419388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:57:28 compute-0 sudo[419388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:28 compute-0 sudo[419388]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:28 compute-0 sudo[419413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:57:28 compute-0 sudo[419413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:57:28 compute-0 sudo[419413]: pam_unix(sudo:session): session closed for user root
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:57:28
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.control']
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:57:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:57:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:57:29 compute-0 nova_compute[351485]: 2025-12-03 01:57:29.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:29 compute-0 podman[158098]: time="2025-12-03T01:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:57:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:57:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8624 "" "Go-http-client/1.1"
Dec 03 01:57:29 compute-0 ceph-mon[192821]: pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 03 01:57:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 03 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:57:31 compute-0 ceph-mon[192821]: pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 03 01:57:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 7.1 KiB/s wr, 1 op/s
Dec 03 01:57:32 compute-0 nova_compute[351485]: 2025-12-03 01:57:32.839 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:33 compute-0 ceph-mon[192821]: pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 7.1 KiB/s wr, 1 op/s
Dec 03 01:57:34 compute-0 nova_compute[351485]: 2025-12-03 01:57:34.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 01:57:35 compute-0 ceph-mon[192821]: pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 01:57:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 01:57:36 compute-0 podman[419438]: 2025-12-03 01:57:36.854390974 +0000 UTC m=+0.117764813 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 01:57:36 compute-0 podman[419439]: 2025-12-03 01:57:36.860155098 +0000 UTC m=+0.109348703 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 03 01:57:36 compute-0 podman[419443]: 2025-12-03 01:57:36.86022354 +0000 UTC m=+0.097494076 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:57:37 compute-0 ceph-mon[192821]: pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 01:57:37 compute-0 nova_compute[351485]: 2025-12-03 01:57:37.843 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:57:39 compute-0 nova_compute[351485]: 2025-12-03 01:57:39.034 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:39 compute-0 ceph-mon[192821]: pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:57:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.0 total, 600.0 interval
                                            Cumulative writes: 5936 writes, 26K keys, 5936 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                            Cumulative WAL: 5936 writes, 5936 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1341 writes, 6083 keys, 1341 commit groups, 1.0 writes per commit group, ingest: 8.80 MB, 0.01 MB/s
                                            Interval WAL: 1341 writes, 1341 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     94.4      0.32              0.14        15    0.021       0      0       0.0       0.0
                                              L6      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    130.3    105.7      0.94              0.46        14    0.067     63K   7810       0.0       0.0
                                             Sum      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     97.1    102.8      1.26              0.60        29    0.044     63K   7810       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    110.5    111.6      0.35              0.17         8    0.043     20K   2552       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    130.3    105.7      0.94              0.46        14    0.067     63K   7810       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.1      0.32              0.14        14    0.023       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 2400.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.030, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.3 seconds
                                            Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 13.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000199 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(855,12.57 MB,4.08052%) FilterBlock(30,183.67 KB,0.0582361%) IndexBlock(30,338.95 KB,0.10747%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Dec 03 01:57:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:41 compute-0 ceph-mon[192821]: pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:42 compute-0 nova_compute[351485]: 2025-12-03 01:57:42.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:42 compute-0 podman[419496]: 2025-12-03 01:57:42.883856482 +0000 UTC m=+0.131708991 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:57:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:43 compute-0 ceph-mon[192821]: pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:44 compute-0 nova_compute[351485]: 2025-12-03 01:57:44.039 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:45 compute-0 ceph-mon[192821]: pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:45 compute-0 podman[419517]: 2025-12-03 01:57:45.868413557 +0000 UTC m=+0.125143504 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public)
Dec 03 01:57:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:57:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41394511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:57:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:57:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41394511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:57:47 compute-0 ceph-mon[192821]: pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/41394511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:57:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/41394511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:57:47 compute-0 nova_compute[351485]: 2025-12-03 01:57:47.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:49 compute-0 nova_compute[351485]: 2025-12-03 01:57:49.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:49 compute-0 ceph-mon[192821]: pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:49 compute-0 podman[419537]: 2025-12-03 01:57:49.894948455 +0000 UTC m=+0.143604069 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec 03 01:57:49 compute-0 podman[419538]: 2025-12-03 01:57:49.896470198 +0000 UTC m=+0.138286587 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:57:49 compute-0 podman[419536]: 2025-12-03 01:57:49.943265821 +0000 UTC m=+0.194943281 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:57:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:50 compute-0 podman[419595]: 2025-12-03 01:57:50.883850917 +0000 UTC m=+0.142451846 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:57:51 compute-0 ceph-mon[192821]: pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:52 compute-0 nova_compute[351485]: 2025-12-03 01:57:52.852 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:53 compute-0 ceph-mon[192821]: pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:54 compute-0 nova_compute[351485]: 2025-12-03 01:57:54.048 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:55 compute-0 ceph-mon[192821]: pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:57:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:57:56 compute-0 nova_compute[351485]: 2025-12-03 01:57:56.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:57 compute-0 ceph-mon[192821]: pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:57:57 compute-0 nova_compute[351485]: 2025-12-03 01:57:57.855 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:57:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.580 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.608 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.608 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:57:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:57:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665492698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.145 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.273 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.274 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.275 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.285 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.286 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.287 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:57:59 compute-0 ceph-mon[192821]: pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:57:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3665492698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:57:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:57:59.623 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:57:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:57:59.624 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:57:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:57:59.624 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:57:59 compute-0 podman[158098]: time="2025-12-03T01:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:57:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:57:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8626 "" "Go-http-client/1.1"
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.844 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.846 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3750MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.846 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.846 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.924 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.001 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:58:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:58:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834711140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.567 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.580 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.601 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.605 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:58:01 compute-0 ceph-mon[192821]: pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:01 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1834711140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:58:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:02 compute-0 nova_compute[351485]: 2025-12-03 01:58:02.604 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:02 compute-0 nova_compute[351485]: 2025-12-03 01:58:02.604 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:58:02 compute-0 nova_compute[351485]: 2025-12-03 01:58:02.857 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:03 compute-0 nova_compute[351485]: 2025-12-03 01:58:03.182 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:58:03 compute-0 nova_compute[351485]: 2025-12-03 01:58:03.183 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:58:03 compute-0 nova_compute[351485]: 2025-12-03 01:58:03.184 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 01:58:03 compute-0 ceph-mon[192821]: pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:04 compute-0 nova_compute[351485]: 2025-12-03 01:58:04.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:05 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.234 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.272 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.273 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.274 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.275 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.277 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.278 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:05 compute-0 sshd-session[419664]: Invalid user redmine from 45.78.219.140 port 54074
Dec 03 01:58:05 compute-0 ceph-mon[192821]: pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:07 compute-0 ceph-mon[192821]: pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 03 01:58:07 compute-0 nova_compute[351485]: 2025-12-03 01:58:07.860 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:07 compute-0 podman[419668]: 2025-12-03 01:58:07.883454104 +0000 UTC m=+0.125965547 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:58:07 compute-0 podman[419669]: 2025-12-03 01:58:07.905568734 +0000 UTC m=+0.143425424 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:58:07 compute-0 podman[419667]: 2025-12-03 01:58:07.905962295 +0000 UTC m=+0.149134197 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 03 01:58:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:08 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 03 01:58:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:09 compute-0 nova_compute[351485]: 2025-12-03 01:58:09.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:09 compute-0 ceph-mon[192821]: pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:10 compute-0 nova_compute[351485]: 2025-12-03 01:58:10.245 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:10 compute-0 nova_compute[351485]: 2025-12-03 01:58:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:10 compute-0 nova_compute[351485]: 2025-12-03 01:58:10.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:58:11 compute-0 ceph-mon[192821]: pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:12 compute-0 nova_compute[351485]: 2025-12-03 01:58:12.866 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:13 compute-0 ceph-mon[192821]: pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:13 compute-0 podman[419723]: 2025-12-03 01:58:13.877506514 +0000 UTC m=+0.129495358 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:58:14 compute-0 nova_compute[351485]: 2025-12-03 01:58:14.067 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:15 compute-0 ceph-mon[192821]: pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:16 compute-0 podman[419743]: 2025-12-03 01:58:16.890362464 +0000 UTC m=+0.135953540 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_id=edpm, release-0.7.12=, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., version=9.4, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64)
Dec 03 01:58:17 compute-0 ceph-mon[192821]: pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:17 compute-0 nova_compute[351485]: 2025-12-03 01:58:17.865 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:19 compute-0 nova_compute[351485]: 2025-12-03 01:58:19.072 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:19 compute-0 ceph-mon[192821]: pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:20 compute-0 podman[419764]: 2025-12-03 01:58:20.85395208 +0000 UTC m=+0.110583189 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, release=1755695350, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal)
Dec 03 01:58:20 compute-0 podman[419765]: 2025-12-03 01:58:20.891481779 +0000 UTC m=+0.129963161 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 01:58:20 compute-0 podman[419763]: 2025-12-03 01:58:20.912121116 +0000 UTC m=+0.165408510 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 03 01:58:21 compute-0 podman[419825]: 2025-12-03 01:58:21.07523213 +0000 UTC m=+0.137673851 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 01:58:21 compute-0 ceph-mon[192821]: pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:22 compute-0 nova_compute[351485]: 2025-12-03 01:58:22.869 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:23 compute-0 ceph-mon[192821]: pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:24 compute-0 nova_compute[351485]: 2025-12-03 01:58:24.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:26 compute-0 ceph-mon[192821]: pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:26 compute-0 sshd-session[419850]: Invalid user localhost from 146.190.144.138 port 51290
Dec 03 01:58:26 compute-0 sshd-session[419850]: Received disconnect from 146.190.144.138 port 51290:11: Bye Bye [preauth]
Dec 03 01:58:26 compute-0 sshd-session[419850]: Disconnected from invalid user localhost 146.190.144.138 port 51290 [preauth]
Dec 03 01:58:27 compute-0 ceph-mon[192821]: pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:27 compute-0 nova_compute[351485]: 2025-12-03 01:58:27.873 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:28 compute-0 sudo[419852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:28 compute-0 sudo[419852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:28 compute-0 sudo[419852]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:28 compute-0 sudo[419877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:58:28 compute-0 sudo[419877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:28 compute-0 sudo[419877]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:58:28
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:58:28 compute-0 sudo[419902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:28 compute-0 sudo[419902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:28 compute-0 sudo[419902]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:28 compute-0 sudo[419927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:58:28 compute-0 sudo[419927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:58:29 compute-0 nova_compute[351485]: 2025-12-03 01:58:29.080 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:29 compute-0 sudo[419927]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:58:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 48b916e5-72d2-412a-995f-f4a40c736671 does not exist
Dec 03 01:58:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 826cef08-43ff-40be-b71b-fbef29d88a6f does not exist
Dec 03 01:58:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d556bf48-2086-409c-8d42-8672436a7e4e does not exist
Dec 03 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:58:29 compute-0 sudo[419982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:29 compute-0 sudo[419982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:29 compute-0 sudo[419982]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:29 compute-0 sudo[420009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:58:29 compute-0 sudo[420009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:29 compute-0 sudo[420009]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:29 compute-0 sudo[420034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:29 compute-0 sudo[420034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:29 compute-0 sudo[420034]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:29 compute-0 sudo[420059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:58:29 compute-0 sudo[420059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:30 compute-0 podman[158098]: time="2025-12-03T01:58:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:58:30 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:58:30 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8632 "" "Go-http-client/1.1"
Dec 03 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.497971569 +0000 UTC m=+0.100340218 container create 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.453429471 +0000 UTC m=+0.055798180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:58:30 compute-0 systemd[1]: Started libpod-conmon-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope.
Dec 03 01:58:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.650689916 +0000 UTC m=+0.253058575 container init 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.670804599 +0000 UTC m=+0.273173248 container start 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.678427666 +0000 UTC m=+0.280796305 container attach 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:58:30 compute-0 objective_carver[420138]: 167 167
Dec 03 01:58:30 compute-0 systemd[1]: libpod-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope: Deactivated successfully.
Dec 03 01:58:30 compute-0 conmon[420138]: conmon 8921a87896fce64f7304 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope/container/memory.events
Dec 03 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.686157696 +0000 UTC m=+0.288526335 container died 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 01:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-13004c528b2ac976a4f5923f3611ffb2119d4f27964d7fd9dc89f841a916ac60-merged.mount: Deactivated successfully.
Dec 03 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.759331749 +0000 UTC m=+0.361700368 container remove 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:58:30 compute-0 systemd[1]: libpod-conmon-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope: Deactivated successfully.
Dec 03 01:58:30 compute-0 sshd-session[419983]: Invalid user userroot from 103.146.202.174 port 35908
Dec 03 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.033474754 +0000 UTC m=+0.084871438 container create 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 01:58:31 compute-0 sshd-session[419983]: Received disconnect from 103.146.202.174 port 35908:11: Bye Bye [preauth]
Dec 03 01:58:31 compute-0 sshd-session[419983]: Disconnected from invalid user userroot 103.146.202.174 port 35908 [preauth]
Dec 03 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:30.998639982 +0000 UTC m=+0.050036666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:58:31 compute-0 systemd[1]: Started libpod-conmon-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope.
Dec 03 01:58:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.252830467 +0000 UTC m=+0.304227181 container init 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.283610794 +0000 UTC m=+0.335007458 container start 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 03 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.291667733 +0000 UTC m=+0.343064417 container attach 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec 03 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:58:31 compute-0 ceph-mon[192821]: pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:58:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:58:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 01:58:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:32 compute-0 ecstatic_grothendieck[420177]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:58:32 compute-0 ecstatic_grothendieck[420177]: --> relative data size: 1.0
Dec 03 01:58:32 compute-0 ecstatic_grothendieck[420177]: --> All data devices are unavailable
Dec 03 01:58:32 compute-0 systemd[1]: libpod-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope: Deactivated successfully.
Dec 03 01:58:32 compute-0 podman[420161]: 2025-12-03 01:58:32.618606349 +0000 UTC m=+1.670003033 container died 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:58:32 compute-0 systemd[1]: libpod-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope: Consumed 1.271s CPU time.
Dec 03 01:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c-merged.mount: Deactivated successfully.
Dec 03 01:58:32 compute-0 podman[420161]: 2025-12-03 01:58:32.729680781 +0000 UTC m=+1.781077435 container remove 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:58:32 compute-0 systemd[1]: libpod-conmon-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope: Deactivated successfully.
Dec 03 01:58:32 compute-0 sudo[420059]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:32 compute-0 nova_compute[351485]: 2025-12-03 01:58:32.874 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:32 compute-0 sudo[420219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:32 compute-0 sudo[420219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:32 compute-0 sudo[420219]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:33 compute-0 sudo[420244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:58:33 compute-0 sudo[420244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:33 compute-0 sudo[420244]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:33 compute-0 sudo[420269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:33 compute-0 sudo[420269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:33 compute-0 sudo[420269]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:33 compute-0 sudo[420294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:58:33 compute-0 sudo[420294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:33 compute-0 ceph-mon[192821]: pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:33 compute-0 podman[420358]: 2025-12-03 01:58:33.943899658 +0000 UTC m=+0.081973155 container create 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:33.913455271 +0000 UTC m=+0.051528848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:58:34 compute-0 systemd[1]: Started libpod-conmon-4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6.scope.
Dec 03 01:58:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:58:34 compute-0 nova_compute[351485]: 2025-12-03 01:58:34.085 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.094860886 +0000 UTC m=+0.232934393 container init 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 03 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.112464217 +0000 UTC m=+0.250537744 container start 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.124561101 +0000 UTC m=+0.262634628 container attach 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 01:58:34 compute-0 eager_perlman[420373]: 167 167
Dec 03 01:58:34 compute-0 systemd[1]: libpod-4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6.scope: Deactivated successfully.
Dec 03 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.133685161 +0000 UTC m=+0.271758658 container died 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f14cf6abee0b5088d4be5b7ac072d7836dad75ed175dd289afa0ecb4319eacdb-merged.mount: Deactivated successfully.
Dec 03 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.187103842 +0000 UTC m=+0.325177339 container remove 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 01:58:34 compute-0 systemd[1]: libpod-conmon-4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6.scope: Deactivated successfully.
Dec 03 01:58:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.458712244 +0000 UTC m=+0.083096897 container create 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.425052515 +0000 UTC m=+0.049437198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:58:34 compute-0 systemd[1]: Started libpod-conmon-917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041.scope.
Dec 03 01:58:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.681269389 +0000 UTC m=+0.305654042 container init 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.700214699 +0000 UTC m=+0.324599342 container start 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.710342226 +0000 UTC m=+0.334726859 container attach 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:58:35 compute-0 ceph-mon[192821]: pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:35 compute-0 zen_gould[420412]: {
Dec 03 01:58:35 compute-0 zen_gould[420412]:     "0": [
Dec 03 01:58:35 compute-0 zen_gould[420412]:         {
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "devices": [
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "/dev/loop3"
Dec 03 01:58:35 compute-0 zen_gould[420412]:             ],
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_name": "ceph_lv0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_size": "21470642176",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "name": "ceph_lv0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "tags": {
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cluster_name": "ceph",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.crush_device_class": "",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.encrypted": "0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osd_id": "0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.type": "block",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.vdo": "0"
Dec 03 01:58:35 compute-0 zen_gould[420412]:             },
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "type": "block",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "vg_name": "ceph_vg0"
Dec 03 01:58:35 compute-0 zen_gould[420412]:         }
Dec 03 01:58:35 compute-0 zen_gould[420412]:     ],
Dec 03 01:58:35 compute-0 zen_gould[420412]:     "1": [
Dec 03 01:58:35 compute-0 zen_gould[420412]:         {
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "devices": [
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "/dev/loop4"
Dec 03 01:58:35 compute-0 zen_gould[420412]:             ],
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_name": "ceph_lv1",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_size": "21470642176",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "name": "ceph_lv1",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "tags": {
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cluster_name": "ceph",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.crush_device_class": "",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.encrypted": "0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osd_id": "1",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.type": "block",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.vdo": "0"
Dec 03 01:58:35 compute-0 zen_gould[420412]:             },
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "type": "block",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "vg_name": "ceph_vg1"
Dec 03 01:58:35 compute-0 zen_gould[420412]:         }
Dec 03 01:58:35 compute-0 zen_gould[420412]:     ],
Dec 03 01:58:35 compute-0 zen_gould[420412]:     "2": [
Dec 03 01:58:35 compute-0 zen_gould[420412]:         {
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "devices": [
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "/dev/loop5"
Dec 03 01:58:35 compute-0 zen_gould[420412]:             ],
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_name": "ceph_lv2",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_size": "21470642176",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "name": "ceph_lv2",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "tags": {
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.cluster_name": "ceph",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.crush_device_class": "",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.encrypted": "0",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osd_id": "2",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.type": "block",
Dec 03 01:58:35 compute-0 zen_gould[420412]:                 "ceph.vdo": "0"
Dec 03 01:58:35 compute-0 zen_gould[420412]:             },
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "type": "block",
Dec 03 01:58:35 compute-0 zen_gould[420412]:             "vg_name": "ceph_vg2"
Dec 03 01:58:35 compute-0 zen_gould[420412]:         }
Dec 03 01:58:35 compute-0 zen_gould[420412]:     ]
Dec 03 01:58:35 compute-0 zen_gould[420412]: }
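(The JSON block printed by the zen_gould container above is, by its shape, the report of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing that OSD, with the LVM tags cephadm uses for discovery. A minimal Python sketch of consuming it, assuming the report has been captured to a file — lvm_list.json is an illustrative name, not something cephadm writes itself:

    import json

    # Assumed capture of the `ceph-volume lvm list --format json` report
    # shown in the log above.
    with open("lvm_list.json") as f:
        lvm_report = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each value is a list of
    # logical volumes belonging to that OSD.
    for osd_id, lvs in sorted(lvm_report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")

Against the report above this prints one line per OSD, e.g. osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.)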
Dec 03 01:58:35 compute-0 systemd[1]: libpod-917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041.scope: Deactivated successfully.
Dec 03 01:58:35 compute-0 podman[420396]: 2025-12-03 01:58:35.56617567 +0000 UTC m=+1.190560313 container died 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 01:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c-merged.mount: Deactivated successfully.
Dec 03 01:58:35 compute-0 podman[420396]: 2025-12-03 01:58:35.659891968 +0000 UTC m=+1.284276581 container remove 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:58:35 compute-0 systemd[1]: libpod-conmon-917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041.scope: Deactivated successfully.
Dec 03 01:58:35 compute-0 sudo[420294]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:35 compute-0 sudo[420432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:35 compute-0 sudo[420432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:35 compute-0 sudo[420432]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:35 compute-0 sudo[420457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:58:35 compute-0 sudo[420457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:35 compute-0 sudo[420457]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:36 compute-0 sudo[420482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:36 compute-0 sudo[420482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:36 compute-0 sudo[420482]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:36 compute-0 sudo[420507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:58:36 compute-0 sudo[420507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.765910135 +0000 UTC m=+0.067481373 container create 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:58:36 compute-0 systemd[1]: Started libpod-conmon-843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06.scope.
Dec 03 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.739204474 +0000 UTC m=+0.040775702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:58:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.891090888 +0000 UTC m=+0.192662146 container init 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.910375797 +0000 UTC m=+0.211947005 container start 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.914914506 +0000 UTC m=+0.216485734 container attach 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 01:58:36 compute-0 eager_raman[420586]: 167 167
Dec 03 01:58:36 compute-0 systemd[1]: libpod-843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06.scope: Deactivated successfully.
Dec 03 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.922099261 +0000 UTC m=+0.223670469 container died 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3234b79b61a9e1992bf15f7bea7b168ea6877805c9b3f2117e664b23922dcf4-merged.mount: Deactivated successfully.
Dec 03 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.978086075 +0000 UTC m=+0.279657293 container remove 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 01:58:37 compute-0 systemd[1]: libpod-conmon-843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06.scope: Deactivated successfully.
Dec 03 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.272755804 +0000 UTC m=+0.114411469 container create c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.224912542 +0000 UTC m=+0.066568277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:58:37 compute-0 systemd[1]: Started libpod-conmon-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope.
Dec 03 01:58:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.402449476 +0000 UTC m=+0.244105151 container init c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.424775321 +0000 UTC m=+0.266430986 container start c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.42999235 +0000 UTC m=+0.271648025 container attach c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 01:58:37 compute-0 ceph-mon[192821]: pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:37 compute-0 nova_compute[351485]: 2025-12-03 01:58:37.876 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
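(The pg_autoscaler pass above is reproducible from the logged numbers: each "pg target" is the pool's capacity ratio times its bias times the cluster PG budget, and with the default mon_target_pg_per_osd of 100 and the 3 OSDs on this host that budget is 300 — an assumption, but one that matches every line exactly. A quick check in Python:

    # Reproduce the pg_autoscaler "pg target" values logged above, assuming
    # the default mon_target_pg_per_osd=100 and 3 OSDs (budget of 300 PGs).
    PG_BUDGET = 100 * 3

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0011048885483818454, 1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * PG_BUDGET)

    # Prints 0.0021557249951162337, 0.33146656451455364, 0.07600361398710685
    # and 0.0006104707950771635, matching the log. The autoscaler then
    # quantizes the target to a power of two and leaves pg_num unchanged
    # unless the difference crosses its adjustment threshold, which is why
    # every pool above stays at its current value.)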
Dec 03 01:58:38 compute-0 charming_lamport[420625]: {
Dec 03 01:58:38 compute-0 charming_lamport[420625]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "osd_id": 2,
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "type": "bluestore"
Dec 03 01:58:38 compute-0 charming_lamport[420625]:     },
Dec 03 01:58:38 compute-0 charming_lamport[420625]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "osd_id": 1,
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "type": "bluestore"
Dec 03 01:58:38 compute-0 charming_lamport[420625]:     },
Dec 03 01:58:38 compute-0 charming_lamport[420625]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "osd_id": 0,
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:58:38 compute-0 charming_lamport[420625]:         "type": "bluestore"
Dec 03 01:58:38 compute-0 charming_lamport[420625]:     }
Dec 03 01:58:38 compute-0 charming_lamport[420625]: }
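(This second report comes from the `ceph-volume ... raw list --format json` invocation logged at 01:58:36 via cephadm. Unlike the lvm report it is keyed by OSD uuid rather than OSD id, and the device appears as the device-mapper path of the LV. A companion sketch under the same assumption of a captured file, raw_list.json being again an illustrative name:

    import json

    # Assumed capture of the `ceph-volume raw list --format json` report
    # shown in the log above.
    with open("raw_list.json") as f:
        raw_report = json.load(f)

    # Keys are OSD uuids; re-index by the numeric osd_id for readability.
    by_id = {osd["osd_id"]: osd for osd in raw_report.values()}
    for osd_id in sorted(by_id):
        osd = by_id[osd_id]
        print(f"osd.{osd_id} ({osd['type']}): {osd['device']} "
              f"fsid={osd['ceph_fsid']}")

Cross-referencing the two reports, osd_uuid here equals ceph.osd_fsid in the lvm tags, and /dev/mapper/ceph_vg0-ceph_lv0 is the same volume as /dev/ceph_vg0/ceph_lv0 above.)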
Dec 03 01:58:38 compute-0 podman[420609]: 2025-12-03 01:58:38.718754528 +0000 UTC m=+1.560410263 container died c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 01:58:38 compute-0 systemd[1]: libpod-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope: Deactivated successfully.
Dec 03 01:58:38 compute-0 systemd[1]: libpod-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope: Consumed 1.272s CPU time.
Dec 03 01:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014-merged.mount: Deactivated successfully.
Dec 03 01:58:38 compute-0 podman[420609]: 2025-12-03 01:58:38.82103477 +0000 UTC m=+1.662690425 container remove c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:58:38 compute-0 systemd[1]: libpod-conmon-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope: Deactivated successfully.
Dec 03 01:58:38 compute-0 sudo[420507]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:58:38 compute-0 podman[420658]: 2025-12-03 01:58:38.869672424 +0000 UTC m=+0.119876983 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 03 01:58:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:58:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:58:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 46caf98d-7372-4f0f-81b1-1d2f7673d44f does not exist
Dec 03 01:58:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 90d5c649-00da-4989-9d52-48d3b2316891 does not exist
Dec 03 01:58:38 compute-0 podman[420667]: 2025-12-03 01:58:38.886925346 +0000 UTC m=+0.120751109 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:58:38 compute-0 podman[420660]: 2025-12-03 01:58:38.893777261 +0000 UTC m=+0.144624729 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 01:58:38 compute-0 sudo[420727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:58:38 compute-0 sudo[420727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:38 compute-0 sudo[420727]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:39 compute-0 sudo[420752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:58:39 compute-0 sudo[420752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:58:39 compute-0 sudo[420752]: pam_unix(sudo:session): session closed for user root
Dec 03 01:58:39 compute-0 nova_compute[351485]: 2025-12-03 01:58:39.090 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:39 compute-0 ceph-mon[192821]: pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:58:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:58:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:41 compute-0 ceph-mon[192821]: pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:42 compute-0 nova_compute[351485]: 2025-12-03 01:58:42.882 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:43 compute-0 ceph-mon[192821]: pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:44 compute-0 nova_compute[351485]: 2025-12-03 01:58:44.095 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:44 compute-0 podman[420777]: 2025-12-03 01:58:44.843316424 +0000 UTC m=+0.124842505 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:58:45 compute-0 ceph-mon[192821]: pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:58:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292056608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:58:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:58:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292056608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:58:47 compute-0 nova_compute[351485]: 2025-12-03 01:58:47.885 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:47 compute-0 podman[420796]: 2025-12-03 01:58:47.898294373 +0000 UTC m=+0.145322928 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, version=9.4, architecture=x86_64, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 01:58:47 compute-0 ceph-mon[192821]: pgmap v1326: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3292056608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:58:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3292056608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:58:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:49 compute-0 nova_compute[351485]: 2025-12-03 01:58:49.099 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:49 compute-0 ceph-mon[192821]: pgmap v1327: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:51 compute-0 podman[420818]: 2025-12-03 01:58:51.860387227 +0000 UTC m=+0.095688445 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, distribution-scope=public)
Dec 03 01:58:51 compute-0 podman[420819]: 2025-12-03 01:58:51.879301625 +0000 UTC m=+0.107178602 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 01:58:51 compute-0 podman[420820]: 2025-12-03 01:58:51.891576005 +0000 UTC m=+0.106709779 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 03 01:58:51 compute-0 podman[420817]: 2025-12-03 01:58:51.907625552 +0000 UTC m=+0.151007810 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 01:58:51 compute-0 ceph-mon[192821]: pgmap v1328: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:52 compute-0 nova_compute[351485]: 2025-12-03 01:58:52.890 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:53 compute-0 ceph-mon[192821]: pgmap v1329: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:54 compute-0 nova_compute[351485]: 2025-12-03 01:58:54.103 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:55 compute-0 ceph-mon[192821]: pgmap v1330: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:57 compute-0 nova_compute[351485]: 2025-12-03 01:58:57.892 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:58:57 compute-0 ceph-mon[192821]: pgmap v1331: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:58:59 compute-0 ceph-mon[192821]: pgmap v1332: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.108 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.580 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.614 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.615 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.617 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.617 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:58:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:58:59.625 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:58:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:58:59.626 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:58:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:58:59.626 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:58:59 compute-0 podman[158098]: time="2025-12-03T01:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:58:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:58:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8634 "" "Go-http-client/1.1"
Dec 03 01:59:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:59:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000760888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.165 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:59:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3000760888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.298 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.298 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.299 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.308 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.309 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.309 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 01:59:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.831 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.832 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3785MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.833 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.833 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.903 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.903 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.903 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.904 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.979 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 01:59:01 compute-0 ceph-mon[192821]: pgmap v1333: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:59:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 01:59:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495191346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.483 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.493 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.519 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.521 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.521 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:59:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1495191346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 01:59:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.519 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.519 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.520 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.853 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.854 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.855 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.857 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.893 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:03 compute-0 ceph-mon[192821]: pgmap v1334: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.111 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.467 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.485 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.486 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.487 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.488 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.489 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:05 compute-0 ceph-mon[192821]: pgmap v1335: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:05 compute-0 nova_compute[351485]: 2025-12-03 01:59:05.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:07 compute-0 ceph-mon[192821]: pgmap v1336: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:07 compute-0 nova_compute[351485]: 2025-12-03 01:59:07.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:07 compute-0 nova_compute[351485]: 2025-12-03 01:59:07.896 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:09 compute-0 nova_compute[351485]: 2025-12-03 01:59:09.116 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:09 compute-0 ceph-mon[192821]: pgmap v1337: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:09 compute-0 podman[420946]: 2025-12-03 01:59:09.852177915 +0000 UTC m=+0.091012075 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 01:59:09 compute-0 podman[420944]: 2025-12-03 01:59:09.857082403 +0000 UTC m=+0.100721088 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 01:59:09 compute-0 podman[420945]: 2025-12-03 01:59:09.895209707 +0000 UTC m=+0.137448893 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 01:59:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:11 compute-0 ceph-mon[192821]: pgmap v1338: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:11 compute-0 nova_compute[351485]: 2025-12-03 01:59:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:11 compute-0 nova_compute[351485]: 2025-12-03 01:59:11.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 01:59:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:12 compute-0 nova_compute[351485]: 2025-12-03 01:59:12.901 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:13 compute-0 ceph-mon[192821]: pgmap v1339: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.520210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153520264, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 251, "total_data_size": 3483105, "memory_usage": 3531680, "flush_reason": "Manual Compaction"}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153548802, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3428285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25530, "largest_seqno": 27580, "table_properties": {"data_size": 3418745, "index_size": 6098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18680, "raw_average_key_size": 20, "raw_value_size": 3400065, "raw_average_value_size": 3659, "num_data_blocks": 270, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726918, "oldest_key_time": 1764726918, "file_creation_time": 1764727153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 28698 microseconds, and 13875 cpu microseconds.
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.548903) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3428285 bytes OK
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.548931) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.551913) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.551938) EVENT_LOG_v1 {"time_micros": 1764727153551931, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.551963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3474521, prev total WAL file size 3474521, number of live WAL files 2.
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.554454) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3347KB)], [59(7308KB)]
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153554629, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10911910, "oldest_snapshot_seqno": -1}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5030 keys, 9140224 bytes, temperature: kUnknown
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153651583, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9140224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9104675, "index_size": 21871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 124863, "raw_average_key_size": 24, "raw_value_size": 9011761, "raw_average_value_size": 1791, "num_data_blocks": 907, "num_entries": 5030, "num_filter_entries": 5030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.651856) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9140224 bytes
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.654796) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.5 rd, 94.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5544, records dropped: 514 output_compression: NoCompression
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.654832) EVENT_LOG_v1 {"time_micros": 1764727153654815, "job": 32, "event": "compaction_finished", "compaction_time_micros": 97027, "compaction_time_cpu_micros": 46051, "output_level": 6, "num_output_files": 1, "total_output_size": 9140224, "num_input_records": 5544, "num_output_records": 5030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153656321, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153659362, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.553891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 01:59:14 compute-0 nova_compute[351485]: 2025-12-03 01:59:14.121 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:15 compute-0 ceph-mon[192821]: pgmap v1340: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:15 compute-0 podman[421006]: 2025-12-03 01:59:15.881297073 +0000 UTC m=+0.132791292 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 01:59:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:17 compute-0 ceph-mon[192821]: pgmap v1341: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:17 compute-0 nova_compute[351485]: 2025-12-03 01:59:17.903 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:18 compute-0 podman[421025]: 2025-12-03 01:59:18.859613216 +0000 UTC m=+0.117554832 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc.)
Dec 03 01:59:19 compute-0 nova_compute[351485]: 2025-12-03 01:59:19.125 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.504 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.505 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
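[Editor's note: the two manager.py messages above show the agent dispatching every pollster from the [pollsters] source onto a fixed-size thread pool; with only one worker thread the whole cycle serializes. A minimal sketch of that dispatch pattern follows; POLLSTERS and poll_one are hypothetical stand-ins, not ceilometer's actual API.]

    # Illustrative sketch only (editor's addition): a fixed-size worker pool
    # serializes polling when pollsters outnumber threads, as the
    # manager.py:253/262 messages above describe.
    from concurrent.futures import ThreadPoolExecutor

    POLLSTERS = ["memory.usage", "network.outgoing.packets",
                 "network.incoming.bytes.delta", "disk.device.capacity"]

    def poll_one(meter: str) -> str:
        # A real pollster would query libvirt and yield samples; returning
        # the meter name is enough to show the dispatch shape.
        return meter

    # max_workers=1 matches the logged "Processing ... with [1] threads":
    # the pollsters run strictly one after another, lengthening the cycle.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for meter in executor.map(poll_one, POLLSTERS):
            print("finished polling", meter)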
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.514 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.518 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
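[Editor's note: the two "instance data:" records above are what local_instances discovery hands to each pollster. Below is a small, self-contained sketch of reading one such record; the describe helper is hypothetical, but the field names are taken verbatim from the log.]

    # Hypothetical helper (editor's addition) for a discovery record shaped
    # like the "instance data:" lines above; only a few fields are kept.
    instance = {
        "id": "9182286b-5a08-4961-b4bb-c0e2f05746f7",
        "name": "test_0",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1},
        "OS-EXT-STS:vm_state": "running",
    }

    def describe(inst: dict) -> str:
        flavor = inst["flavor"]
        return (f"{inst['name']} ({inst['id'][:8]}): {flavor['name']}, "
                f"{flavor['vcpus']} vCPU, {flavor['ram']} MiB RAM, "
                f"state={inst['OS-EXT-STS:vm_state']}")

    print(describe(instance))  # test_0 (9182286b): m1.small, 1 vCPU, ...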
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.519 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T01:59:19.520153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceph-mon[192821]: pgmap v1342: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.560 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.596 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
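[Editor's note: the memory.usage samples above (49.16015625 and 49.0390625) are megabytes for these 512 MB m1.small guests. A sketch of the arithmetic, assuming libvirt-style memoryStats counters in KiB; the stats values are fabricated to reproduce the first sample.]

    # Editor's sketch, assuming libvirt-style memoryStats counters in KiB.
    # memory.usage is reported in MB, matching the ~49 MB samples above.
    def memory_usage_mb(stats: dict) -> float:
        # Guest-visible memory minus what the guest reports as unused.
        used_kib = stats["available"] - stats["unused"]
        return used_kib / 1024.0  # KiB -> MB

    stats = {"available": 497676, "unused": 447336}  # hypothetical values
    print(memory_usage_mb(stats))  # 49.16015625, the first sample logged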
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.599 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T01:59:19.599226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.605 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.612 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T01:59:19.613750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.614 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.614 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
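[Editor's note: *.delta meters report the change in a cumulative counter between polling cycles, which is why both instances log 0 here while the cumulative network.incoming.bytes samples further down are non-zero. A minimal cache-and-subtract sketch follows; the cache layout is a stand-in, not ceilometer's internal structure.]

    # Editor's sketch of a "delta" meter: cache the last cumulative counter
    # per instance and report the difference on each cycle.
    _last_seen: dict[str, int] = {}

    def incoming_bytes_delta(instance_id: str, rx_total: int) -> int:
        previous = _last_seen.get(instance_id, rx_total)
        _last_seen[instance_id] = rx_total
        return rx_total - previous

    # An unchanged counter across cycles yields delta 0, as logged above.
    print(incoming_bytes_delta("52862152", 4849))  # first cycle -> 0
    print(incoming_bytes_delta("52862152", 4849))  # no traffic -> 0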
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T01:59:19.616306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.617 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.618 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.620 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T01:59:19.620015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.620 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.621 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T01:59:19.622480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.623 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.623 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.625 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T01:59:19.624964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.656 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.657 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.657 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.690 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.692 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.692 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
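[Editor's note: per-device disk meters emit one sample per block device, so each instance logs three disk.device.capacity values above: two 1 GiB disks matching the flavor's root and ephemeral disks, plus a small third device (583680 / 485376 bytes, plausibly a config drive). A sketch of that fan-out; device names and the sample tuple are hypothetical.]

    # Editor's sketch: fan one meter out to one sample per block device.
    GIB = 1024 ** 3
    devices = {"vda": GIB, "vdb": GIB, "vdc": 583680}  # bytes, per the log

    def capacity_samples(instance_id: str, devs: dict) -> list:
        return [(instance_id, dev, "disk.device.capacity", size)
                for dev, size in devs.items()]

    for sample in capacity_samples("52862152-12c7-4236-89c3-67750ecbed7a",
                                   devices):
        print(sample)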
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.694 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T01:59:19.695441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.775 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.776 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.777 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.869 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.870 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.871 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.872 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T01:59:19.873419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.874 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.874 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T01:59:19.876202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.877 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.877 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.878 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.878 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.878 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.880 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T01:59:19.881316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.882 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.882 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.883 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.884 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.884 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T01:59:19.886815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.887 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.887 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
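[Editor's note: power.state samples carry libvirt's numeric domain state, so volume 1 for both instances means VIR_DOMAIN_RUNNING, consistent with the 'running' vm_state in the discovery records above. The lookup table below follows libvirt's virDomainState enum.]

    # libvirt virDomainState values; power.state volume 1 == running.
    DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(DOMAIN_STATE[1])  # "running"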
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.888 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T01:59:19.889353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.890 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.890 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.890 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.891 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.891 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.892 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
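The dozen lines above form one complete pollster cycle: resource discovery, a coordination check (none of these pollsters sits in a coordinated source, hence the [None] hashring), a heartbeat update, one sample per device, and a closing INFO line. A minimal sketch of that control flow, using invented names rather than ceilometer's real classes:

    import datetime

    def run_pollster(name, discover, get_stats, heartbeats):
        resources = discover()              # "Executing discovery process ..."
        # Coordination check elided: with no coordinated source configured,
        # the agent simply polls every locally discovered instance.
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
        samples = []
        for res in resources:
            for volume in get_stats(res):   # "<uuid>/<meter> volume: N"
                samples.append((res, name, volume))
        return samples                      # "Finished polling pollster ..."

    beats = {}
    fake_stats = lambda res: [1073741824, 1073741824, 583680]
    run_pollster("disk.device.usage",
                 lambda: ["52862152-12c7-4236-89c3-67750ecbed7a"],
                 fake_stats, beats)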
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.894 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.894 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.895 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.895 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.896 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.896 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T01:59:19.893863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T01:59:19.898685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.899 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6964190045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.899 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.900 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.900 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.900 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.901 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T01:59:19.903863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.904 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.904 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.905 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.905 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.906 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.906 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T01:59:19.908045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T01:59:19.909653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 154220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 36610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
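The cpu meter polled just above is cumulative guest CPU time in nanoseconds (about 154.22 s consumed so far by instance 52862152..., 36.61 s by 9182286b...), so any utilisation figure needs two readings. A back-of-the-envelope conversion; the previous reading, the 10 s interval and the 1-vCPU flavor are illustrative assumptions, not values from this log:

    NS_PER_S = 1e9

    def cpu_util_pct(prev_ns, cur_ns, interval_s, vcpus):
        """Average CPU utilisation (%) between two cumulative readings."""
        return 100.0 * (cur_ns - prev_ns) / (interval_s * vcpus * NS_PER_S)

    # Assuming the previous poll, 10 s earlier, read 154_100_000_000 ns:
    print(cpu_util_pct(154_100_000_000, 154_220_000_000, 10, 1))  # -> 1.2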
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T01:59:19.911299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 4826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T01:59:19.912704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T01:59:19.914294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T01:59:19.917052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.919 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T01:59:19.919060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T01:59:19.920624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
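network.outgoing.bytes.rate is skipped because its discovery turned up nothing it had not already polled this cycle. When it does run, a *.rate meter is in effect a difference quotient over two successive cumulative readings. The previous counter value below is an assumption chosen to match the .delta sample of 140 seen above, not a value read from this log:

    def bytes_rate(prev_bytes, cur_bytes, interval_s):
        """Average bytes/s between two cumulative counter readings."""
        return (cur_bytes - prev_bytes) / interval_s if interval_s > 0 else 0.0

    # network.outgoing.bytes read 4826; assuming 4686 ten seconds earlier:
    print(bytes_rate(4686, 4826, 10))   # -> 14.0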
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 01:59:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:21 compute-0 ceph-mon[192821]: pgmap v1343: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:22 compute-0 podman[421048]: 2025-12-03 01:59:22.862883846 +0000 UTC m=+0.105956285 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:59:22 compute-0 podman[421047]: 2025-12-03 01:59:22.884774343 +0000 UTC m=+0.133670316 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41)
Dec 03 01:59:22 compute-0 podman[421054]: 2025-12-03 01:59:22.885446142 +0000 UTC m=+0.115949087 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 01:59:22 compute-0 podman[421046]: 2025-12-03 01:59:22.888431926 +0000 UTC m=+0.152585369 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
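The four health_status records above are podman running each container's configured healthcheck (the 'test' command in config_data, mounted at the 'mount' path) and reporting healthy with a zero failing streak. On a podman 4.9 host the same check can be exercised by hand, for example:

    podman healthcheck run node_exporter && echo healthy
    podman inspect --format '{{.State.Health.Status}}' node_exporter

(the inspect field is named .State.Healthcheck on some older podman releases).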
Dec 03 01:59:22 compute-0 nova_compute[351485]: 2025-12-03 01:59:22.905 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
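The recurring "[POLLIN] on fd 24" DEBUG lines from nova_compute are the OVSDB IDL's event loop (ovs/poller.py) logging that its database socket became readable before it re-runs the IDL. The wakeup pattern, reduced to the standard library as a stand-in for ovsdbapp's actual loop:

    import selectors, socket

    sel = selectors.DefaultSelector()
    a, b = socket.socketpair()          # stand-in for the OVSDB connection fd
    sel.register(a, selectors.EVENT_READ)
    b.send(b"update")                   # peer writes; fd becomes readable
    for key, events in sel.select(timeout=1):
        print(f"[POLLIN] on fd {key.fd}")   # what __log_wakeup reports
        key.fileobj.recv(4096)          # drain, then process the update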
Dec 03 01:59:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:23 compute-0 ceph-mon[192821]: pgmap v1344: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:24 compute-0 nova_compute[351485]: 2025-12-03 01:59:24.130 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:25 compute-0 ceph-mon[192821]: pgmap v1345: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:27 compute-0 ceph-mon[192821]: pgmap v1346: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:27 compute-0 nova_compute[351485]: 2025-12-03 01:59:27.909 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:59:28
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
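A routine balancer pass: upmap mode, the default 5% misplaced budget, all eleven pools considered, and 0 of the permitted 10 optimisation changes prepared, which is the expected steady state for a cluster sitting at 321/321 active+clean. `ceph balancer status` and `ceph balancer eval` show the same verdict interactively.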
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
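The rbd_support burst above is the TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler periodically re-reading their per-pool schedules for the vms, volumes, backups and images pools (start_after= with no value appears to be an empty load cursor). The corresponding CLI views would be `rbd trash purge schedule ls -R` and `rbd mirror snapshot schedule ls -R`.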
Dec 03 01:59:29 compute-0 nova_compute[351485]: 2025-12-03 01:59:29.134 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:29 compute-0 ceph-mon[192821]: pgmap v1347: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:29 compute-0 podman[158098]: time="2025-12-03T01:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:59:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:59:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
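These two access-log lines are a Go client (most likely the metrics collector behind the health and stats scrapes) querying the libpod REST API over the podman socket: a full container listing, then one-shot stats. The same requests can be reproduced with curl, assuming the default root socket path:

    curl --unix-socket /run/podman/podman.sock \
         'http://d/v4.9.3/libpod/containers/json?all=true'
    curl --unix-socket /run/podman/podman.sock \
         'http://d/v4.9.3/libpod/containers/stats?all=false&stream=false'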
Dec 03 01:59:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 01:59:31 compute-0 ceph-mon[192821]: pgmap v1348: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:32 compute-0 nova_compute[351485]: 2025-12-03 01:59:32.911 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:33 compute-0 ceph-mon[192821]: pgmap v1349: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:34 compute-0 nova_compute[351485]: 2025-12-03 01:59:34.141 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:35 compute-0 ceph-mon[192821]: pgmap v1350: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:37 compute-0 ceph-mon[192821]: pgmap v1351: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:37 compute-0 nova_compute[351485]: 2025-12-03 01:59:37.914 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 01:59:39 compute-0 nova_compute[351485]: 2025-12-03 01:59:39.148 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:39 compute-0 sudo[421129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:39 compute-0 sudo[421129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:39 compute-0 sudo[421129]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:39 compute-0 sudo[421154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:59:39 compute-0 sudo[421154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:39 compute-0 sudo[421154]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:39 compute-0 sudo[421179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:39 compute-0 sudo[421179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:39 compute-0 sudo[421179]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:39 compute-0 sudo[421204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 01:59:39 compute-0 sudo[421204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:39 compute-0 ceph-mon[192821]: pgmap v1352: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:40 compute-0 sudo[421204]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:59:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1a8310ee-4889-494b-9a16-96c42de02887 does not exist
Dec 03 01:59:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 51e167a9-95f3-49ed-a936-70a3f1c1736b does not exist
Dec 03 01:59:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1018ab37-c28e-41ef-9033-744c9d25a660 does not exist
Dec 03 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 01:59:40 compute-0 sudo[421259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:40 compute-0 sudo[421259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:40 compute-0 sudo[421259]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:40 compute-0 sudo[421299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:59:40 compute-0 sudo[421299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:40 compute-0 sudo[421299]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:40 compute-0 podman[421285]: 2025-12-03 01:59:40.864725932 +0000 UTC m=+0.116559914 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 01:59:40 compute-0 podman[421283]: 2025-12-03 01:59:40.865766211 +0000 UTC m=+0.139556591 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 03 01:59:40 compute-0 podman[421284]: 2025-12-03 01:59:40.871675368 +0000 UTC m=+0.143545384 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec 03 01:59:40 compute-0 sudo[421366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:40 compute-0 sudo[421366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:40 compute-0 sudo[421366]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:41 compute-0 sudo[421392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 01:59:41 compute-0 sudo[421392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.56658122 +0000 UTC m=+0.099422152 container create ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec 03 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.519498674 +0000 UTC m=+0.052339696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:59:41 compute-0 systemd[1]: Started libpod-conmon-ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac.scope.
Dec 03 01:59:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.713891479 +0000 UTC m=+0.246732441 container init ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.732092011 +0000 UTC m=+0.264932983 container start ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 01:59:41 compute-0 ceph-mon[192821]: pgmap v1353: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.737677859 +0000 UTC m=+0.270518831 container attach ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:59:41 compute-0 interesting_vaughan[421473]: 167 167
Dec 03 01:59:41 compute-0 systemd[1]: libpod-ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac.scope: Deactivated successfully.
Dec 03 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.750952052 +0000 UTC m=+0.283793044 container died ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 01:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b43dd4f0179b4acbfd98621ca50e452e96411d78c337beb6a6998c507c51f6e-merged.mount: Deactivated successfully.
Dec 03 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.835936466 +0000 UTC m=+0.368777438 container remove ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:59:41 compute-0 systemd[1]: libpod-conmon-ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac.scope: Deactivated successfully.
Dec 03 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.147501701 +0000 UTC m=+0.101303164 container create 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.114049009 +0000 UTC m=+0.067850552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:59:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:59:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 6543 writes, 26K keys, 6543 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 6543 writes, 1250 syncs, 5.23 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 641 writes, 2055 keys, 641 commit groups, 1.0 writes per commit group, ingest: 2.25 MB, 0.00 MB/s
                                            Interval WAL: 641 writes, 259 syncs, 2.47 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 01:59:42 compute-0 systemd[1]: Started libpod-conmon-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope.
Dec 03 01:59:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.312163729 +0000 UTC m=+0.265965192 container init 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.328615942 +0000 UTC m=+0.282417405 container start 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.333110719 +0000 UTC m=+0.286912182 container attach 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:59:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:42 compute-0 nova_compute[351485]: 2025-12-03 01:59:42.915 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:43 compute-0 strange_volhard[421511]: --> passed data devices: 0 physical, 3 LVM
Dec 03 01:59:43 compute-0 strange_volhard[421511]: --> relative data size: 1.0
Dec 03 01:59:43 compute-0 strange_volhard[421511]: --> All data devices are unavailable
Dec 03 01:59:43 compute-0 systemd[1]: libpod-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope: Deactivated successfully.
Dec 03 01:59:43 compute-0 systemd[1]: libpod-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope: Consumed 1.234s CPU time.
Dec 03 01:59:43 compute-0 podman[421496]: 2025-12-03 01:59:43.643656729 +0000 UTC m=+1.597458222 container died 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400-merged.mount: Deactivated successfully.
Dec 03 01:59:43 compute-0 ceph-mon[192821]: pgmap v1354: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:43 compute-0 podman[421496]: 2025-12-03 01:59:43.759137392 +0000 UTC m=+1.712938855 container remove 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 01:59:43 compute-0 systemd[1]: libpod-conmon-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope: Deactivated successfully.
Dec 03 01:59:43 compute-0 sudo[421392]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:43 compute-0 sudo[421553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:43 compute-0 sudo[421553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:43 compute-0 sudo[421553]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:44 compute-0 sudo[421578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:59:44 compute-0 sudo[421578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:44 compute-0 sudo[421578]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:44 compute-0 nova_compute[351485]: 2025-12-03 01:59:44.152 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:44 compute-0 sudo[421603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:44 compute-0 sudo[421603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:44 compute-0 sudo[421603]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:44 compute-0 sudo[421628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 01:59:44 compute-0 sudo[421628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:44 compute-0 podman[421693]: 2025-12-03 01:59:44.895778055 +0000 UTC m=+0.082471234 container create e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 01:59:44 compute-0 podman[421693]: 2025-12-03 01:59:44.861320715 +0000 UTC m=+0.048013974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:59:44 compute-0 systemd[1]: Started libpod-conmon-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope.
Dec 03 01:59:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.011495984 +0000 UTC m=+0.198189183 container init e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Dec 03 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.022432322 +0000 UTC m=+0.209125541 container start e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.029213533 +0000 UTC m=+0.215906732 container attach e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 01:59:45 compute-0 wonderful_montalcini[421709]: 167 167
Dec 03 01:59:45 compute-0 systemd[1]: libpod-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope: Deactivated successfully.
Dec 03 01:59:45 compute-0 conmon[421709]: conmon e873cceb582c393b10d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope/container/memory.events
Dec 03 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.035427548 +0000 UTC m=+0.222120737 container died e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 01:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8ee6a01a6ff321a65af10dc291d4b0b0f45a0344dcf86457b535ba4f1560266-merged.mount: Deactivated successfully.
Dec 03 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.098875885 +0000 UTC m=+0.285569074 container remove e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 01:59:45 compute-0 systemd[1]: libpod-conmon-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope: Deactivated successfully.
Dec 03 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.364938959 +0000 UTC m=+0.087852125 container create 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.330938701 +0000 UTC m=+0.053851937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:59:45 compute-0 systemd[1]: Started libpod-conmon-6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8.scope.
Dec 03 01:59:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.53005781 +0000 UTC m=+0.252970986 container init 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.54710356 +0000 UTC m=+0.270016716 container start 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.552216014 +0000 UTC m=+0.275129220 container attach 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 01:59:45 compute-0 ceph-mon[192821]: pgmap v1355: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:46 compute-0 quirky_shamir[421751]: {
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:     "0": [
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:         {
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "devices": [
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "/dev/loop3"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             ],
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_name": "ceph_lv0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_size": "21470642176",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "name": "ceph_lv0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "tags": {
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cluster_name": "ceph",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.crush_device_class": "",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.encrypted": "0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osd_id": "0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.type": "block",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.vdo": "0"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             },
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "type": "block",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "vg_name": "ceph_vg0"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:         }
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:     ],
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:     "1": [
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:         {
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "devices": [
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "/dev/loop4"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             ],
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_name": "ceph_lv1",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_size": "21470642176",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "name": "ceph_lv1",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "tags": {
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cluster_name": "ceph",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.crush_device_class": "",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.encrypted": "0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osd_id": "1",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.type": "block",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.vdo": "0"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             },
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "type": "block",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "vg_name": "ceph_vg1"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:         }
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:     ],
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:     "2": [
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:         {
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "devices": [
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "/dev/loop5"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             ],
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_name": "ceph_lv2",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_size": "21470642176",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "name": "ceph_lv2",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "tags": {
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.cluster_name": "ceph",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.crush_device_class": "",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.encrypted": "0",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osd_id": "2",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.type": "block",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:                 "ceph.vdo": "0"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             },
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "type": "block",
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:             "vg_name": "ceph_vg2"
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:         }
Dec 03 01:59:46 compute-0 quirky_shamir[421751]:     ]
Dec 03 01:59:46 compute-0 quirky_shamir[421751]: }
Dec 03 01:59:46 compute-0 systemd[1]: libpod-6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8.scope: Deactivated successfully.
Dec 03 01:59:46 compute-0 podman[421736]: 2025-12-03 01:59:46.361800605 +0000 UTC m=+1.084713861 container died 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 01:59:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8-merged.mount: Deactivated successfully.
Dec 03 01:59:46 compute-0 podman[421736]: 2025-12-03 01:59:46.468879981 +0000 UTC m=+1.191793147 container remove 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 01:59:46 compute-0 systemd[1]: libpod-conmon-6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8.scope: Deactivated successfully.
Dec 03 01:59:46 compute-0 sudo[421628]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:46 compute-0 podman[421761]: 2025-12-03 01:59:46.541218508 +0000 UTC m=+0.142646608 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 01:59:46 compute-0 sudo[421789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:46 compute-0 sudo[421789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:46 compute-0 sudo[421789]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:46 compute-0 sudo[421816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 01:59:46 compute-0 sudo[421816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:46 compute-0 sudo[421816]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:46 compute-0 sudo[421841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:46 compute-0 sudo[421841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:46 compute-0 sudo[421841]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 01:59:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638375043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:59:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 01:59:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638375043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:59:47 compute-0 sudo[421866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 01:59:47 compute-0 sudo[421866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.617268056 +0000 UTC m=+0.075249461 container create 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.584431411 +0000 UTC m=+0.042412846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:59:47 compute-0 systemd[1]: Started libpod-conmon-98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532.scope.
Dec 03 01:59:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.75339855 +0000 UTC m=+0.211380015 container init 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.770062669 +0000 UTC m=+0.228044064 container start 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.77506863 +0000 UTC m=+0.233050065 container attach 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 01:59:47 compute-0 ceph-mon[192821]: pgmap v1356: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1638375043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 01:59:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1638375043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 01:59:47 compute-0 interesting_villani[421944]: 167 167
Dec 03 01:59:47 compute-0 systemd[1]: libpod-98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532.scope: Deactivated successfully.
Dec 03 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.783086016 +0000 UTC m=+0.241067441 container died 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-37fcca9bcce26c47610b46d30a47676e16328643624f5e318ad706c45c4aaa68-merged.mount: Deactivated successfully.
Dec 03 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.874670625 +0000 UTC m=+0.332652040 container remove 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 01:59:47 compute-0 systemd[1]: libpod-conmon-98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532.scope: Deactivated successfully.
Dec 03 01:59:47 compute-0 nova_compute[351485]: 2025-12-03 01:59:47.917 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.15851216 +0000 UTC m=+0.104396762 container create d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.117458423 +0000 UTC m=+0.063343065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 01:59:48 compute-0 systemd[1]: Started libpod-conmon-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope.
Dec 03 01:59:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.346179735 +0000 UTC m=+0.292064337 container init d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.362258898 +0000 UTC m=+0.308143470 container start d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.367791324 +0000 UTC m=+0.313675936 container attach d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec 03 01:59:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:59:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 7821 writes, 31K keys, 7821 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7821 writes, 1612 syncs, 4.85 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 721 writes, 2335 keys, 721 commit groups, 1.0 writes per commit group, ingest: 2.50 MB, 0.00 MB/s
                                            Interval WAL: 721 writes, 280 syncs, 2.58 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 01:59:49 compute-0 nova_compute[351485]: 2025-12-03 01:59:49.156 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:49 compute-0 laughing_feistel[421981]: {
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "osd_id": 2,
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "type": "bluestore"
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:     },
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "osd_id": 1,
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "type": "bluestore"
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:     },
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "osd_id": 0,
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:         "type": "bluestore"
Dec 03 01:59:49 compute-0 laughing_feistel[421981]:     }
Dec 03 01:59:49 compute-0 laughing_feistel[421981]: }
Dec 03 01:59:49 compute-0 systemd[1]: libpod-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope: Deactivated successfully.
Dec 03 01:59:49 compute-0 systemd[1]: libpod-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope: Consumed 1.180s CPU time.
Dec 03 01:59:49 compute-0 podman[421967]: 2025-12-03 01:59:49.543300552 +0000 UTC m=+1.489185144 container died d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 01:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b-merged.mount: Deactivated successfully.
Dec 03 01:59:49 compute-0 podman[421967]: 2025-12-03 01:59:49.650427359 +0000 UTC m=+1.596311951 container remove d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 01:59:49 compute-0 systemd[1]: libpod-conmon-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope: Deactivated successfully.
Dec 03 01:59:49 compute-0 sudo[421866]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:49 compute-0 podman[422016]: 2025-12-03 01:59:49.694627714 +0000 UTC m=+0.111141121 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, version=9.4, name=ubi9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 03 01:59:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 01:59:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:59:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 01:59:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:59:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 07cb493a-7e00-42d3-a379-0a323afc73e8 does not exist
Dec 03 01:59:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2ae93c6f-210b-4f61-8942-86e77ef060de does not exist
Dec 03 01:59:49 compute-0 ceph-mon[192821]: pgmap v1357: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:59:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 01:59:49 compute-0 sudo[422043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 01:59:49 compute-0 sudo[422043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:49 compute-0 sudo[422043]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:49 compute-0 sudo[422068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 01:59:49 compute-0 sudo[422068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 01:59:49 compute-0 sudo[422068]: pam_unix(sudo:session): session closed for user root
Dec 03 01:59:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:51 compute-0 ceph-mon[192821]: pgmap v1358: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:52 compute-0 nova_compute[351485]: 2025-12-03 01:59:52.923 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:53 compute-0 ceph-mon[192821]: pgmap v1359: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:53 compute-0 podman[422094]: 2025-12-03 01:59:53.866773711 +0000 UTC m=+0.112749387 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec 03 01:59:53 compute-0 podman[422095]: 2025-12-03 01:59:53.87315126 +0000 UTC m=+0.115193205 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 01:59:53 compute-0 podman[422093]: 2025-12-03 01:59:53.915801471 +0000 UTC m=+0.167104257 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 01:59:53 compute-0 podman[422096]: 2025-12-03 01:59:53.918262351 +0000 UTC m=+0.151774956 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 01:59:54 compute-0 nova_compute[351485]: 2025-12-03 01:59:54.161 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:55 compute-0 ceph-mon[192821]: pgmap v1360: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 01:59:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 6387 writes, 26K keys, 6387 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 6387 writes, 1201 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 498 writes, 1537 keys, 498 commit groups, 1.0 writes per commit group, ingest: 1.24 MB, 0.00 MB/s
                                            Interval WAL: 498 writes, 203 syncs, 2.45 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 01:59:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 01:59:57 compute-0 nova_compute[351485]: 2025-12-03 01:59:57.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 01:59:57 compute-0 ceph-mon[192821]: pgmap v1361: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:57 compute-0 nova_compute[351485]: 2025-12-03 01:59:57.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 01:59:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 01:59:59 compute-0 nova_compute[351485]: 2025-12-03 01:59:59.172 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 01:59:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:59:59.626 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 01:59:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:59:59.627 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 01:59:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:59:59.628 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 01:59:59 compute-0 podman[158098]: time="2025-12-03T01:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 01:59:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 01:59:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
Dec 03 01:59:59 compute-0 ceph-mon[192821]: pgmap v1362: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:00 compute-0 nova_compute[351485]: 2025-12-03 02:00:00.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:00 compute-0 nova_compute[351485]: 2025-12-03 02:00:00.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:00:01 compute-0 nova_compute[351485]: 2025-12-03 02:00:01.293 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:00:01 compute-0 nova_compute[351485]: 2025-12-03 02:00:01.294 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:00:01 compute-0 nova_compute[351485]: 2025-12-03 02:00:01.294 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:00:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:00:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:00:01 compute-0 ceph-mon[192821]: pgmap v1363: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.628 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.656 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.657 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.658 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.658 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.659 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.687 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.688 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.688 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.689 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.689 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.927 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:00:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967752976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.215 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.366 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.367 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.367 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.377 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.377 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.378 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.867 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.868 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3740MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.868 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.868 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.949 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.949 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.950 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.950 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
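The final resource view is consistent with the two placement allocations logged just above it, assuming Nova folds the 512 MB host memory reservation into used_ram (an accounting assumption, not stated in the log):

    # Consistency check for the final resource view (assumed accounting):
    instances = 2
    assert instances * 1 == 2              # used_vcpus: 1 VCPU per instance
    assert 512 + instances * 512 == 1536   # used_ram: reserved + instances, MB
    assert instances * 2 == 4              # used_disk: 2 GB per instance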
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.964 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:00:03 compute-0 ceph-mon[192821]: pgmap v1364: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1967752976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.993 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.993 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
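The inventory dict logged above is what determines scheduling headroom: placement admits allocations while usage stays within (total - reserved) * allocation_ratio for each resource class. Worked out for these values:

    # Effective capacity implied by the logged inventory:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2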
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.012 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.051 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.132 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.175 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:00:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530235725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.659 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.667 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.681 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.682 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.682 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/530235725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:00:06 compute-0 ceph-mon[192821]: pgmap v1365: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:06 compute-0 nova_compute[351485]: 2025-12-03 02:00:06.601 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:06 compute-0 nova_compute[351485]: 2025-12-03 02:00:06.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:06 compute-0 nova_compute[351485]: 2025-12-03 02:00:06.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:07 compute-0 sshd[113879]: Timeout before authentication for connection from 45.78.219.140 to 38.102.83.36, pid = 419664
Dec 03 02:00:07 compute-0 nova_compute[351485]: 2025-12-03 02:00:07.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:07 compute-0 nova_compute[351485]: 2025-12-03 02:00:07.933 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:08 compute-0 ceph-mon[192821]: pgmap v1366: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:09 compute-0 ceph-mon[192821]: pgmap v1367: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:09 compute-0 nova_compute[351485]: 2025-12-03 02:00:09.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:11 compute-0 ceph-mon[192821]: pgmap v1368: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:11.636 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:00:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:11.638 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:00:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:11.640 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:00:11 compute-0 nova_compute[351485]: 2025-12-03 02:00:11.645 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:11 compute-0 podman[422225]: 2025-12-03 02:00:11.852629126 +0000 UTC m=+0.093131204 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:00:11 compute-0 podman[422223]: 2025-12-03 02:00:11.864794089 +0000 UTC m=+0.122981545 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 03 02:00:11 compute-0 podman[422224]: 2025-12-03 02:00:11.874153232 +0000 UTC m=+0.119043674 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
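The three health_status records above are podman executing each container's configured healthcheck test and logging the verdict. The same status can be read back ad hoc, assuming a podman release recent enough to expose it under .State.Health (older releases used .State.Healthcheck instead); container names are taken from the log:

    # Query the same health verdicts podman just logged.
    import subprocess

    for name in ('podman_exporter', 'ovn_metadata_agent',
                 'ceilometer_agent_compute'):
        status = subprocess.check_output(
            ['podman', 'inspect', '--format',
             '{{.State.Health.Status}}', name]).decode().strip()
        print(name, status)   # expected: healthy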
Dec 03 02:00:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:12 compute-0 nova_compute[351485]: 2025-12-03 02:00:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:00:12 compute-0 nova_compute[351485]: 2025-12-03 02:00:12.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:00:12 compute-0 nova_compute[351485]: 2025-12-03 02:00:12.935 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:13 compute-0 ceph-mon[192821]: pgmap v1369: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:14 compute-0 nova_compute[351485]: 2025-12-03 02:00:14.184 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:15 compute-0 ceph-mon[192821]: pgmap v1370: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.339 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.340 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.363 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:00:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.467 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.468 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.481 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.482 351492 INFO nova.compute.claims [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Claim successful on node compute-0.ctlplane.example.com
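The claim succeeds because the request fits inside the capacities worked out from the inventory earlier, given the usage of the two existing allocations. A rough admission check, assuming the new instance requests 1 VCPU / 512 MB / 2 GB like its neighbours (the request itself is not printed in the log):

    # Assumed request shape: same as the two allocations logged at 02:00:03.
    capacity = {'VCPU': 32.0, 'MEMORY_MB': 7167.0, 'DISK_GB': 52.2}
    used     = {'VCPU': 2,    'MEMORY_MB': 1024,   'DISK_GB': 4}
    request  = {'VCPU': 1,    'MEMORY_MB': 512,    'DISK_GB': 2}
    print(all(used[rc] + request[rc] <= capacity[rc] for rc in capacity))
    # True -> "Claim successful on node compute-0.ctlplane.example.com"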
Dec 03 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.670 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:16 compute-0 podman[422283]: 2025-12-03 02:00:16.888800328 +0000 UTC m=+0.144224483 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:00:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:00:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446501929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.194 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.209 351492 DEBUG nova.compute.provider_tree [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.235 351492 DEBUG nova.scheduler.client.report [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.270 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.271 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.321 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.322 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.345 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.382 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.461 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.463 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.464 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Creating image(s)
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.500 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:17 compute-0 ceph-mon[192821]: pgmap v1371: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1446501929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.555 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.603 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.613 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.700 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
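The prlimit wrapper in the command above caps address space at 1 GiB and CPU time at 30 s, so a malformed base image cannot wedge the compute service while being probed. A sketch of the same probe, reading the two fields Nova cares about; the qemu-img JSON keys (format, virtual-size) are its usual output, not quoted in the log:

    # Same probe as the logged command, with the same resource caps.
    import json
    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            'b9e804eb90834f1320f9fd6c25a03e15d4052aa8')
    out = subprocess.check_output([
        'python3', '-m', 'oslo_concurrency.prlimit',
        '--as=%d' % (1024 ** 3),   # 1 GiB address-space cap
        '--cpu=30',                # 30 s CPU-time cap
        '--', 'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', base, '--force-share', '--output=json',
    ])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])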
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.701 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.702 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.702 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.735 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.759 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.935 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.114 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.292 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
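Import then resize is how the RBD backend materializes a root disk: the cached base image goes into the vms pool, then the volume is grown to the flavor's root size, 1073741824 bytes here, i.e. exactly 1 GiB. A CLI-equivalent sketch of the two steps; note Nova performs the resize through the librbd bindings rather than the rbd binary, and the MiB unit for --size is the rbd CLI default:

    # The two steps mirrored with the rbd CLI (client flags from the log).
    import subprocess

    image = '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk'
    base = ('/var/lib/nova/instances/_base/'
            'b9e804eb90834f1320f9fd6c25a03e15d4052aa8')
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, image,
                           '--image-format=2'] + ceph)
    # 1073741824 bytes == 1024 MiB; rbd --size is in MiB by default.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', image,
                           '--size', '1024'] + ceph)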
Dec 03 02:00:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.522 351492 DEBUG nova.objects.instance [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.580 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.633 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.643 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.729 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.730 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.731 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.732 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.782 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.791 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.188 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.306 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.480 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Successfully updated port: d0c565d0-5299-45e5-84ac-ea722711af3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:00:19 compute-0 ceph-mon[192821]: pgmap v1372: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.567 351492 DEBUG nova.compute.manager [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.567 351492 DEBUG nova.compute.manager [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing instance network info cache due to event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.568 351492 DEBUG oslo_concurrency.lockutils [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.568 351492 DEBUG oslo_concurrency.lockutils [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.568 351492 DEBUG nova.network.neutron [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing network info cache for port d0c565d0-5299-45e5-84ac-ea722711af3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.569 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.582 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.583 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Ensure instance console log exists: /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.583 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.583 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.584 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.753 351492 DEBUG nova.network.neutron [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.177 351492 DEBUG nova.network.neutron [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.195 351492 DEBUG oslo_concurrency.lockutils [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.195 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.196 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:00:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 154 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 746 KiB/s wr, 14 op/s
Dec 03 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.429 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:00:20 compute-0 podman[422620]: 2025-12-03 02:00:20.889377614 +0000 UTC m=+0.141232039 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.389 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.418 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.418 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance network_info: |[{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.423 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start _get_guest_xml network_info=[{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.434 351492 WARNING nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.441 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.443 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.448 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.449 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.450 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.451 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.452 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.453 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.453 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.454 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.455 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.456 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.456 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.457 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.458 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.458 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.464 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:21 compute-0 ceph-mon[192821]: pgmap v1373: 321 pgs: 321 active+clean; 154 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 746 KiB/s wr, 14 op/s
Dec 03 02:00:21 compute-0 sshd-session[422642]: Invalid user frontend from 146.190.144.138 port 50674
Dec 03 02:00:21 compute-0 sshd-session[422642]: Received disconnect from 146.190.144.138 port 50674:11: Bye Bye [preauth]
Dec 03 02:00:21 compute-0 sshd-session[422642]: Disconnected from invalid user frontend 146.190.144.138 port 50674 [preauth]
Dec 03 02:00:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:00:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4161021410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.995 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.999 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 170 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec 03 02:00:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:00:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545262422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.494 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.558 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4161021410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:00:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3545262422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.575 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.939 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:00:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3240562883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.095 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.098 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',id=3,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-7757xffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:00:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncw==
Dec 03 02:00:23 compute-0 nova_compute[351485]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.099 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.102 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.106 351492 DEBUG nova.objects.instance [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.143 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <uuid>55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274</uuid>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <name>instance-00000003</name>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <memory>524288</memory>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <nova:name>vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j</nova:name>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:00:21</nova:creationTime>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <nova:flavor name="m1.small">
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:memory>512</nova:memory>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:ephemeral>1</nova:ephemeral>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <nova:port uuid="d0c565d0-5299-45e5-84ac-ea722711af3d">
Dec 03 02:00:23 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="192.168.0.227" ipVersion="4"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <system>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <entry name="serial">55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274</entry>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <entry name="uuid">55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274</entry>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </system>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <os>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   </os>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <features>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   </features>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk">
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </source>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0">
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </source>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <target dev="vdb" bus="virtio"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config">
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </source>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:00:23 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:de:1b:b0"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <target dev="tapd0c565d0-52"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/console.log" append="off"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <video>
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </video>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:00:23 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:00:23 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:00:23 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:00:23 compute-0 nova_compute[351485]: </domain>
Dec 03 02:00:23 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.147 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Preparing to wait for external event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.148 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.149 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.149 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.151 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',id=3,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-7757xffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:00:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 03 02:00:23 compute-0 nova_compute[351485]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.152 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.154 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.156 351492 DEBUG os_vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.158 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.159 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.160 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.166 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.167 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0c565d0-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.168 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0c565d0-52, col_values=(('external_ids', {'iface-id': 'd0c565d0-5299-45e5-84ac-ea722711af3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:1b:b0', 'vm-uuid': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
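The two ovsdbapp transactions above are Nova's os-vif plug in action: ensure br-int exists, add the tap port, and tag the Interface row with the Neutron port ID so ovn-controller can bind it. A sketch of the equivalent manual calls (the same OVSDB operations, issued via ovs-vsctl):

    # Equivalent of the AddBridgeCommand / AddPortCommand / DbSetCommand above.
    ovs-vsctl --may-exist add-br br-int -- set Bridge br-int datapath_type=system
    ovs-vsctl --may-exist add-port br-int tapd0c565d0-52 -- \
        set Interface tapd0c565d0-52 \
        external_ids:iface-id=d0c565d0-5299-45e5-84ac-ea722711af3d \
        external_ids:iface-status=active \
        external_ids:attached-mac='"fa:16:3e:de:1b:b0"' \
        external_ids:vm-uuid=55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274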
Dec 03 02:00:23 compute-0 NetworkManager[48912]: <info>  [1764727223.1742] manager: (tapd0c565d0-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.197 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:00:23 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:00:23.098 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-93 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
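rsyslogd is truncating the multi-kilobyte nova DEBUG records here: the journal handed it 8192 bytes against a configured ceiling of 8096. If the full VIF dumps should survive into the text logs, the limit can be raised; a sketch for /etc/rsyslog.conf, assuming the legacy directive form (it must precede any module or input statements):

    # Raise the per-message ceiling above journald's 8 KiB lines.
    $MaxMessageSize 64k

The RainerScript equivalent is global(maxMessageSize="64k").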
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.197 351492 INFO os_vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52')
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.288 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.289 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.289 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.290 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:de:1b:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.291 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Using config drive
Dec 03 02:00:23 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:00:23.151 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-93 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 03 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.348 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:23 compute-0 ceph-mon[192821]: pgmap v1374: 321 pgs: 321 active+clean; 170 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec 03 02:00:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3240562883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:00:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 03 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.514 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Creating config drive at /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config
Dec 03 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.527 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphn03v_ef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.680 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphn03v_ef" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
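mkisofs packed the config drive as an ISO9660 image with volume label config-2, the label cloud-init probes for inside the guest. While the local file still exists (Nova deletes it after the RBD import a few lines below), it can be inspected from the host; a sketch assuming the usual config-drive layout and an arbitrary mount point:

    ISO=/var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config
    blkid "$ISO"                      # TYPE="iso9660" LABEL="config-2"
    mount -o loop,ro "$ISO" /mnt
    ls /mnt/openstack/latest          # meta_data.json, network_data.json, user_data
    umount /mnt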
Dec 03 02:00:24 compute-0 sshd[113879]: drop connection #0 from [45.78.219.140]:52220 on [38.102.83.36]:22 penalty: exceeded LoginGraceTime
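Unrelated to the instance build: sshd dropped a connection from a source that had already exceeded LoginGraceTime, which is the per-source penalty mechanism of recent OpenSSH releases. The effective settings can be dumped with sshd's test mode (the grep pattern is deliberately loose, since the keyword spelling varies by release):

    sshd -T | grep -Ei 'penalt|logingracetime'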
Dec 03 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.769 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.792 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:00:24 compute-0 podman[422769]: 2025-12-03 02:00:24.870713117 +0000 UTC m=+0.103986910 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:00:24 compute-0 podman[422771]: 2025-12-03 02:00:24.899316792 +0000 UTC m=+0.125992629 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 03 02:00:24 compute-0 podman[422768]: 2025-12-03 02:00:24.910747514 +0000 UTC m=+0.151908209 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:00:24 compute-0 podman[422759]: 2025-12-03 02:00:24.923823293 +0000 UTC m=+0.170308958 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
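The three podman records above are routine health_status=healthy probes for node_exporter, multipathd and ovn_controller; each container's probe command is visible in its config_data healthcheck entry. The same probe can be triggered on demand:

    # Exit status 0 means the configured healthcheck passed.
    podman healthcheck run node_exporter && echo healthy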
Dec 03 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.027 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.028 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deleting local config drive /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config because it was imported into RBD.
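After the import the config drive exists only as an RBD image in the vms pool. Its presence can be confirmed with the same identity the log shows (client.openstack):

    rbd --id openstack --conf /etc/ceph/ceph.conf -p vms \
        info 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config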
Dec 03 02:00:25 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 02:00:25 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 02:00:25 compute-0 kernel: tapd0c565d0-52: entered promiscuous mode
Dec 03 02:00:25 compute-0 NetworkManager[48912]: <info>  [1764727225.1691] manager: (tapd0c565d0-52): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec 03 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00040|binding|INFO|Claiming lport d0c565d0-5299-45e5-84ac-ea722711af3d for this chassis.
Dec 03 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00041|binding|INFO|d0c565d0-5299-45e5-84ac-ea722711af3d: Claiming fa:16:3e:de:1b:b0 192.168.0.227
Dec 03 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.173 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.191 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:1b:b0 192.168.0.227'], port_security=['fa:16:3e:de:1b:b0 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d0c565d0-5299-45e5-84ac-ea722711af3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.196 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d0c565d0-5299-45e5-84ac-ea722711af3d in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis
Dec 03 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00042|binding|INFO|Setting lport d0c565d0-5299-45e5-84ac-ea722711af3d ovn-installed in OVS
Dec 03 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00043|binding|INFO|Setting lport d0c565d0-5299-45e5-84ac-ea722711af3d up in Southbound
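ovn-controller has now claimed the logical port for this chassis, set ovn-installed on the OVS interface, and flipped the Southbound Port_Binding to up. Two quick checks, the first local to the chassis, the second against the Southbound DB (remote and TLS options as configured for ovn-sbctl in this deployment):

    ovs-vsctl get Interface tapd0c565d0-52 external_ids
    ovn-sbctl find Port_Binding logical_port=d0c565d0-5299-45e5-84ac-ea722711af3d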
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.203 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.210 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.220 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.223 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[257213a7-55b9-4ae4-bdad-97fa4cf7cc07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:00:25 compute-0 systemd-machined[138558]: New machine qemu-3-instance-00000003.
Dec 03 02:00:25 compute-0 systemd-udevd[422906]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:00:25 compute-0 NetworkManager[48912]: <info>  [1764727225.2593] device (tapd0c565d0-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:00:25 compute-0 NetworkManager[48912]: <info>  [1764727225.2603] device (tapd0c565d0-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:00:25 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.271 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[635ad974-eb45-412f-854a-1b37263acf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.275 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[7963167d-388e-4138-a0aa-999c5af969bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.303 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3630ec-c87d-4b9d-aed0-713d3220fe9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.334 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e442072a-2d6c-4cef-8bc5-d7049dd90875]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 36425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422917, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.359 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[90a05ca3-6162-4929-ae97-af278493e743]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422919, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422919, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
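The privsep replies above show the metadata agent's namespace plumbing: inside ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d the interface tap7ba11691-21 carries 192.168.0.2/24 plus the link-local metadata address 169.254.169.254/32. From the host this can be verified directly:

    ip netns exec ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d \
        ip -brief addr show tap7ba11691-21
    # From inside a guest on this network, the metadata proxy answers at:
    #   curl http://169.254.169.254/openstack/latest/meta_data.json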
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.362 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.366 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.368 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.368 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.369 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.370 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:00:25 compute-0 ceph-mon[192821]: pgmap v1375: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 03 02:00:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 03 02:00:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 03 02:00:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.511 351492 DEBUG nova.compute.manager [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.512 351492 DEBUG oslo_concurrency.lockutils [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.513 351492 DEBUG oslo_concurrency.lockutils [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.513 351492 DEBUG oslo_concurrency.lockutils [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.515 351492 DEBUG nova.compute.manager [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Processing event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.754 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727226.7532237, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.754 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Started (Lifecycle Event)
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.758 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.767 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.775 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.788 351492 INFO nova.virt.libvirt.driver [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance spawned successfully.
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.789 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.793 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.835 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.836 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.836 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.837 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.838 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.839 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.886 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.886 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727226.7537677, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.887 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Paused (Lifecycle Event)
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.917 351492 INFO nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 9.46 seconds to spawn the instance on the hypervisor.
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.918 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.921 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.938 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727226.7631435, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.938 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Resumed (Lifecycle Event)
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.974 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.982 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.999 351492 INFO nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 10.57 seconds to build instance.
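With the build finished (10.57 s end to end, 9.46 s of it in the hypervisor spawn), the instance should now report as active; a check with the openstack CLI, using credentials for project 9746b242761a48048d185ce26d622b33:

    openstack server show 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 \
        -c status -c OS-EXT-STS:power_state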
Dec 03 02:00:27 compute-0 nova_compute[351485]: 2025-12-03 02:00:27.023 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:27 compute-0 ceph-mon[192821]: pgmap v1376: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 03 02:00:27 compute-0 nova_compute[351485]: 2025-12-03 02:00:27.942 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.172 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:00:28
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.mgr', 'backups']
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
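The balancer pass above (mode upmap, max misplaced 0.05) prepared 0/10 changes, so placement is already optimal for the listed pools. The same information is available on demand:

    ceph balancer status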
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 03 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.431 351492 DEBUG nova.compute.manager [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.431 351492 DEBUG oslo_concurrency.lockutils [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.431 351492 DEBUG oslo_concurrency.lockutils [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.432 351492 DEBUG oslo_concurrency.lockutils [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.432 351492 DEBUG nova.compute.manager [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] No waiting events found dispatching network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.432 351492 WARNING nova.compute.manager [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received unexpected event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d for instance with vm_state active and task_state None.
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:00:29 compute-0 ceph-mon[192821]: pgmap v1377: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 03 02:00:29 compute-0 podman[158098]: time="2025-12-03T02:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:00:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:00:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
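
The two podman[158098] requests above are the libpod REST API being polled over podman's Unix socket (the exporter config later in this log points CONTAINER_HOST at /run/podman/podman.sock). A rough sketch of issuing the same container-list query from Python; UnixHTTPConnection is an ad-hoc helper, not a stdlib class:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a Unix domain socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c.get("Names"), c.get("State"))
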
Dec 03 02:00:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 491 KiB/s rd, 1.4 MiB/s wr, 64 op/s
Dec 03 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:00:31 compute-0 ceph-mon[192821]: pgmap v1378: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 491 KiB/s rd, 1.4 MiB/s wr, 64 op/s
Dec 03 02:00:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 664 KiB/s wr, 74 op/s
Dec 03 02:00:32 compute-0 nova_compute[351485]: 2025-12-03 02:00:32.944 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:33 compute-0 nova_compute[351485]: 2025-12-03 02:00:33.174 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:33 compute-0 ceph-mon[192821]: pgmap v1379: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 664 KiB/s wr, 74 op/s
Dec 03 02:00:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 71 op/s
Dec 03 02:00:35 compute-0 ceph-mon[192821]: pgmap v1380: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 71 op/s
Dec 03 02:00:36 compute-0 sshd-session[422999]: Received disconnect from 117.5.148.56 port 48178:11:  [preauth]
Dec 03 02:00:36 compute-0 sshd-session[422999]: Disconnected from authenticating user root 117.5.148.56 port 48178 [preauth]
Dec 03 02:00:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 03 02:00:37 compute-0 ceph-mon[192821]: pgmap v1381: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 03 02:00:37 compute-0 nova_compute[351485]: 2025-12-03 02:00:37.947 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:38 compute-0 nova_compute[351485]: 2025-12-03 02:00:38.177 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 57 op/s
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013737500610470795 of space, bias 1.0, pg target 0.41212501831412385 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
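
The autoscaler numbers above are internally consistent: every "pg target" equals usage-ratio × bias × 300, and 300 is plausibly this cluster's 3 OSDs times the default mon_target_pg_per_osd of 100 (an inference from the figures, not something the log states). The target is then quantized to a power of two with a per-pool floor, which is why 0.41 still lands on 32 for 'vms' while '.mgr' drops to 1 and the CephFS metadata pool to 16. Checking the multiplication:

    # Reproducing the pg_autoscaler arithmetic visible in the log (inferred
    # formula, not the module's actual code): target = usage * bias * 300.
    pools = {
        "vms":                (0.0013737500610470795, 1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        # Prints the pg targets logged above (up to float rounding).
        print(name, usage * bias * 300)
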
Dec 03 02:00:39 compute-0 ceph-mon[192821]: pgmap v1382: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 57 op/s
Dec 03 02:00:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 57 op/s
Dec 03 02:00:41 compute-0 ceph-mon[192821]: pgmap v1383: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 57 op/s
Dec 03 02:00:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 33 op/s
Dec 03 02:00:42 compute-0 podman[423001]: 2025-12-03 02:00:42.866645046 +0000 UTC m=+0.110947956 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:00:42 compute-0 podman[423002]: 2025-12-03 02:00:42.887141553 +0000 UTC m=+0.130373383 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Dec 03 02:00:42 compute-0 podman[423003]: 2025-12-03 02:00:42.905370387 +0000 UTC m=+0.142242658 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:00:42 compute-0 nova_compute[351485]: 2025-12-03 02:00:42.950 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.061733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243061762, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 935, "num_deletes": 250, "total_data_size": 1327700, "memory_usage": 1348184, "flush_reason": "Manual Compaction"}
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243072309, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 802916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27581, "largest_seqno": 28515, "table_properties": {"data_size": 799192, "index_size": 1440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9812, "raw_average_key_size": 20, "raw_value_size": 791249, "raw_average_value_size": 1665, "num_data_blocks": 65, "num_entries": 475, "num_filter_entries": 475, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727154, "oldest_key_time": 1764727154, "file_creation_time": 1764727243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10637 microseconds, and 3213 cpu microseconds.
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.072357) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 802916 bytes OK
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.072383) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.076625) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.076638) EVENT_LOG_v1 {"time_micros": 1764727243076634, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.076653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1323209, prev total WAL file size 1323209, number of live WAL files 2.
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.077481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(784KB)], [62(8926KB)]
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243077610, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 9943140, "oldest_snapshot_seqno": -1}
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5032 keys, 7156338 bytes, temperature: kUnknown
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243136948, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7156338, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7124440, "index_size": 18220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125062, "raw_average_key_size": 24, "raw_value_size": 7035057, "raw_average_value_size": 1398, "num_data_blocks": 758, "num_entries": 5032, "num_filter_entries": 5032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.137267) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7156338 bytes
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.141268) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.3 rd, 120.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(21.3) write-amplify(8.9) OK, records in: 5505, records dropped: 473 output_compression: NoCompression
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.141307) EVENT_LOG_v1 {"time_micros": 1764727243141290, "job": 34, "event": "compaction_finished", "compaction_time_micros": 59428, "compaction_time_cpu_micros": 34847, "output_level": 6, "num_output_files": 1, "total_output_size": 7156338, "num_input_records": 5505, "num_output_records": 5032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
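
The amplification figures in that JOB 34 summary can be re-derived from the sizes logged above: the flushed L0 table #64 is 802916 bytes, the total compaction input_data_size is 9943140, and the generated L6 table #65 is 7156338.

    # Worked check of RocksDB's reported amplification for JOB 34.
    l0_in = 802916             # L0 input: table #64 (the fresh flush)
    l6_in = 9943140 - 802916   # L6 input: table #62, via input_data_size
    out   = 7156338            # output: table #65
    print(round(out / l0_in, 1))                    # 8.9  write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 21.3 read-write-amplify
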
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243142294, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243146245, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.077195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:00:43 compute-0 nova_compute[351485]: 2025-12-03 02:00:43.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:43 compute-0 ceph-mon[192821]: pgmap v1384: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 33 op/s
Dec 03 02:00:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 9 op/s
Dec 03 02:00:45 compute-0 ceph-mon[192821]: pgmap v1385: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 9 op/s
Dec 03 02:00:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:00:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1856709503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:00:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:00:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1856709503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:00:47 compute-0 ceph-mon[192821]: pgmap v1386: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1856709503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:00:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1856709503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
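
These audited mon_commands are the periodic capacity and quota poll from the OpenStack client side of this deployment. A hedged sketch of issuing the same two queries through the python-rados binding, assuming a readable ceph.conf and a keyring for client.openstack; mon_command takes a JSON command string plus an input buffer and returns (ret, outbuf, errs):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        df = json.loads(out)
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
        quota = json.loads(out)
        print(df.get("stats"), quota)
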
Dec 03 02:00:47 compute-0 podman[423059]: 2025-12-03 02:00:47.906902363 +0000 UTC m=+0.159566156 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 03 02:00:47 compute-0 nova_compute[351485]: 2025-12-03 02:00:47.952 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:48 compute-0 nova_compute[351485]: 2025-12-03 02:00:48.182 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:49 compute-0 ceph-mon[192821]: pgmap v1387: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:50 compute-0 sudo[423080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:50 compute-0 sudo[423080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:50 compute-0 sudo[423080]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:50 compute-0 sudo[423105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:00:50 compute-0 sudo[423105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:50 compute-0 sudo[423105]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:50 compute-0 sudo[423130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:50 compute-0 sudo[423130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:50 compute-0 sudo[423130]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:50 compute-0 sudo[423155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:00:50 compute-0 sudo[423155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:51 compute-0 sudo[423155]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:00:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1964bf0c-99d1-4a3b-b321-26f7ad476fe4 does not exist
Dec 03 02:00:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d27a319b-0043-4db3-bdc5-4eb11c1e4845 does not exist
Dec 03 02:00:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 81ee39bb-7d62-4b84-b506-59eb4598e8f3 does not exist
Dec 03 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:00:51 compute-0 sudo[423209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:51 compute-0 sudo[423209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:51 compute-0 sudo[423209]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:51 compute-0 sudo[423240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:00:51 compute-0 sudo[423240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:51 compute-0 sudo[423240]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:51 compute-0 podman[423233]: 2025-12-03 02:00:51.532859396 +0000 UTC m=+0.109989349 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec 03 02:00:51 compute-0 sudo[423278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:51 compute-0 sudo[423278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:51 compute-0 sudo[423278]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:51 compute-0 sudo[423303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:00:51 compute-0 sudo[423303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:51 compute-0 ceph-mon[192821]: pgmap v1388: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.300637941 +0000 UTC m=+0.074505520 container create 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:00:52 compute-0 systemd[1]: Started libpod-conmon-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope.
Dec 03 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.271162311 +0000 UTC m=+0.045029900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:00:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.432782013 +0000 UTC m=+0.206649642 container init 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 02:00:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.449246136 +0000 UTC m=+0.223113745 container start 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:00:52 compute-0 elated_williamson[423378]: 167 167
Dec 03 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.461914343 +0000 UTC m=+0.235781962 container attach 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:00:52 compute-0 systemd[1]: libpod-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope: Deactivated successfully.
Dec 03 02:00:52 compute-0 conmon[423378]: conmon 3e91d013313159965897 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope/container/memory.events
Dec 03 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.465115393 +0000 UTC m=+0.238983002 container died 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:00:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-21033c6d95b47a1411bd8769eeb6bf58687fa501e53163d038bcc601d8d527fd-merged.mount: Deactivated successfully.
Dec 03 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.542662357 +0000 UTC m=+0.316529946 container remove 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:00:52 compute-0 systemd[1]: libpod-conmon-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope: Deactivated successfully.
Dec 03 02:00:52 compute-0 podman[423403]: 2025-12-03 02:00:52.817672073 +0000 UTC m=+0.097522338 container create 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:00:52 compute-0 podman[423403]: 2025-12-03 02:00:52.775152805 +0000 UTC m=+0.055003120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:00:52 compute-0 systemd[1]: Started libpod-conmon-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope.
Dec 03 02:00:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:52 compute-0 nova_compute[351485]: 2025-12-03 02:00:52.954 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:52 compute-0 podman[423403]: 2025-12-03 02:00:52.982083634 +0000 UTC m=+0.261933939 container init 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:00:52 compute-0 podman[423403]: 2025-12-03 02:00:52.999193976 +0000 UTC m=+0.279044241 container start 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:00:53 compute-0 podman[423403]: 2025-12-03 02:00:53.007650294 +0000 UTC m=+0.287500589 container attach 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:00:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:53 compute-0 nova_compute[351485]: 2025-12-03 02:00:53.185 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:53 compute-0 ceph-mon[192821]: pgmap v1389: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:54 compute-0 determined_elbakyan[423417]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:00:54 compute-0 determined_elbakyan[423417]: --> relative data size: 1.0
Dec 03 02:00:54 compute-0 determined_elbakyan[423417]: --> All data devices are unavailable
Dec 03 02:00:54 compute-0 systemd[1]: libpod-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope: Deactivated successfully.
Dec 03 02:00:54 compute-0 systemd[1]: libpod-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope: Consumed 1.147s CPU time.
Dec 03 02:00:54 compute-0 podman[423446]: 2025-12-03 02:00:54.302202734 +0000 UTC m=+0.045464641 container died 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 02:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1-merged.mount: Deactivated successfully.
Dec 03 02:00:54 compute-0 podman[423446]: 2025-12-03 02:00:54.427646877 +0000 UTC m=+0.170908714 container remove 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:00:54 compute-0 systemd[1]: libpod-conmon-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope: Deactivated successfully.
Dec 03 02:00:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:54 compute-0 sudo[423303]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:54 compute-0 sudo[423459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:54 compute-0 sudo[423459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:54 compute-0 sudo[423459]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:54 compute-0 sudo[423484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:00:54 compute-0 sudo[423484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:54 compute-0 sudo[423484]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:54 compute-0 sudo[423509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:54 compute-0 sudo[423509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:54 compute-0 sudo[423509]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:55 compute-0 sudo[423555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:00:55 compute-0 sudo[423555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
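The COMMAND line above shows how cephadm drives ceph-volume inside the Ceph container to inventory OSD logical volumes. A minimal Python sketch of the same query, assuming a cephadm binary on PATH and root privileges (the log runs it via sudo); the image digest and fsid are copied from the log, everything else is illustrative, not a transcript of what ran here:

    import json
    import subprocess

    # Same inventory query as the logged COMMAND; the per-cluster wrapper path
    # is shortened to the installed binary and --timeout is omitted.
    cmd = [
        "cephadm",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "ceph-volume", "--fsid", "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
        "--", "lvm", "list", "--format", "json",
    ]
    out = subprocess.check_output(cmd, text=True)
    report = json.loads(out[out.index("{"):])  # tolerate any status preamble
    print(sorted(report))                      # for this host: ['0', '1', '2']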
Dec 03 02:00:55 compute-0 podman[423535]: 2025-12-03 02:00:55.094126288 +0000 UTC m=+0.110684518 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Dec 03 02:00:55 compute-0 podman[423534]: 2025-12-03 02:00:55.111083286 +0000 UTC m=+0.134473499 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:00:55 compute-0 podman[423533]: 2025-12-03 02:00:55.124419801 +0000 UTC m=+0.143321087 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350)
Dec 03 02:00:55 compute-0 podman[423542]: 2025-12-03 02:00:55.147883042 +0000 UTC m=+0.135964160 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:00:55 compute-0 ovn_controller[89134]: 2025-12-03T02:00:55Z|00044|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 03 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.505194236 +0000 UTC m=+0.068747317 container create ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 02:00:55 compute-0 systemd[1]: Started libpod-conmon-ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca.scope.
Dec 03 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.475850759 +0000 UTC m=+0.039403870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:00:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.643636745 +0000 UTC m=+0.207189826 container init ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.659754119 +0000 UTC m=+0.223307180 container start ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.664130592 +0000 UTC m=+0.227683653 container attach ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:00:55 compute-0 quizzical_benz[423692]: 167 167
Dec 03 02:00:55 compute-0 systemd[1]: libpod-ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca.scope: Deactivated successfully.
Dec 03 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.674786232 +0000 UTC m=+0.238339293 container died ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e56ae81b63d702e2bfdc9f4ace6fc9a664036854d36992ce54e9e8eb36a355-merged.mount: Deactivated successfully.
Dec 03 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.74392968 +0000 UTC m=+0.307482781 container remove ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:00:55 compute-0 systemd[1]: libpod-conmon-ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca.scope: Deactivated successfully.
Dec 03 02:00:55 compute-0 ceph-mon[192821]: pgmap v1390: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.036369746 +0000 UTC m=+0.083245375 container create abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.006121705 +0000 UTC m=+0.052997374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:00:56 compute-0 systemd[1]: Started libpod-conmon-abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9.scope.
Dec 03 02:00:56 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.2253756 +0000 UTC m=+0.272251269 container init abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.238655274 +0000 UTC m=+0.285530933 container start abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec 03 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.246449153 +0000 UTC m=+0.293324822 container attach abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:00:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:56 compute-0 zealous_bose[423729]: {
Dec 03 02:00:56 compute-0 zealous_bose[423729]:     "0": [
Dec 03 02:00:56 compute-0 zealous_bose[423729]:         {
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "devices": [
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "/dev/loop3"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             ],
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_name": "ceph_lv0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_size": "21470642176",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "name": "ceph_lv0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "tags": {
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cluster_name": "ceph",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.crush_device_class": "",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.encrypted": "0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osd_id": "0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.type": "block",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.vdo": "0"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             },
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "type": "block",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "vg_name": "ceph_vg0"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:         }
Dec 03 02:00:56 compute-0 zealous_bose[423729]:     ],
Dec 03 02:00:56 compute-0 zealous_bose[423729]:     "1": [
Dec 03 02:00:56 compute-0 zealous_bose[423729]:         {
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "devices": [
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "/dev/loop4"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             ],
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_name": "ceph_lv1",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_size": "21470642176",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "name": "ceph_lv1",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "tags": {
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cluster_name": "ceph",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.crush_device_class": "",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.encrypted": "0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osd_id": "1",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.type": "block",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.vdo": "0"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             },
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "type": "block",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "vg_name": "ceph_vg1"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:         }
Dec 03 02:00:56 compute-0 zealous_bose[423729]:     ],
Dec 03 02:00:56 compute-0 zealous_bose[423729]:     "2": [
Dec 03 02:00:56 compute-0 zealous_bose[423729]:         {
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "devices": [
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "/dev/loop5"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             ],
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_name": "ceph_lv2",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_size": "21470642176",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "name": "ceph_lv2",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "tags": {
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.cluster_name": "ceph",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.crush_device_class": "",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.encrypted": "0",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osd_id": "2",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.type": "block",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:                 "ceph.vdo": "0"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             },
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "type": "block",
Dec 03 02:00:56 compute-0 zealous_bose[423729]:             "vg_name": "ceph_vg2"
Dec 03 02:00:56 compute-0 zealous_bose[423729]:         }
Dec 03 02:00:56 compute-0 zealous_bose[423729]:     ]
Dec 03 02:00:56 compute-0 zealous_bose[423729]: }
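The JSON block above is that query's result: top-level keys are OSD ids, each holding one LV record whose tags carry the cluster fsid, OSD fsid, and backing device. A short sketch of reducing it to an osd-id/device map (the input file path is an assumption for illustration, not from the log):

    import json

    # Parse `ceph-volume lvm list --format json` output as captured above.
    with open("lvm_list.json") as f:        # illustrative path
        report = json.load(f)

    for osd_id in sorted(report, key=int):
        for lv in report[osd_id]:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # For the report above this prints:
    # 0 /dev/ceph_vg0/ceph_lv0 551e0f4a-0b7e-47cf-9522-b82f94d4038c
    # 1 /dev/ceph_vg1/ceph_lv1 38b78a6e-cf5e-4c74-a51c-1bb51cf53a18
    # 2 /dev/ceph_vg2/ceph_lv2 2ebf7eac-7883-4286-84a2-653e10a1ae8a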
Dec 03 02:00:57 compute-0 systemd[1]: libpod-abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9.scope: Deactivated successfully.
Dec 03 02:00:57 compute-0 podman[423714]: 2025-12-03 02:00:57.051043736 +0000 UTC m=+1.097919405 container died abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a-merged.mount: Deactivated successfully.
Dec 03 02:00:57 compute-0 podman[423714]: 2025-12-03 02:00:57.17971964 +0000 UTC m=+1.226595259 container remove abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 02:00:57 compute-0 systemd[1]: libpod-conmon-abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9.scope: Deactivated successfully.
Dec 03 02:00:57 compute-0 sudo[423555]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:57 compute-0 sudo[423748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:57 compute-0 sudo[423748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:57 compute-0 sudo[423748]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:57 compute-0 sudo[423773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:00:57 compute-0 sudo[423773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:57 compute-0 sudo[423773]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:57 compute-0 sudo[423798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:00:57 compute-0 sudo[423798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:57 compute-0 sudo[423798]: pam_unix(sudo:session): session closed for user root
Dec 03 02:00:57 compute-0 sudo[423823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:00:57 compute-0 sudo[423823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:00:57 compute-0 ceph-mon[192821]: pgmap v1391: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:57 compute-0 nova_compute[351485]: 2025-12-03 02:00:57.957 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:00:58 compute-0 nova_compute[351485]: 2025-12-03 02:00:58.188 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.282216311 +0000 UTC m=+0.095898432 container create 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.248029858 +0000 UTC m=+0.061711969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:00:58 compute-0 systemd[1]: Started libpod-conmon-703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26.scope.
Dec 03 02:00:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.439180942 +0000 UTC m=+0.252863043 container init 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 02:00:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.457505038 +0000 UTC m=+0.271187119 container start 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:00:58 compute-0 kind_mirzakhani[423900]: 167 167
Dec 03 02:00:58 compute-0 systemd[1]: libpod-703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26.scope: Deactivated successfully.
Dec 03 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.47036206 +0000 UTC m=+0.284044171 container attach 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.471720168 +0000 UTC m=+0.285402249 container died 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:00:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c568a41fdb784a20c136e173697e8879a1409cc538b648ebf70b5eddaea5beff-merged.mount: Deactivated successfully.
Dec 03 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.534178417 +0000 UTC m=+0.347860498 container remove 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:00:58 compute-0 systemd[1]: libpod-conmon-703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26.scope: Deactivated successfully.
Dec 03 02:00:58 compute-0 podman[423922]: 2025-12-03 02:00:58.845249758 +0000 UTC m=+0.087841605 container create ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 02:00:58 compute-0 podman[423922]: 2025-12-03 02:00:58.816277593 +0000 UTC m=+0.058869470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:00:58 compute-0 systemd[1]: Started libpod-conmon-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope.
Dec 03 02:00:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:00:59 compute-0 podman[423922]: 2025-12-03 02:00:59.001265533 +0000 UTC m=+0.243857380 container init ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:00:59 compute-0 podman[423922]: 2025-12-03 02:00:59.018682063 +0000 UTC m=+0.261273900 container start ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 02:00:59 compute-0 podman[423922]: 2025-12-03 02:00:59.025979039 +0000 UTC m=+0.268570906 container attach ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 02:00:59 compute-0 ceph-mon[192821]: pgmap v1392: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:00:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:59.627 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:00:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:59.630 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:00:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:59.632 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:00:59 compute-0 podman[158098]: time="2025-12-03T02:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:00:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45381 "" "Go-http-client/1.1"
Dec 03 02:00:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9055 "" "Go-http-client/1.1"
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]: {
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "osd_id": 2,
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "type": "bluestore"
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:     },
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "osd_id": 1,
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "type": "bluestore"
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:     },
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "osd_id": 0,
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:         "type": "bluestore"
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]:     }
Dec 03 02:01:00 compute-0 happy_bhaskara[423937]: }
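This second JSON document, produced by the `raw list --format json` call logged at 02:00:57, is keyed by OSD uuid rather than OSD id and reports each OSD's device-mapper path and bluestore type. Cross-checking it against the earlier LVM listing is straightforward; a sketch under the same illustrative-file assumption:

    import json

    with open("lvm_list.json") as f:   # illustrative paths, as above
        by_id = json.load(f)
    with open("raw_list.json") as f:
        by_uuid = json.load(f)

    # Each OSD fsid tagged on an LV should reappear in the raw listing with
    # the matching numeric osd_id and type "bluestore".
    for osd_id, lvs in by_id.items():
        raw = by_uuid[lvs[0]["tags"]["ceph.osd_fsid"]]
        assert raw["osd_id"] == int(osd_id) and raw["type"] == "bluestore"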
Dec 03 02:01:00 compute-0 systemd[1]: libpod-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope: Deactivated successfully.
Dec 03 02:01:00 compute-0 podman[423922]: 2025-12-03 02:01:00.251905157 +0000 UTC m=+1.494497024 container died ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:01:00 compute-0 systemd[1]: libpod-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope: Consumed 1.183s CPU time.
Dec 03 02:01:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3-merged.mount: Deactivated successfully.
Dec 03 02:01:00 compute-0 podman[423922]: 2025-12-03 02:01:00.333944298 +0000 UTC m=+1.576536125 container remove ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:01:00 compute-0 systemd[1]: libpod-conmon-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope: Deactivated successfully.
Dec 03 02:01:00 compute-0 sudo[423823]: pam_unix(sudo:session): session closed for user root
Dec 03 02:01:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:01:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:01:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:01:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:01:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e20eea0b-febf-4877-ab95-5e41b6416529 does not exist
Dec 03 02:01:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8691eab9-01c4-49ad-b43e-88ba074b5e21 does not exist
Dec 03 02:01:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 op/s
Dec 03 02:01:00 compute-0 sudo[423983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:01:00 compute-0 sudo[423983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:01:00 compute-0 sudo[423983]: pam_unix(sudo:session): session closed for user root
Dec 03 02:01:00 compute-0 sudo[424008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:01:00 compute-0 sudo[424008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:01:00 compute-0 sudo[424008]: pam_unix(sudo:session): session closed for user root
Dec 03 02:01:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:01:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:01:01 compute-0 ceph-mon[192821]: pgmap v1393: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 op/s
Dec 03 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:01:01 compute-0 ovn_controller[89134]: 2025-12-03T02:01:01Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:1b:b0 192.168.0.227
Dec 03 02:01:01 compute-0 ovn_controller[89134]: 2025-12-03T02:01:01Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:1b:b0 192.168.0.227
Dec 03 02:01:01 compute-0 CROND[424054]: (root) CMD (run-parts /etc/cron.hourly)
Dec 03 02:01:01 compute-0 run-parts[424057]: (/etc/cron.hourly) starting 0anacron
Dec 03 02:01:01 compute-0 run-parts[424063]: (/etc/cron.hourly) finished 0anacron
Dec 03 02:01:01 compute-0 CROND[424053]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 03 02:01:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:01:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455105108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.141 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
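The two processutils lines above are the resource tracker's storage-capacity probe. A minimal standalone sketch of the same probe, assuming the logged client id and conf path are usable on this host and that the key names follow the standard ceph df JSON schema:

    import json
    import subprocess

    # The same command oslo_concurrency.processutils logged above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    stats = json.loads(result.stdout)["stats"]
    # Cluster-wide raw capacity figures, in bytes.
    print(stats["total_bytes"], stats["total_avail_bytes"])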
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.285 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.286 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.287 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.294 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.295 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.295 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.301 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.302 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.303 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:01:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1455105108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:01:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 182 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 926 KiB/s wr, 11 op/s
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.785 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.787 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3473MB free_disk=59.9058723449707GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.787 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.787 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.899 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.899 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.899 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.900 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.900 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
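As a cross-check, the final view is consistent with the three placement allocations listed above: used_vcpus = 3 × 1 = 3 (so free_vcpus = 8 − 3 = 5, matching the hypervisor view), used_disk = 3 × 2 GB = 6 GB, and used_ram = 3 × 512 MB plus the 512 MB host reservation = 2048 MB.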
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.960 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.996 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:01:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.191 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:03 compute-0 ceph-mon[192821]: pgmap v1394: 321 pgs: 321 active+clean; 182 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 926 KiB/s wr, 11 op/s
Dec 03 02:01:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:01:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481947854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.511 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.519 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.535 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
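Placement derives usable capacity from this inventory as (total − reserved) × allocation_ratio: (8 − 0) × 4.0 = 32 VCPU, (7679 − 512) × 1.0 = 7167 MEMORY_MB, and (59 − 1) × 0.9 ≈ 52 DISK_GB offered to the scheduler.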
Dec 03 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.563 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.563 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
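The Acquiring/acquired/released triples around "compute_resources" come from oslo.concurrency's decorator-based locking, which emits exactly these DEBUG messages. A minimal sketch of the pattern; the lock name is taken from the log, the function body is hypothetical:

    from oslo_concurrency import lockutils

    # lockutils logs the same Acquiring/acquired/released DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        # Audit hypervisor resources while serialized against other periodic tasks.
        pass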
Dec 03 02:01:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 187 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Dec 03 02:01:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1481947854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.563 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.564 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.565 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.861 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.861 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.861 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.862 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:01:05 compute-0 ceph-mon[192821]: pgmap v1395: 321 pgs: 321 active+clean; 187 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Dec 03 02:01:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.464 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.484 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.485 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.485 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.486 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:07 compute-0 ceph-mon[192821]: pgmap v1396: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:07 compute-0 nova_compute[351485]: 2025-12-03 02:01:07.965 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:08 compute-0 nova_compute[351485]: 2025-12-03 02:01:08.194 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:08 compute-0 nova_compute[351485]: 2025-12-03 02:01:08.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:08 compute-0 nova_compute[351485]: 2025-12-03 02:01:08.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:09 compute-0 ceph-mon[192821]: pgmap v1397: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:11 compute-0 ceph-mon[192821]: pgmap v1398: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:12 compute-0 nova_compute[351485]: 2025-12-03 02:01:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:01:12 compute-0 nova_compute[351485]: 2025-12-03 02:01:12.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:01:12 compute-0 nova_compute[351485]: 2025-12-03 02:01:12.969 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:13 compute-0 nova_compute[351485]: 2025-12-03 02:01:13.197 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:13 compute-0 ceph-mon[192821]: pgmap v1399: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 03 02:01:13 compute-0 podman[424091]: 2025-12-03 02:01:13.873014171 +0000 UTC m=+0.112088088 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:01:13 compute-0 podman[424089]: 2025-12-03 02:01:13.885865673 +0000 UTC m=+0.122806700 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 03 02:01:13 compute-0 podman[424090]: 2025-12-03 02:01:13.901109582 +0000 UTC m=+0.136442384 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 03 02:01:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 594 KiB/s wr, 47 op/s
Dec 03 02:01:15 compute-0 ceph-mon[192821]: pgmap v1400: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 594 KiB/s wr, 47 op/s
Dec 03 02:01:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 439 KiB/s wr, 38 op/s
Dec 03 02:01:17 compute-0 ceph-mon[192821]: pgmap v1401: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 439 KiB/s wr, 38 op/s
Dec 03 02:01:17 compute-0 nova_compute[351485]: 2025-12-03 02:01:17.973 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:18 compute-0 nova_compute[351485]: 2025-12-03 02:01:18.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 02:01:18 compute-0 podman[424146]: 2025-12-03 02:01:18.875202066 +0000 UTC m=+0.126428622 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.505 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.505 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.517 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.522 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.523 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 03 02:01:19 compute-0 ceph-mon[192821]: pgmap v1402: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 03 02:01:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 8.7 KiB/s wr, 2 op/s
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.540 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 03 Dec 2025 02:01:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a52c8bb6-5014-4956-b2e0-eabcad9f47d2 x-openstack-request-id: req-a52c8bb6-5014-4956-b2e0-eabcad9f47d2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.541 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274", "name": "vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {"metering.server_group": "0f6ab671-23df-4a6d-9613-02f9fb5fb294"}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T02:00:14Z", "updated": "2025-12-03T02:00:26Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.227", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:1b:b0"}, {"version": 4, "addr": "192.168.122.186", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:1b:b0"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:00:26.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.541 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 used request id req-a52c8bb6-5014-4956-b2e0-eabcad9f47d2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.543 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.548 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.549 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:01:20.550070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.605 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.645 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 49.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.676 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
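The memory.usage volumes above (about 49 MB against a 512 MB m1.small flavor) are reported in MB, while libvirt's memory counters are in KiB. A sketch of the unit conversion, assuming a simplified "actual minus unused" definition and stat values invented to reproduce the 49.16015625 sample:

    def memory_usage_mb(memory_stats_kib):
        # memory.usage here is roughly "actual minus unused", KiB -> MB.
        actual = memory_stats_kib["actual"]
        unused = memory_stats_kib.get("unused", 0)
        return (actual - unused) / 1024.0

    stats = {"actual": 524288, "unused": 473948}  # 512 MiB domain, mostly idle
    print(memory_usage_mb(stats))  # 49.16015625, matching the sample above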
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:01:20.678804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.685 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.691 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 / tapd0c565d0-52 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.691 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.697 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
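The "No delta meter predecessor" message above marks the first observation of vNIC tapd0c565d0-52 on the newly launched instance: delta-style meters diff the current reading against a cached previous one, and on the first poll there is nothing to diff against. A self-contained sketch of that bookkeeping (cache layout and function name are assumptions):

    _prev = {}  # (instance_id, vnic) -> last cumulative reading

    def vnic_delta(instance_id, vnic, current):
        key = (instance_id, vnic)
        prev = _prev.get(key)
        _prev[key] = current
        if prev is None:
            print("No delta meter predecessor for %s / %s" % (instance_id, vnic))
            return None  # first observation: nothing to diff against
        return current - prev

    print(vnic_delta("55bfde08", "tapd0c565d0-52", 1486))  # None on first poll
    print(vnic_delta("55bfde08", "tapd0c565d0-52", 1570))  # 84 on the next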
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.698 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:01:20.699501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.700 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.701 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.701 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
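Every cycle logs the same coordination check: a polling source can be assigned a coordination group, in which case agents join a hash ring and each polls only the resources that hash to its own bucket. Here the group name is [None], so the check short-circuits and this agent polls everything locally. A naive sketch of what bucket ownership would look like when coordination is enabled (the md5-modulo ring below is illustrative only, not tooz's actual partitioning):

    import hashlib

    def owns_resource(agent_id, all_agents, resource_id):
        """True if agent_id's bucket owns resource_id on a naive hash ring."""
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        bucket = digest % len(all_agents)
        return sorted(all_agents)[bucket] == agent_id

    agents = ["compute-0", "compute-1"]
    print(owns_resource("compute-0", agents,
                        "9182286b-5a08-4961-b4bb-c0e2f05746f7"))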
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.702 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.704 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.704 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:01:20.703338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:01:20.724409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.726 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.727 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:01:20.728372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:01:20.730956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.777 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.778 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.779 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.810 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.811 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.812 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.860 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.860 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.861 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
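disk.device.capacity emits three samples per instance because the pollster walks every block device of the domain; given the m1.small flavor (disk=1, ephemeral=1) and "config_drive": "True" in the RESP BODY, the two 1073741824-byte samples and the small third one plausibly map to the root disk, the ephemeral disk, and the config drive. A sketch with that assumed device layout (device names are guesses, not taken from the log):

    devices = {
        "vda": 1073741824,  # 1 GiB root disk (flavor disk=1)
        "vdb": 1073741824,  # 1 GiB ephemeral disk (flavor ephemeral=1)
        "sda": 583680,      # config drive; name and bus are assumptions
    }

    for dev, capacity_bytes in devices.items():
        print("55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity"
              " (%s) volume: %d" % (dev, capacity_bytes))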
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.864 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.864 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>]
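The ERROR above is deliberate, not a crash: rate meters need two consecutive readings and the libvirt inspector does not keep them, so the pollster raises PollsterPermanentError and the manager blacklists the affected resources for network.incoming.bytes.rate so they are never retried. A sketch of that blacklisting flow (the class shape and dict names are assumptions, not ceilometer's real internals):

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    _blacklist = {}  # pollster name -> resources never to poll again

    def run_pollster(name, resources, get_samples):
        todo = [r for r in resources if r not in _blacklist.get(name, set())]
        try:
            return get_samples(todo)
        except PollsterPermanentError as err:
            _blacklist.setdefault(name, set()).update(err.fail_res_list)
            print("Prevent pollster %s from polling %s anymore!"
                  % (name, err.fail_res_list))
            return []

    def rate_samples(resources):
        # the inspector keeps no rate data, so any resource fails permanently
        if resources:
            raise PollsterPermanentError(resources)
        return []

    servers = ["vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j"]
    run_pollster("network.incoming.bytes.rate", servers, rate_samples)  # blacklists
    run_pollster("network.incoming.bytes.rate", servers, rate_samples)  # skipped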
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:01:20.863835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.866 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:01:20.866164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.980 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.981 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.981 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.066 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.066 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.067 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.166 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.167 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.168 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.169 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
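The three disk.device.read.bytes samples per instance come from libvirt's per-device block counters: virDomain.blockStats() returns a (rd_req, rd_bytes, wr_req, wr_bytes, errs) tuple per device, and rd_bytes is emitted as a cumulative sample. A sketch with a stub standing in for the libvirt call; the tuple values mirror the samples logged for instance 55bfde08 above, except the wr_req counts, which are invented:

    def block_stats(dev):
        # Stand-in for libvirt's virDomain.blockStats(dev); returns
        # (rd_req, rd_bytes, wr_req, wr_bytes, errs) per block device.
        return {
            "vda": (840, 23308800, 500, 41697280, 0),
            "vdb": (173, 3227648, 1, 512, 0),
            "sda": (124, 385378, 0, 0, 0),
        }[dev]

    for dev in ("vda", "vdb", "sda"):
        rd_req, rd_bytes, wr_req, wr_bytes, errs = block_stats(dev)
        print("disk.device.read.bytes (%s) volume: %d" % (dev, rd_bytes))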
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.169 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.171 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:01:21.170673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.172 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.173 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 1962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.176 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:01:21.175510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.177 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.177 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.179 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.180 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.180 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:01:21.182670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.183 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.184 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.184 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.185 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.185 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.187 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.189 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.189 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.190 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.190 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:01:21.189174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
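power.state reports 1 for all three instances, consistent with "OS-EXT-STS:power_state": 1 in the RESP BODY earlier: the pollster maps the libvirt domain state onto nova-style power-state codes, where 1 means RUNNING. A sketch of a plausible mapping (a simplification, not ceilometer's exact table):

    # Nova power-state codes (nova.compute.power_state).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    LIBVIRT_TO_POWER_STATE = {
        "VIR_DOMAIN_RUNNING": RUNNING,
        "VIR_DOMAIN_PAUSED": PAUSED,
        "VIR_DOMAIN_SHUTOFF": SHUTDOWN,
        "VIR_DOMAIN_CRASHED": CRASHED,
        "VIR_DOMAIN_PMSUSPENDED": SUSPENDED,
    }

    print(LIBVIRT_TO_POWER_STATE["VIR_DOMAIN_RUNNING"])  # -> 1, as logged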
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.193 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.193 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.194 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.194 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:01:21.192919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.195 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.196 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.196 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.197 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.197 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:01:21.199664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.200 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.200 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.201 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.201 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.201 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.202 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.202 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.203 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.203 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:01:21.205761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.206 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6964190045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.206 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.207 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.207 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5318095604 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:01:21.210264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.212 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.212 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.212 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
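Taken together with the disk.device.write.latency burst above, these request counts allow a rough per-write latency estimate, assuming the latency meter is cumulative nanoseconds spent on writes (libvirt's wr_total_times), the requests meter counts completed writes, and the sample ordering pairs the same device on instance 52862152-...:

    total_write_ns = 6_964_190_045  # disk.device.write.latency sample above
    total_writes = 237              # disk.device.write.requests, same device
    print(total_write_ns / total_writes / 1e6, "ms per write")  # ~29.4 ms avg since boot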
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.214 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.214 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:01:21.213672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:01:21.215733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 273860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 33970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 38600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
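The cpu meter is likewise cumulative (guest CPU time in nanoseconds), so a utilisation percentage only falls out of two successive polls. A minimal sketch; the previous reading, polling interval, and vCPU count below are assumptions for illustration:

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        # fraction of available CPU time the guest consumed between two polls
        return (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    # e.g. instance 52862152-... at its logged value, 300 s after a hypothetical
    # earlier reading, on a 2-vCPU flavor:
    print(cpu_util_percent(273_800_000_000, 273_860_000_000, 300, 2))  # 0.01 %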
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 4896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:01:21.217508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:01:21.218787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.221 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.221 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.221 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:01:21.220494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
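The allocation samples decode cleanly: 1073741824 is exactly 1 GiB, so each instance appears to report two fully-allocated 1 GiB devices plus one small, presumably sparse, device (583680 or 485376 bytes actually allocated). Which value maps to which device is an assumption here; the arithmetic itself is just:

    print(1073741824 == 2**30)   # True: a 1 GiB device
    print(583680 / 1024, "KiB")  # 570.0 KiB actually allocated (sparse image)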
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:01:21.224071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:01:21.225708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:01:21.227077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
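The .delta meters carry the change since the previous poll rather than a since-boot total, which is why these values (70, 0, 70 bytes) are tiny next to the cumulative network.outgoing.bytes samples above. A sketch of the transformation, assuming a simple per-instance cache of the last cumulative reading:

    last = {}

    def to_delta(instance, cumulative):
        # None on the first observation, else the non-negative difference
        prev = last.get(instance)
        last[instance] = cumulative
        return None if prev is None else max(cumulative - prev, 0)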
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:01:21.228918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>]
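This ERROR is ceilometer's permanent-blacklist path, not a crash: the preceding DEBUG line says the LibvirtInspector simply has no rate data, so the pollster raises PollsterPermanentError and the manager stops polling those resources on this source. A condensed sketch of the plugin-side contract, with the _has_rate_data helper purely hypothetical:

    from ceilometer.polling import plugin_base

    class OutgoingBytesRateSketch(plugin_base.PollsterBase):
        """Illustrative only; the real pollster lives in ceilometer.compute."""
        def get_samples(self, manager, cache, resources):
            unsupported = [r for r in resources if not self._has_rate_data(r)]
            if unsupported:
                # the manager catches this and logs "Prevent pollster ... anymore!"
                raise plugin_base.PollsterPermanentError(unsupported)
            return []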
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:01:21 compute-0 ceph-mon[192821]: pgmap v1403: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 8.7 KiB/s wr, 2 op/s
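Interleaved with the telemetry agent, ceph-mon repeats a one-line cluster summary. All 321 placement groups are active+clean, i.e. fully healthy, and the line is regular enough to parse mechanically:

    import re

    line = ("pgmap v1403: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, "
            "60 GiB / 60 GiB avail; 1023 B/s rd, 8.7 KiB/s wr, 2 op/s")
    m = re.match(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean", line)
    version, total, clean = map(int, m.groups())
    assert total == clean  # every placement group healthy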
Dec 03 02:01:21 compute-0 podman[424169]: 2025-12-03 02:01:21.911594925 +0000 UTC m=+0.158635029 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, version=9.4, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, release=1214.1726694543)
Dec 03 02:01:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 8.0 KiB/s wr, 6 op/s
Dec 03 02:01:22 compute-0 nova_compute[351485]: 2025-12-03 02:01:22.975 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:23 compute-0 nova_compute[351485]: 2025-12-03 02:01:23.202 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:23 compute-0 ceph-mon[192821]: pgmap v1404: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 8.0 KiB/s wr, 6 op/s
Dec 03 02:01:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 03 02:01:25 compute-0 ceph-mon[192821]: pgmap v1405: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 03 02:01:26 compute-0 podman[424188]: 2025-12-03 02:01:26.096215163 +0000 UTC m=+0.350591715 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 03 02:01:26 compute-0 podman[424190]: 2025-12-03 02:01:26.119032216 +0000 UTC m=+0.353688723 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:01:26 compute-0 podman[424191]: 2025-12-03 02:01:26.119311864 +0000 UTC m=+0.348296671 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 03 02:01:26 compute-0 podman[424189]: 2025-12-03 02:01:26.1514776 +0000 UTC m=+0.389381938 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
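The four podman events above are periodic container healthchecks: podman runs each container's configured test command (visible under the healthcheck key in config_data) and records health_status=healthy with a zero failing streak. The same checks can be invoked by hand; container names are taken from the events above:

    import subprocess

    for name in ("kepler", "ovn_controller", "node_exporter", "multipathd"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")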
Dec 03 02:01:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:01:27 compute-0 ceph-mon[192821]: pgmap v1406: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:01:27 compute-0 nova_compute[351485]: 2025-12-03 02:01:27.980 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:28 compute-0 nova_compute[351485]: 2025-12-03 02:01:28.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:01:28
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control']
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:01:29 compute-0 ceph-mon[192821]: pgmap v1407: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:01:29 compute-0 podman[158098]: time="2025-12-03T02:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:01:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:01:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8640 "" "Go-http-client/1.1"
Dec 03 02:01:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:01:31 compute-0 ceph-mon[192821]: pgmap v1408: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:01:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:01:32 compute-0 nova_compute[351485]: 2025-12-03 02:01:32.983 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:33 compute-0 nova_compute[351485]: 2025-12-03 02:01:33.209 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:33 compute-0 ceph-mon[192821]: pgmap v1409: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:01:34 compute-0 sshd-session[424268]: Received disconnect from 154.113.10.113 port 52184:11: Bye Bye [preauth]
Dec 03 02:01:34 compute-0 sshd-session[424268]: Disconnected from authenticating user root 154.113.10.113 port 52184 [preauth]
Dec 03 02:01:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 03 02:01:35 compute-0 ceph-mon[192821]: pgmap v1410: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 03 02:01:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.7 KiB/s wr, 35 op/s
Dec 03 02:01:37 compute-0 ceph-mon[192821]: pgmap v1411: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.7 KiB/s wr, 35 op/s
Dec 03 02:01:37 compute-0 nova_compute[351485]: 2025-12-03 02:01:37.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:38 compute-0 nova_compute[351485]: 2025-12-03 02:01:38.213 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016578097528814222 of space, bias 1.0, pg target 0.4973429258644267 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:01:39 compute-0 ceph-mon[192821]: pgmap v1412: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:41 compute-0 ceph-mon[192821]: pgmap v1413: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:42 compute-0 nova_compute[351485]: 2025-12-03 02:01:42.994 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:43 compute-0 nova_compute[351485]: 2025-12-03 02:01:43.217 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:43 compute-0 ceph-mon[192821]: pgmap v1414: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:44 compute-0 podman[424270]: 2025-12-03 02:01:44.814960271 +0000 UTC m=+0.100032319 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:01:44 compute-0 podman[424272]: 2025-12-03 02:01:44.829443599 +0000 UTC m=+0.089331797 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:01:44 compute-0 podman[424271]: 2025-12-03 02:01:44.84440146 +0000 UTC m=+0.116480692 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 03 02:01:45 compute-0 ceph-mon[192821]: pgmap v1415: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:01:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018134638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:01:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:01:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018134638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:01:47 compute-0 ceph-mon[192821]: pgmap v1416: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 03 02:01:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1018134638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:01:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1018134638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:01:47 compute-0 nova_compute[351485]: 2025-12-03 02:01:47.996 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:48 compute-0 nova_compute[351485]: 2025-12-03 02:01:48.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:49 compute-0 ceph-mon[192821]: pgmap v1417: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:49 compute-0 podman[424326]: 2025-12-03 02:01:49.887891149 +0000 UTC m=+0.139924382 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:01:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:51 compute-0 ceph-mon[192821]: pgmap v1418: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:52 compute-0 podman[424345]: 2025-12-03 02:01:52.845479088 +0000 UTC m=+0.098532176 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:01:52 compute-0 nova_compute[351485]: 2025-12-03 02:01:52.996 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:53 compute-0 nova_compute[351485]: 2025-12-03 02:01:53.224 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:53 compute-0 ceph-mon[192821]: pgmap v1419: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:55 compute-0 ceph-mon[192821]: pgmap v1420: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:56 compute-0 podman[424368]: 2025-12-03 02:01:56.877357175 +0000 UTC m=+0.113495488 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 03 02:01:56 compute-0 podman[424367]: 2025-12-03 02:01:56.883447266 +0000 UTC m=+0.117634194 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:01:56 compute-0 podman[424366]: 2025-12-03 02:01:56.90666642 +0000 UTC m=+0.149472971 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350)
Dec 03 02:01:56 compute-0 podman[424365]: 2025-12-03 02:01:56.917983919 +0000 UTC m=+0.162970371 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:01:57 compute-0 ceph-mon[192821]: pgmap v1421: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:58 compute-0 nova_compute[351485]: 2025-12-03 02:01:58.000 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:01:58 compute-0 nova_compute[351485]: 2025-12-03 02:01:58.227 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:01:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:01:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:01:59.629 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:01:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:01:59.630 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:01:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:01:59.631 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:01:59 compute-0 podman[158098]: time="2025-12-03T02:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:01:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:01:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec 03 02:01:59 compute-0 ceph-mon[192821]: pgmap v1422: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:00 compute-0 sudo[424444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:00 compute-0 sudo[424444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:00 compute-0 sudo[424444]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:00 compute-0 sudo[424469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:02:00 compute-0 sudo[424469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:00 compute-0 sudo[424469]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:01 compute-0 sudo[424494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:01 compute-0 sudo[424494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:01 compute-0 sudo[424494]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:01 compute-0 sudo[424519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 02:02:01 compute-0 sudo[424519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:02:01 compute-0 nova_compute[351485]: 2025-12-03 02:02:01.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:01 compute-0 sudo[424519]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:02:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:02:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:01 compute-0 sudo[424562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:01 compute-0 sudo[424562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:01 compute-0 sudo[424562]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:01 compute-0 ceph-mon[192821]: pgmap v1423: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:01 compute-0 sudo[424587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:02:01 compute-0 sudo[424587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:01 compute-0 sudo[424587]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:02 compute-0 sudo[424612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:02 compute-0 sudo[424612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:02 compute-0 sudo[424612]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:02 compute-0 sudo[424637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:02:02 compute-0 sudo[424637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:02 compute-0 nova_compute[351485]: 2025-12-03 02:02:02.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:02 compute-0 sudo[424637]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fdd795ba-a08d-42a7-b758-04a3e4b61687 does not exist
Dec 03 02:02:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5aaff57f-d11f-4a6a-81e4-86c424f22628 does not exist
Dec 03 02:02:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 86c89704-b372-434c-8c65-acf4ed3a762b does not exist
Dec 03 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.004 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:03 compute-0 sudo[424693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:03 compute-0 sudo[424693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:03 compute-0 sudo[424693]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:03 compute-0 sudo[424718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:02:03 compute-0 sudo[424718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:03 compute-0 sudo[424718]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.231 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:03 compute-0 sudo[424743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:03 compute-0 sudo[424743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:03 compute-0 sudo[424743]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:03 compute-0 sudo[424768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:02:03 compute-0 sudo[424768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:02:03 compute-0 ceph-mon[192821]: pgmap v1424: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:02:03 compute-0 podman[424831]: 2025-12-03 02:02:03.987133719 +0000 UTC m=+0.095634095 container create f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:03.939739354 +0000 UTC m=+0.048239810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:02:04 compute-0 systemd[1]: Started libpod-conmon-f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f.scope.
Dec 03 02:02:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.138001578 +0000 UTC m=+0.246501974 container init f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.15085827 +0000 UTC m=+0.259358636 container start f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.156612042 +0000 UTC m=+0.265112448 container attach f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 02:02:04 compute-0 systemd[1]: libpod-f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f.scope: Deactivated successfully.
Dec 03 02:02:04 compute-0 xenodochial_bohr[424847]: 167 167
Dec 03 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.163765484 +0000 UTC m=+0.272265870 container died f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8f7d770ef0485f1e428e1759f982fc95b4acb23f40cfa5d0ff5fbfe029a74cf-merged.mount: Deactivated successfully.
Dec 03 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.226760538 +0000 UTC m=+0.335260904 container remove f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 02:02:04 compute-0 systemd[1]: libpod-conmon-f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f.scope: Deactivated successfully.
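The xenodochial_bohr lifecycle above (create/start/attach, a single "167 167" line, then immediate death and removal) is consistent with cephadm's uid/gid probe: the Ceph image is run once just to report the numeric owner of /var/lib/ceph inside it, 167 being the ceph user and group on these CentOS-based images. A minimal re-creation, assuming the stat entrypoint and probe path (not a verbatim cephadm excerpt):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run the image once, stat /var/lib/ceph, and capture "167 167" from stdout.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = int(out[0]), int(out[1])
    print(f"image ceph uid={uid} gid={gid}")   # the "167 167" seen above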
Dec 03 02:02:04 compute-0 nova_compute[351485]: 2025-12-03 02:02:04.373 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:02:04 compute-0 nova_compute[351485]: 2025-12-03 02:02:04.375 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:02:04 compute-0 nova_compute[351485]: 2025-12-03 02:02:04.375 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.467306613 +0000 UTC m=+0.065093384 container create a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:02:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.440820237 +0000 UTC m=+0.038607048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:02:04 compute-0 systemd[1]: Started libpod-conmon-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope.
Dec 03 02:02:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
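The "supports timestamps until 2038" lines are informational: each bind-mounted path sits on an XFS filesystem whose inodes carry 32-bit timestamps (no bigtime feature), so the kernel notes the y2038 limit on every remount; nothing is failing. A hedged way to check the flag, assuming recent xfsprogs that print bigtime in xfs_info output:

    import subprocess

    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    print("64-bit timestamps" if "bigtime=1" in info
          else "32-bit timestamps, y2038 limit applies")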
Dec 03 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.622927356 +0000 UTC m=+0.220714217 container init a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.652079697 +0000 UTC m=+0.249866508 container start a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.66071625 +0000 UTC m=+0.258503111 container attach a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 02:02:05 compute-0 serene_gates[424885]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:02:05 compute-0 serene_gates[424885]: --> relative data size: 1.0
Dec 03 02:02:05 compute-0 serene_gates[424885]: --> All data devices are unavailable
Dec 03 02:02:05 compute-0 ceph-mon[192821]: pgmap v1425: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:05 compute-0 systemd[1]: libpod-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope: Deactivated successfully.
Dec 03 02:02:05 compute-0 systemd[1]: libpod-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope: Consumed 1.193s CPU time.
Dec 03 02:02:05 compute-0 podman[424914]: 2025-12-03 02:02:05.963839922 +0000 UTC m=+0.043005963 container died a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a-merged.mount: Deactivated successfully.
Dec 03 02:02:06 compute-0 podman[424914]: 2025-12-03 02:02:06.090489849 +0000 UTC m=+0.169655850 container remove a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:02:06 compute-0 systemd[1]: libpod-conmon-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope: Deactivated successfully.
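The serene_gates run above is a ceph-volume report-style probe: "passed data devices: 0 physical, 3 LVM" plus "All data devices are unavailable" means the three logical volumes offered by the drive group are already consumed by existing OSDs, so this reconciliation pass creates nothing. A sketch of the same availability check via ceph-volume's inventory JSON; the "available" and "rejected_reasons" fields are from ceph-volume's documented inventory format, and the exact probe cephadm ran here is not shown in the log:

    import json
    import subprocess

    raw = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw):
        if dev.get("available"):
            print(dev["path"], "-> usable")
        else:
            print(dev["path"], "-> unavailable:",
                  ", ".join(dev.get("rejected_reasons", [])))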
Dec 03 02:02:06 compute-0 sudo[424768]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:06 compute-0 sudo[424929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:06 compute-0 sudo[424929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:06 compute-0 sudo[424929]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:06 compute-0 sudo[424954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:02:06 compute-0 sudo[424954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:06 compute-0 sudo[424954]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:06 compute-0 sudo[424979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:06 compute-0 sudo[424979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:06 compute-0 sudo[424979]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:06 compute-0 sudo[425004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:02:06 compute-0 sudo[425004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
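The sudo triplet preceding each cephadm call (/bin/true, /bin/which python3, then the checksummed cephadm copy under /var/lib/ceph/<fsid>/ with --timeout 895) is the orchestrator's remote-execution pattern: confirm passwordless sudo works, locate an interpreter, then run the managed script with a deadline. An illustrative local equivalent, not the mgr/cephadm SSH code itself:

    import subprocess

    def run_cephadm(args, timeout=895):
        # 1. passwordless-sudo probe, same as COMMAND=/bin/true above
        subprocess.run(["sudo", "/bin/true"], check=True)
        # 2. locate an interpreter, same as COMMAND=/bin/which python3
        py = subprocess.run(["sudo", "/bin/which", "python3"],
                            capture_output=True, text=True,
                            check=True).stdout.strip()
        # 3. run the managed script with the same deadline the log shows
        return subprocess.run(["sudo", py] + args, timeout=timeout,
                              capture_output=True, text=True,
                              check=True).stdout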
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.745 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.759 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.760 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
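The Acquiring/Acquired/Releasing triple around the cache heal is nova's per-instance serialization: take a lock named refresh_cache-<uuid>, force-refresh network info from Neutron, store it in instance_info_cache, then release. A minimal sketch with oslo.concurrency; get_nw_info and save are stand-ins for the Neutron query and the DB write:

    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid, get_nw_info, save):
        # Serialize refreshes per instance, as the lock lines above do
        # for refresh_cache-52862152-...
        with lockutils.lock("refresh_cache-%s" % instance_uuid):
            nw_info = get_nw_info(instance_uuid)   # forced Neutron query
            save(instance_uuid, nw_info)           # update instance_info_cache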
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.761 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.762 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.762 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.788 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.788 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.789 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.789 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.790 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:02:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401004877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
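update_available_resource on an RBD-backed host shells out to "ceph df --format=json" under the client.openstack identity, which is exactly the dispatch the two mon lines above record on the audit channel. Reading the cluster totals back from that JSON; the stats key names are assumed from ceph's df format:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]        # cluster-wide section of ceph df
    GiB = 1024 ** 3
    print("total %.1f GiB, avail %.1f GiB" %
          (stats["total_bytes"] / GiB, stats["total_avail_bytes"] / GiB))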
Dec 03 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.270213496 +0000 UTC m=+0.089151472 container create 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.269 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.239442589 +0000 UTC m=+0.058380595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:02:07 compute-0 systemd[1]: Started libpod-conmon-3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4.scope.
Dec 03 02:02:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.408 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.410 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.411 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.413967545 +0000 UTC m=+0.232905561 container init 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.426890339 +0000 UTC m=+0.245828315 container start 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.435 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.435917724 +0000 UTC m=+0.254855790 container attach 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.436 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.437 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 agitated_darwin[425106]: 167 167
Dec 03 02:02:07 compute-0 systemd[1]: libpod-3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4.scope: Deactivated successfully.
Dec 03 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.444959708 +0000 UTC m=+0.263897714 container died 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.466 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.467 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.468 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:02:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b43487bf7d9488bffd1b938190402287caed2c356e9e9b4e4353c744f0205c-merged.mount: Deactivated successfully.
Dec 03 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.518459658 +0000 UTC m=+0.337397634 container remove 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:02:07 compute-0 systemd[1]: libpod-conmon-3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4.scope: Deactivated successfully.
Dec 03 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.794127382 +0000 UTC m=+0.073309825 container create 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.767031699 +0000 UTC m=+0.046214142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:02:07 compute-0 systemd[1]: Started libpod-conmon-587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf.scope.
Dec 03 02:02:07 compute-0 ceph-mon[192821]: pgmap v1426: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3401004877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:02:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.914969916 +0000 UTC m=+0.194152369 container init 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.931818461 +0000 UTC m=+0.211000894 container start 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.942191173 +0000 UTC m=+0.221373606 container attach 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.003 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.027 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.028 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3348MB free_disk=59.888832092285156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.028 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.029 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.236 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.335 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.336 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.338 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.339 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.340 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
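The final resource view is plain bookkeeping over the three allocations listed above (each instance holds VCPU:1, MEMORY_MB:512, DISK_GB:2) plus the 512 MB host memory reservation. A worked check:

    instances = 3                 # the three allocations listed above
    reserved_mb = 512             # host memory reservation
    used_ram = reserved_mb + instances * 512   # 2048 MB, as logged
    used_disk = instances * 2                  # 6 GB, as logged
    used_vcpus = instances * 1                 # 3 of 8, as logged
    print(used_ram, used_disk, used_vcpus)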
Dec 03 02:02:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.616 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:08 compute-0 awesome_hermann[425147]: {
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:     "0": [
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:         {
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "devices": [
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "/dev/loop3"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             ],
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_name": "ceph_lv0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_size": "21470642176",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "name": "ceph_lv0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "tags": {
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cluster_name": "ceph",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.crush_device_class": "",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.encrypted": "0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osd_id": "0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.type": "block",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.vdo": "0"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             },
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "type": "block",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "vg_name": "ceph_vg0"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:         }
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:     ],
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:     "1": [
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:         {
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "devices": [
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "/dev/loop4"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             ],
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_name": "ceph_lv1",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_size": "21470642176",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "name": "ceph_lv1",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "tags": {
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cluster_name": "ceph",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.crush_device_class": "",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.encrypted": "0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osd_id": "1",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.type": "block",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.vdo": "0"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             },
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "type": "block",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "vg_name": "ceph_vg1"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:         }
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:     ],
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:     "2": [
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:         {
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "devices": [
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "/dev/loop5"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             ],
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_name": "ceph_lv2",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_size": "21470642176",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "name": "ceph_lv2",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "tags": {
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.cluster_name": "ceph",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.crush_device_class": "",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.encrypted": "0",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osd_id": "2",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.type": "block",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:                 "ceph.vdo": "0"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             },
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "type": "block",
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:             "vg_name": "ceph_vg2"
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:         }
Dec 03 02:02:08 compute-0 awesome_hermann[425147]:     ]
Dec 03 02:02:08 compute-0 awesome_hermann[425147]: }
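The block above is the pretty-printed result of the "ceph-volume lvm list --format json" call issued at 02:02:06: a map from OSD id to its logical volumes, with the metadata duplicated between the lv_tags string and the parsed tags object. Flattening it into an OSD-to-device table takes a few lines:

    import json

    def osd_table(lvm_list_json):
        """Map OSD id -> (lv_path, backing devices, osd fsid)."""
        table = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                table[int(osd_id)] = (lv["lv_path"], lv["devices"],
                                      lv["tags"]["ceph.osd_fsid"])
        return table

    # e.g. {0: ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"], "551e0f4a-...")}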
Dec 03 02:02:08 compute-0 systemd[1]: libpod-587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf.scope: Deactivated successfully.
Dec 03 02:02:08 compute-0 podman[425130]: 2025-12-03 02:02:08.759797381 +0000 UTC m=+1.038979824 container died 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787-merged.mount: Deactivated successfully.
Dec 03 02:02:08 compute-0 podman[425130]: 2025-12-03 02:02:08.850152836 +0000 UTC m=+1.129335279 container remove 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 02:02:08 compute-0 systemd[1]: libpod-conmon-587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf.scope: Deactivated successfully.
Dec 03 02:02:08 compute-0 sudo[425004]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:09 compute-0 sudo[425188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:09 compute-0 sudo[425188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:09 compute-0 sudo[425188]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:02:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446100229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.178 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.196 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:02:09 compute-0 sudo[425213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:02:09 compute-0 sudo[425213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:09 compute-0 sudo[425213]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.220 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
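Placement sizes each resource class as (total - reserved) * allocation_ratio, so the inventory above allows up to 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk. A worked check:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        capacity = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, "schedulable:", capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2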
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.225 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.226 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.228 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.228 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.253 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:02:09 compute-0 sudo[425240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:09 compute-0 sudo[425240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:09 compute-0 sudo[425240]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:09 compute-0 sudo[425265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:02:09 compute-0 sudo[425265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
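Immediately after the LVM inventory, cephadm repeats the query as "raw list" to catch OSDs prepared directly on raw devices; on this host every OSD is LVM-backed, so an empty result is the expected outcome. A hedged sketch of the same pass-through call, with --fsid omitted and the key semantics of the raw listing an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    osds = json.loads(out)
    print("raw-prepared OSDs:",
          sorted(osds) if osds else "none (all OSDs are LVM-backed)")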
Dec 03 02:02:09 compute-0 ceph-mon[192821]: pgmap v1427: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2446100229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:02:10 compute-0 nova_compute[351485]: 2025-12-03 02:02:10.070 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:10 compute-0 nova_compute[351485]: 2025-12-03 02:02:10.071 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.094929414 +0000 UTC m=+0.096124089 container create 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.058676363 +0000 UTC m=+0.059871108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:02:10 compute-0 systemd[1]: Started libpod-conmon-3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c.scope.
Dec 03 02:02:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.223758552 +0000 UTC m=+0.224953267 container init 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.237489059 +0000 UTC m=+0.238683734 container start 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.242995634 +0000 UTC m=+0.244190349 container attach 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:02:10 compute-0 gallant_wilbur[425343]: 167 167
Dec 03 02:02:10 compute-0 systemd[1]: libpod-3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c.scope: Deactivated successfully.
Dec 03 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.25102274 +0000 UTC m=+0.252217415 container died 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 02:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1c46f2db41cb440675ace8933337a3502442fd030704a5c4274faa6d410eb6a-merged.mount: Deactivated successfully.
Dec 03 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.312894823 +0000 UTC m=+0.314089488 container remove 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:02:10 compute-0 systemd[1]: libpod-conmon-3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c.scope: Deactivated successfully.
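
The create, init, start, attach, died, remove sequence above is the footprint of a one-shot podman run --rm, which is how cephadm executes its short-lived helper containers; the "167 167" printed by gallant_wilbur looks like a uid/gid probe of the ceph user (uid 167 in RHEL packaging). A hedged reconstruction (image digest from the log; the probed path and stat format are assumptions):

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# One-shot container: podman emits the same create/init/start/attach/
# died/remove journal events seen above, then --rm cleans it up.
out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "167 167" (ceph uid/gid)
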
Dec 03 02:02:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:10 compute-0 nova_compute[351485]: 2025-12-03 02:02:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.583300919 +0000 UTC m=+0.087244629 container create 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.546666477 +0000 UTC m=+0.050610247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:02:10 compute-0 systemd[1]: Started libpod-conmon-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope.
Dec 03 02:02:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.751746203 +0000 UTC m=+0.255689903 container init 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.765448839 +0000 UTC m=+0.269392509 container start 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.770336026 +0000 UTC m=+0.274279736 container attach 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]: {
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "osd_id": 2,
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "type": "bluestore"
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:     },
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "osd_id": 1,
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "type": "bluestore"
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:     },
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "osd_id": 0,
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:         "type": "bluestore"
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]:     }
Dec 03 02:02:11 compute-0 compassionate_northcutt[425381]: }
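
The JSON block above, emitted by ceph-volume raw list inside the helper container, maps each OSD uuid to its backing logical volume. A short consumer sketch (reading from a hypothetical capture file):

import json

with open("raw_list.json") as f:  # hypothetical capture of the output above
    osds = json.load(f)

for uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{info['osd_id']}: {info['device']} "
          f"(type={info['type']}, fsid={info['ceph_fsid']})")
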
Dec 03 02:02:11 compute-0 systemd[1]: libpod-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope: Deactivated successfully.
Dec 03 02:02:11 compute-0 podman[425366]: 2025-12-03 02:02:11.804300108 +0000 UTC m=+1.308243788 container died 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:02:11 compute-0 systemd[1]: libpod-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope: Consumed 1.022s CPU time.
Dec 03 02:02:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e-merged.mount: Deactivated successfully.
Dec 03 02:02:11 compute-0 podman[425366]: 2025-12-03 02:02:11.880439752 +0000 UTC m=+1.384383432 container remove 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:02:11 compute-0 systemd[1]: libpod-conmon-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope: Deactivated successfully.
Dec 03 02:02:11 compute-0 ceph-mon[192821]: pgmap v1428: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:11 compute-0 sudo[425265]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:02:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:02:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7bbd2736-8d9e-4283-826d-ceb4174cbb83 does not exist
Dec 03 02:02:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f689d914-fb4e-4ea7-98c4-9949e7684ed7 does not exist
Dec 03 02:02:12 compute-0 sudo[425428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:02:12 compute-0 sudo[425428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:12 compute-0 sudo[425428]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:12 compute-0 sudo[425453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:02:12 compute-0 sudo[425453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:02:12 compute-0 sudo[425453]: pam_unix(sudo:session): session closed for user root
Dec 03 02:02:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.006 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.239 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.579 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
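
_reclaim_queued_deletes returns immediately because reclaim_instance_interval is at its default of 0, so deleted instances are purged at once rather than soft-deleted for later reclaim. A sketch of the option through oslo.config (the option is real Nova configuration; registering it by hand is only for the sketch):

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

if CONF.reclaim_instance_interval <= 0:
    # Matches the log: the periodic task skips all work.
    print("CONF.reclaim_instance_interval <= 0, skipping...")
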
Dec 03 02:02:13 compute-0 ceph-mon[192821]: pgmap v1429: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:15 compute-0 podman[425480]: 2025-12-03 02:02:15.898059857 +0000 UTC m=+0.134590491 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:02:15 compute-0 podman[425478]: 2025-12-03 02:02:15.902689198 +0000 UTC m=+0.138295276 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:02:15 compute-0 podman[425479]: 2025-12-03 02:02:15.910898609 +0000 UTC m=+0.146185628 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Dec 03 02:02:15 compute-0 ceph-mon[192821]: pgmap v1430: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.481 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.482 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:02:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:16 compute-0 nova_compute[351485]: 2025-12-03 02:02:16.485 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.488 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.489 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
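
The two "Matched UPDATE" lines show the metadata agent's SbGlobalUpdateEvent reacting to nb_cfg bumps on the SB_Global table. A minimal ovsdbapp row event with the same match spec (the handler body is illustrative, not Neutron's actual delayed chassis update):

from ovsdbapp.backend.ovs_idl import event

class SbGlobalUpdateEvent(event.RowEvent):
    """Fires on SB_Global updates, e.g. an nb_cfg bump."""

    def __init__(self):
        # events=('update',), table='SB_Global', conditions=None,
        # exactly as printed in the DEBUG lines above.
        super().__init__((self.ROW_UPDATE,), "SB_Global", None)

    def run(self, event, row, old):
        print(f"SB_Global updated: nb_cfg={row.nb_cfg}")
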
Dec 03 02:02:16 compute-0 nova_compute[351485]: 2025-12-03 02:02:16.492 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:16 compute-0 sshd-session[425540]: Received disconnect from 146.190.144.138 port 35006:11: Bye Bye [preauth]
Dec 03 02:02:16 compute-0 sshd-session[425540]: Disconnected from authenticating user root 146.190.144.138 port 35006 [preauth]
Dec 03 02:02:17 compute-0 ceph-mon[192821]: pgmap v1431: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:18 compute-0 nova_compute[351485]: 2025-12-03 02:02:18.009 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:18 compute-0 nova_compute[351485]: 2025-12-03 02:02:18.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:20 compute-0 ceph-mon[192821]: pgmap v1432: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:20 compute-0 nova_compute[351485]: 2025-12-03 02:02:20.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:20 compute-0 nova_compute[351485]: 2025-12-03 02:02:20.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:02:20 compute-0 nova_compute[351485]: 2025-12-03 02:02:20.613 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:02:20 compute-0 podman[425543]: 2025-12-03 02:02:20.890477737 +0000 UTC m=+0.153172125 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
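
The health_status entries above come from podman's timer-driven healthchecks, using the test command and mount declared in each container's config_data. The same check can be triggered on demand; a sketch (container names from the log; exit status 0 means healthy):

import subprocess

for name in ("podman_exporter", "ovn_metadata_agent",
             "ceilometer_agent_compute", "ceilometer_agent_ipmi"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")
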
Dec 03 02:02:22 compute-0 ceph-mon[192821]: pgmap v1433: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.609 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.611 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.662 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.781 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.782 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.796 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.798 351492 INFO nova.compute.claims [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.977 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.011 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.247 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:02:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588833365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.504 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
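
Before claiming disk for the new instance, Nova shells out to ceph df --format=json (0.527s here) to size the RBD backend. A sketch issuing the same command and reading the cluster totals (the stats keys follow the standard ceph df JSON schema; the --id/--conf values come from the log):

import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
)
stats = json.loads(out.stdout)["stats"]  # cluster-wide totals
print(f"total={stats['total_bytes']} B avail={stats['total_avail_bytes']} B")
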
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.518 351492 DEBUG nova.compute.provider_tree [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.537 351492 DEBUG nova.scheduler.client.report [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.575 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
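
The "compute_resources" acquire/release pair around the claim (held 0.792s) is oslo.concurrency's lockutils at work. A minimal sketch reusing the lock name from the log (the function itself is illustrative):

from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def instance_claim():
    # Runs with the named lock held; acquisition and release are logged
    # at DEBUG, matching the nova_compute lines above.
    pass

instance_claim()
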
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.576 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.632 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.634 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.664 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.712 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.812 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.814 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.815 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Creating image(s)
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.865 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:23 compute-0 podman[425585]: 2025-12-03 02:02:23.908363064 +0000 UTC m=+0.166582592 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.919 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.982 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.992 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:24 compute-0 ceph-mon[192821]: pgmap v1434: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1588833365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.060 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
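
Note how qemu-img info runs under oslo_concurrency.prlimit, capping address space at 1 GiB and CPU time at 30 s while probing the image, since a hostile qcow2 header can otherwise consume unbounded resources. The same guard through processutils (paths and limits copied from the log; a sketch, not Nova's exact call site):

from oslo_concurrency import processutils

limits = processutils.ProcessLimits(
    address_space=1073741824,  # --as=1073741824
    cpu_time=30,               # --cpu=30
)
stdout, _stderr = processutils.execute(
    "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
    "/var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8",
    "--force-share", "--output=json",
    prlimit=limits,
)
print(stdout)
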
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.061 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.062 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.063 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.112 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.133 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.531 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.673 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
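
Since no b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk image existed, Nova imports the cached base file into the vms pool and then grows it to the flavor's 1 GiB root disk. The equivalent CLI flow, scripted (values copied from the log; a sketch of the sequence only, as Nova itself performs the resize through the librbd Python bindings):

import subprocess

BASE = "/var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8"
IMG = "b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk"
CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Import the flat base image as a format-2 RBD image in the vms pool.
subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMG,
                "--image-format=2", *CEPH], check=True)
# Resize to 1 GiB (rbd --size is in MiB by default).
subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024",
                IMG, *CEPH], check=True)
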
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.871 351492 DEBUG nova.objects.instance [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.920 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.971 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.979 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.044 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.046 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.047 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.047 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.087 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.100 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.126 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Successfully updated port: 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.149 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.150 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.150 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:02:25 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.242 351492 DEBUG nova.compute.manager [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.243 351492 DEBUG nova.compute.manager [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing instance network info cache due to event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.243 351492 DEBUG oslo_concurrency.lockutils [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.292 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:02:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:25.492 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.887 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.787s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
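
[editor's note] The "Running cmd" / "CMD ... returned" pair above is oslo.concurrency's processutils wrapper, which Nova's RBD image backend uses to shell out; the log records the 0.787s wall time around the child process. The same call reproduced as a sketch (arguments copied from the log, not Nova's own code path):

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on a non-zero exit code.
    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/ephemeral_1_0706d66',
        'b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
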
Dec 03 02:02:26 compute-0 ceph-mon[192821]: pgmap v1435: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.164 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.165 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Ensure instance console log exists: /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.168 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.168 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.168 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:26.486 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 228 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.168 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
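
[editor's note] The network_info payload above is plain JSON, which makes it easy to pull the addresses out when eyeballing a build. A quick parse of the structure exactly as logged (the `payload` variable is hypothetical, standing in for the JSON list above):

    import json

    network_info = json.loads(payload)
    for v in network_info:
        for subnet in v['network']['subnets']:
            for ip in subnet['ips']:
                print(v['id'],
                      ip['address'],                               # 192.168.0.85
                      [f['address'] for f in ip['floating_ips']])  # 192.168.122.232
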
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.205 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.206 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance network_info: |[{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.206 351492 DEBUG oslo_concurrency.lockutils [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.207 351492 DEBUG nova.network.neutron [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.213 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start _get_guest_xml network_info=[{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.226 351492 WARNING nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.239 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.240 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.249 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.249 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
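
[editor's note] The two probes above show why this host passes: no cpu controller on the legacy cgroups-v1 hierarchy, but one on the unified v2 hierarchy. On a cgroups-v2 host the check effectively amounts to a membership test on the kernel's controller list:

    # Enabled v2 controllers are listed in one file on the unified hierarchy.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())  # True on this host
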
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.250 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.250 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.251 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.251 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.251 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.253 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.253 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
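
[editor's note] The hardware.py lines above enumerate every sockets:cores:threads triple whose product equals the flavor's vCPU count, within the default 65536 per-dimension limits; with vcpus=1 the only solution is 1:1:1, hence the single topology. A toy re-derivation of that search (not Nova's actual implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Each factor of a valid topology divides vcpus, so bounding the
        # ranges by vcpus keeps the search tiny.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
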
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.256 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:02:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200000264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.848 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.850 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:27 compute-0 podman[425922]: 2025-12-03 02:02:27.860776933 +0000 UTC m=+0.113711344 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 02:02:27 compute-0 podman[425923]: 2025-12-03 02:02:27.870872287 +0000 UTC m=+0.109263718 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:02:27 compute-0 podman[425924]: 2025-12-03 02:02:27.87949865 +0000 UTC m=+0.124742404 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:02:27 compute-0 podman[425921]: 2025-12-03 02:02:27.888329969 +0000 UTC m=+0.147933238 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.012 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:28 compute-0 ceph-mon[192821]: pgmap v1436: 321 pgs: 321 active+clean; 228 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec 03 02:02:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/200000264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:02:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.255 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:02:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334003611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.348 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
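
[editor's note] Nova runs `ceph mon dump --format=json` (repeatedly here, once per RBD-backed disk) to learn the monitor addresses that end up in the libvirt <host name=... port=.../> elements further down. A sketch of extracting them from that JSON; the mons → public_addrs → addrvec layout is the usual mon-dump schema, so treat the field names as an assumption for your Ceph release:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(raw)['mons']:
        for addr in mon['public_addrs']['addrvec']:
            if addr['type'] == 'v1':
                print(addr['addr'])  # 192.168.122.100:6789 on this cluster
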
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:02:28
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups']
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.407 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.420 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 228 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:02:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:02:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604432881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.989 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.991 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',id=4,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-54gvmjwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:02:23Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 03 02:02:28 compute-0 nova_compute[351485]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=b43e79bd-550f-42f8-9aa7-980b6bca3f70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.991 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.992 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
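
[editor's note] The conversion logged above maps Nova's network_info dict onto an os-vif versioned object; the plug/unplug machinery then dispatches on that object's type. Building the equivalent object by hand, with field values copied from the repr above (kwargs construction is standard for oslo.versionedobjects, but take the exact field set as an assumption):

    from os_vif.objects import network, vif

    port_id = '6b217cd3-164a-4fb4-8eb6-f1eb3c806963'
    v = vif.VIFOpenVSwitch(
        id=port_id,
        address='fa:16:3e:da:35:ef',
        bridge_name='br-int',
        vif_name='tap6b217cd3-16',
        has_traffic_filtering=True,
        preserve_on_delete=True,
        network=network.Network(id='7ba11691-2711-476c-9191-cb6dfd0efa7d'),
        port_profile=vif.VIFPortProfileOpenVSwitch(interface_id=port_id))
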
Dec 03 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.994 351492 DEBUG nova.objects.instance [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.011 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <uuid>b43e79bd-550f-42f8-9aa7-980b6bca3f70</uuid>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <name>instance-00000004</name>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <memory>524288</memory>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <nova:name>vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw</nova:name>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:02:27</nova:creationTime>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <nova:flavor name="m1.small">
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:memory>512</nova:memory>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:ephemeral>1</nova:ephemeral>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <nova:port uuid="6b217cd3-164a-4fb4-8eb6-f1eb3c806963">
Dec 03 02:02:29 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="192.168.0.85" ipVersion="4"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <system>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <entry name="serial">b43e79bd-550f-42f8-9aa7-980b6bca3f70</entry>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <entry name="uuid">b43e79bd-550f-42f8-9aa7-980b6bca3f70</entry>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </system>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <os>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   </os>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <features>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   </features>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk">
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </source>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0">
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </source>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <target dev="vdb" bus="virtio"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config">
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </source>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:02:29 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:da:35:ef"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <target dev="tap6b217cd3-16"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/console.log" append="off"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <video>
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </video>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:02:29 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:02:29 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:02:29 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:02:29 compute-0 nova_compute[351485]: </domain>
Dec 03 02:02:29 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.012 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Preparing to wait for external event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.012 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.012 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.013 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.014 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',id=4,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-54gvmjwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:02:23Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 03 02:02:29 compute-0 nova_compute[351485]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=b43e79bd-550f-42f8-9aa7-980b6bca3f70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.014 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.015 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.016 351492 DEBUG os_vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.016 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.018 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.018 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.024 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.025 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b217cd3-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.026 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b217cd3-16, col_values=(('external_ids', {'iface-id': '6b217cd3-164a-4fb4-8eb6-f1eb3c806963', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:35:ef', 'vm-uuid': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.029 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:29 compute-0 NetworkManager[48912]: <info>  [1764727349.0310] manager: (tap6b217cd3-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.037 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.040 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.041 351492 INFO os_vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16')
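The successful os-vif plug above is the outcome of the two ovsdbapp transactions logged just before it: an idempotent AddBridgeCommand for br-int, then AddPortCommand plus a DbSetCommand that stamps the Interface row with the Neutron port ID and MAC. A sketch of the same sequence through ovsdbapp's public API, assuming the OVS database listens on its default unix socket (the socket path is an assumption; the other values are taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/var/run/openvswitch/db.sock"  # assumed default socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        # AddPortCommand(bridge=br-int, port=tap6b217cd3-16, may_exist=True)
        txn.add(api.add_port("br-int", "tap6b217cd3-16", may_exist=True))
        # DbSetCommand(table=Interface, record=tap6b217cd3-16, external_ids=...)
        txn.add(api.db_set("Interface", "tap6b217cd3-16",
                           ("external_ids",
                            {"iface-id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963",
                             "iface-status": "active",
                             "attached-mac": "fa:16:3e:da:35:ef",
                             "vm-uuid": "b43e79bd-550f-42f8-9aa7-980b6bca3f70"})))

As the "Transaction caused no change" line shows, the commands are idempotent: br-int already existed, so the first transaction was a no-op.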
Dec 03 02:02:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1334003611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:02:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2604432881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:02:29 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.107 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.108 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.108 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.109 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:da:35:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.110 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Using config drive
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.161 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:29 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:02:28.991 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 03 02:02:29 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:02:29.014 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.687 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Creating config drive at /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.700 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkq1yyx9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:29 compute-0 podman[158098]: time="2025-12-03T02:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:02:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:02:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8645 "" "Go-http-client/1.1"
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.852 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkq1yyx9" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
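The config drive is just an ISO9660 image built from a temporary directory of metadata files. A sketch reproducing the logged mkisofs invocation, with the flags, publisher string, and config-2 volume label copied from the command above (the helper name is mine):

    import subprocess

    def build_config_drive(metadata_dir: str, out_path: str) -> None:
        # Joliet + Rock Ridge ISO labelled config-2, the label cloud-init probes for.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", out_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
             "-quiet", "-J", "-r", "-V", "config-2",
             metadata_dir],
            check=True)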
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.906 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.917 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:02:30 compute-0 ceph-mon[192821]: pgmap v1437: 321 pgs: 321 active+clean; 228 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.202 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.203 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deleting local config drive /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config because it was imported into RBD.
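Because this deployment stores instance disks in Ceph, the freshly built ISO is pushed into the vms pool and the local copy removed, exactly as the two lines above describe. A sketch of that step with all flags copied from the logged rbd command (the helper name is mine):

    import os
    import subprocess

    def import_config_drive(iso_path: str, image_name: str) -> None:
        subprocess.run(
            ["rbd", "import", "--pool", "vms", iso_path, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True)
        os.unlink(iso_path)  # matches "Deleting local config drive ..." above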
Dec 03 02:02:30 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 02:02:30 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 02:02:30 compute-0 NetworkManager[48912]: <info>  [1764727350.3795] manager: (tap6b217cd3-16): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec 03 02:02:30 compute-0 kernel: tap6b217cd3-16: entered promiscuous mode
Dec 03 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00045|binding|INFO|Claiming lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for this chassis.
Dec 03 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00046|binding|INFO|6b217cd3-164a-4fb4-8eb6-f1eb3c806963: Claiming fa:16:3e:da:35:ef 192.168.0.85
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.393 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.401 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:35:ef 192.168.0.85'], port_security=['fa:16:3e:da:35:ef 192.168.0.85'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:cidrs': '192.168.0.85/24', 'neutron:device_id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=6b217cd3-164a-4fb4-8eb6-f1eb3c806963) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.403 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.407 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00047|binding|INFO|Setting lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 ovn-installed in OVS
Dec 03 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00048|binding|INFO|Setting lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 up in Southbound
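At this point ovn-controller has claimed the logical port for this chassis, set ovn-installed on the OVS interface, and marked the port up in the Southbound database. A quick way to confirm the binding from the node, assuming ovn-sbctl is installed and can reach the southbound DB:

    import subprocess

    LPORT = "6b217cd3-164a-4fb4-8eb6-f1eb3c806963"  # logical port from the log

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        check=True, capture_output=True, text=True)
    print(out.stdout)  # chassis should be this node; up should be true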
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.427 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.432 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[17a446c4-7cbf-43dc-ad09-759dcf706412]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:02:30 compute-0 systemd-machined[138558]: New machine qemu-4-instance-00000004.
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.444 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:30 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec 03 02:02:30 compute-0 systemd-udevd[426170]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:02:30 compute-0 NetworkManager[48912]: <info>  [1764727350.4858] device (tap6b217cd3-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.488 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[5418c8ca-35f0-40a5-a1b1-96b88ad70408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:02:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 234 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Dec 03 02:02:30 compute-0 NetworkManager[48912]: <info>  [1764727350.4923] device (tap6b217cd3-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.494 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1b9091-e168-4c9d-8085-61d2ccc5a306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.521 351492 DEBUG nova.network.neutron [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated VIF entry in instance network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.522 351492 DEBUG nova.network.neutron [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.529 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[be9d5699-b997-4985-a8b1-2a8ad9fcbc64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.549 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[667255ae-aae1-46fc-a3df-e8eae7f3bba6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 21284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426178, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.552 351492 DEBUG oslo_concurrency.lockutils [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.576 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6a647c0e-4e5d-4089-9814-7698902731a9]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426181, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426181, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.578 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.580 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.582 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.583 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.583 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.584 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.584 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.835 351492 DEBUG nova.compute.manager [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.836 351492 DEBUG oslo_concurrency.lockutils [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.837 351492 DEBUG oslo_concurrency.lockutils [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.837 351492 DEBUG oslo_concurrency.lockutils [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.837 351492 DEBUG nova.compute.manager [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Processing event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.307 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727351.3064172, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.307 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Started (Lifecycle Event)
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.310 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
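This completes Nova's external-event handshake that runs through the log above: prepare_for_instance_event registered a waiter before the VIF was plugged, Neutron posted network-vif-plugged once OVN bound the port, and the spawn thread's wait returned in 0 seconds because the event had already arrived. A toy sketch of that prepare-then-wait pattern (deliberately not Nova's actual classes) using threading:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            with self._lock:  # cf. the "...-events" lock lines above
                return self._events.setdefault((uuid, name), threading.Event())

        def pop(self, uuid, name):
            with self._lock:
                event = self._events.pop((uuid, name), None)
            if event:
                event.set()  # wakes any thread blocked in wait()

    events = InstanceEvents()
    waiter = events.prepare("b43e79bd-550f-42f8-9aa7-980b6bca3f70",
                            "network-vif-plugged")
    # ... plug the VIF; the external-event callback eventually runs:
    events.pop("b43e79bd-550f-42f8-9aa7-980b6bca3f70", "network-vif-plugged")
    assert waiter.wait(timeout=300)  # True once the event has fired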
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.337 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.342 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.350 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.356 351492 INFO nova.virt.libvirt.driver [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance spawned successfully.
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.357 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.384 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.385 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727351.306694, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.385 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Paused (Lifecycle Event)
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.396 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.397 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.398 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.399 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.399 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.400 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.408 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.413 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727351.312392, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.413 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Resumed (Lifecycle Event)
Dec 03 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.463 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.468 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.485 351492 INFO nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 7.67 seconds to spawn the instance on the hypervisor.
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.486 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.495 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.552 351492 INFO nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 8.81 seconds to build instance.
Dec 03 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.570 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:31 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 03 02:02:32 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 03 02:02:32 compute-0 ceph-mon[192821]: pgmap v1438: 321 pgs: 321 active+clean; 234 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Dec 03 02:02:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec 03 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.016 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:33 compute-0 ceph-mon[192821]: pgmap v1439: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec 03 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.120 351492 DEBUG nova.compute.manager [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.120 351492 DEBUG oslo_concurrency.lockutils [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 DEBUG oslo_concurrency.lockutils [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 DEBUG oslo_concurrency.lockutils [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 DEBUG nova.compute.manager [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] No waiting events found dispatching network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 WARNING nova.compute.manager [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received unexpected event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for instance with vm_state active and task_state None.
Dec 03 02:02:34 compute-0 nova_compute[351485]: 2025-12-03 02:02:34.031 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Dec 03 02:02:35 compute-0 ceph-mon[192821]: pgmap v1440: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Dec 03 02:02:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 96 op/s
Dec 03 02:02:37 compute-0 ceph-mon[192821]: pgmap v1441: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 96 op/s
Dec 03 02:02:38 compute-0 nova_compute[351485]: 2025-12-03 02:02:38.019 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 320 KiB/s wr, 65 op/s
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019266712655466563 of space, bias 1.0, pg target 0.5780013796639969 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:02:39 compute-0 nova_compute[351485]: 2025-12-03 02:02:39.036 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:39 compute-0 ceph-mon[192821]: pgmap v1442: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 320 KiB/s wr, 65 op/s
Dec 03 02:02:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 320 KiB/s wr, 66 op/s
Dec 03 02:02:41 compute-0 ceph-mon[192821]: pgmap v1443: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 320 KiB/s wr, 66 op/s
Dec 03 02:02:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 65 op/s
Dec 03 02:02:43 compute-0 nova_compute[351485]: 2025-12-03 02:02:43.022 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:43 compute-0 ceph-mon[192821]: pgmap v1444: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 65 op/s
Dec 03 02:02:44 compute-0 nova_compute[351485]: 2025-12-03 02:02:44.041 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 KiB/s wr, 55 op/s
Dec 03 02:02:45 compute-0 ceph-mon[192821]: pgmap v1445: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 KiB/s wr, 55 op/s
Dec 03 02:02:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 KiB/s wr, 46 op/s
Dec 03 02:02:46 compute-0 podman[426262]: 2025-12-03 02:02:46.849259829 +0000 UTC m=+0.095291065 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:02:46 compute-0 podman[426264]: 2025-12-03 02:02:46.865591749 +0000 UTC m=+0.109775773 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:02:46 compute-0 podman[426263]: 2025-12-03 02:02:46.884233684 +0000 UTC m=+0.129139418 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 03 02:02:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:02:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4097071292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:02:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:02:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4097071292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:02:47 compute-0 ceph-mon[192821]: pgmap v1446: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 KiB/s wr, 46 op/s
Dec 03 02:02:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4097071292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:02:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4097071292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:02:48 compute-0 nova_compute[351485]: 2025-12-03 02:02:48.024 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.1 KiB/s wr, 1 op/s
Dec 03 02:02:49 compute-0 nova_compute[351485]: 2025-12-03 02:02:49.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:49 compute-0 ceph-mon[192821]: pgmap v1447: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.1 KiB/s wr, 1 op/s
Dec 03 02:02:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.1 KiB/s wr, 1 op/s
Dec 03 02:02:51 compute-0 ceph-mon[192821]: pgmap v1448: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.1 KiB/s wr, 1 op/s
Dec 03 02:02:51 compute-0 podman[426321]: 2025-12-03 02:02:51.877996022 +0000 UTC m=+0.128991074 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:02:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:02:53 compute-0 nova_compute[351485]: 2025-12-03 02:02:53.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:53 compute-0 ceph-mon[192821]: pgmap v1449: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:02:54 compute-0 nova_compute[351485]: 2025-12-03 02:02:54.049 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:54 compute-0 podman[426340]: 2025-12-03 02:02:54.89481751 +0000 UTC m=+0.139816079 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, name=ubi9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:02:55 compute-0 ceph-mon[192821]: pgmap v1450: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:57 compute-0 ceph-mon[192821]: pgmap v1451: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:58 compute-0 nova_compute[351485]: 2025-12-03 02:02:58.028 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:02:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:58 compute-0 podman[426363]: 2025-12-03 02:02:58.574424695 +0000 UTC m=+0.108426885 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:02:58 compute-0 podman[426361]: 2025-12-03 02:02:58.576066681 +0000 UTC m=+0.122265165 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec 03 02:02:58 compute-0 podman[426362]: 2025-12-03 02:02:58.581053481 +0000 UTC m=+0.103955399 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:02:58 compute-0 podman[426360]: 2025-12-03 02:02:58.640763993 +0000 UTC m=+0.175390641 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 02:02:59 compute-0 nova_compute[351485]: 2025-12-03 02:02:59.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:02:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:59.631 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:02:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:59.631 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:02:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:59.632 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:02:59 compute-0 ceph-mon[192821]: pgmap v1452: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:02:59 compute-0 podman[158098]: time="2025-12-03T02:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:02:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:02:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec 03 02:03:00 compute-0 ovn_controller[89134]: 2025-12-03T02:03:00Z|00049|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 03 02:03:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:03:01 compute-0 ceph-mon[192821]: pgmap v1453: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.383 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.422 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.422 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 52862152-12c7-4236-89c3-67750ecbed7a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.423 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.424 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.425 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.426 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.427 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.428 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "52862152-12c7-4236-89c3-67750ecbed7a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.429 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.430 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.430 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.431 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:03:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.541 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.546 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "52862152-12c7-4236-89c3-67750ecbed7a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.566 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.598 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.625 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:03 compute-0 nova_compute[351485]: 2025-12-03 02:03:03.032 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:03 compute-0 nova_compute[351485]: 2025-12-03 02:03:03.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:03 compute-0 nova_compute[351485]: 2025-12-03 02:03:03.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:03:03 compute-0 ceph-mon[192821]: pgmap v1454: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.863774) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383863849, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1617, "num_deletes": 506, "total_data_size": 2145094, "memory_usage": 2186960, "flush_reason": "Manual Compaction"}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383883213, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2102342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28516, "largest_seqno": 30132, "table_properties": {"data_size": 2095204, "index_size": 3699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 17567, "raw_average_key_size": 18, "raw_value_size": 2078982, "raw_average_value_size": 2233, "num_data_blocks": 167, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727243, "oldest_key_time": 1764727243, "file_creation_time": 1764727383, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19556 microseconds, and 10317 cpu microseconds.
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.883322) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2102342 bytes OK
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.883351) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.886283) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.886305) EVENT_LOG_v1 {"time_micros": 1764727383886298, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.886327) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2136989, prev total WAL file size 2136989, number of live WAL files 2.
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.888367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2053KB)], [65(6988KB)]
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383888525, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9258680, "oldest_snapshot_seqno": -1}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4938 keys, 7433883 bytes, temperature: kUnknown
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383937677, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7433883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7401610, "index_size": 18851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124958, "raw_average_key_size": 25, "raw_value_size": 7312881, "raw_average_value_size": 1480, "num_data_blocks": 776, "num_entries": 4938, "num_filter_entries": 4938, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727383, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.937905) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7433883 bytes
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.940004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.4 rd, 151.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 5963, records dropped: 1025 output_compression: NoCompression
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.940023) EVENT_LOG_v1 {"time_micros": 1764727383940014, "job": 36, "event": "compaction_finished", "compaction_time_micros": 49141, "compaction_time_cpu_micros": 25216, "output_level": 6, "num_output_files": 1, "total_output_size": 7433883, "num_input_records": 5963, "num_output_records": 4938, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383940801, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383942306, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.887827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.055 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.480 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.481 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.481 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:03:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:05 compute-0 ceph-mon[192821]: pgmap v1455: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.486 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.507 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.509 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.510 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.511 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.541 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.542 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.544 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.545 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.546 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:03:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:03:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148213523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.064 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.201 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.203 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.204 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.214 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.214 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.215 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.223 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.224 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.224 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.232 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.233 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.233 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.848 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.849 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3266MB free_disk=59.8726921081543GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:03:07 compute-0 ceph-mon[192821]: pgmap v1456: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2148213523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.938 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.939 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.939 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.940 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.940 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.941 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.035 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.047 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:03:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:03:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2386382906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.638 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.661 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.690 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.690 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:03:08 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2386382906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:03:09 compute-0 nova_compute[351485]: 2025-12-03 02:03:09.059 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:09 compute-0 ovn_controller[89134]: 2025-12-03T02:03:09Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:35:ef 192.168.0.85
Dec 03 02:03:09 compute-0 ovn_controller[89134]: 2025-12-03T02:03:09Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:35:ef 192.168.0.85
Dec 03 02:03:09 compute-0 ceph-mon[192821]: pgmap v1457: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:10 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 03 02:03:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 248 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 1020 KiB/s wr, 23 op/s
Dec 03 02:03:10 compute-0 nova_compute[351485]: 2025-12-03 02:03:10.756 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:10 compute-0 nova_compute[351485]: 2025-12-03 02:03:10.756 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:10 compute-0 nova_compute[351485]: 2025-12-03 02:03:10.758 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:11 compute-0 nova_compute[351485]: 2025-12-03 02:03:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:11 compute-0 ceph-mon[192821]: pgmap v1458: 321 pgs: 321 active+clean; 248 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 1020 KiB/s wr, 23 op/s
Dec 03 02:03:12 compute-0 sudo[426487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:12 compute-0 sudo[426487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:12 compute-0 sudo[426487]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:12 compute-0 sudo[426512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:03:12 compute-0 sudo[426512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:12 compute-0 sudo[426512]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 252 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Dec 03 02:03:12 compute-0 sudo[426537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:12 compute-0 sudo[426537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:12 compute-0 sudo[426537]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:12 compute-0 sudo[426562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:03:12 compute-0 sudo[426562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:13 compute-0 nova_compute[351485]: 2025-12-03 02:03:13.040 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:13 compute-0 sudo[426562]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:03:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 716eceed-7ee6-4723-8770-a9e23b3452b4 does not exist
Dec 03 02:03:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2aee36d6-a93e-4583-89be-7346368a1835 does not exist
Dec 03 02:03:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 09dd795c-72fa-439c-b0a4-2fffb84c971b does not exist
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:03:13 compute-0 sudo[426619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:13 compute-0 sudo[426619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:13 compute-0 sudo[426619]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:13 compute-0 sudo[426644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:03:13 compute-0 sudo[426644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:13 compute-0 sudo[426644]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:13 compute-0 ceph-mon[192821]: pgmap v1459: 321 pgs: 321 active+clean; 252 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Dec 03 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:03:13 compute-0 sudo[426669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:13 compute-0 sudo[426669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:14 compute-0 sudo[426669]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:14 compute-0 nova_compute[351485]: 2025-12-03 02:03:14.061 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:14 compute-0 sudo[426694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:03:14 compute-0 sudo[426694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Dec 03 02:03:14 compute-0 nova_compute[351485]: 2025-12-03 02:03:14.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:03:14 compute-0 nova_compute[351485]: 2025-12-03 02:03:14.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.759430636 +0000 UTC m=+0.122608229 container create aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.720607761 +0000 UTC m=+0.083785384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:03:14 compute-0 systemd[1]: Started libpod-conmon-aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf.scope.
Dec 03 02:03:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.878277337 +0000 UTC m=+0.241454930 container init aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.890908323 +0000 UTC m=+0.254085946 container start aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.897914661 +0000 UTC m=+0.261092254 container attach aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:03:14 compute-0 hardcore_germain[426772]: 167 167
Dec 03 02:03:14 compute-0 systemd[1]: libpod-aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf.scope: Deactivated successfully.
Dec 03 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.905520365 +0000 UTC m=+0.268697958 container died aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5124f10ae54020370f8d118fd6da6e6cbc868aa0ce695d883904873803b480-merged.mount: Deactivated successfully.
Dec 03 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.971047453 +0000 UTC m=+0.334225056 container remove aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:03:14 compute-0 systemd[1]: libpod-conmon-aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf.scope: Deactivated successfully.
Dec 03 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.240960504 +0000 UTC m=+0.078892295 container create d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 02:03:15 compute-0 systemd[1]: Started libpod-conmon-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope.
Dec 03 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.218957064 +0000 UTC m=+0.056888895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:03:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.389670918 +0000 UTC m=+0.227602729 container init d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.4078186 +0000 UTC m=+0.245750391 container start d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.412387368 +0000 UTC m=+0.250319159 container attach d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:03:15 compute-0 ceph-mon[192821]: pgmap v1460: 321 pgs: 321 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Dec 03 02:03:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 02:03:16 compute-0 frosty_kare[426813]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:03:16 compute-0 frosty_kare[426813]: --> relative data size: 1.0
Dec 03 02:03:16 compute-0 frosty_kare[426813]: --> All data devices are unavailable
Dec 03 02:03:16 compute-0 systemd[1]: libpod-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope: Deactivated successfully.
Dec 03 02:03:16 compute-0 systemd[1]: libpod-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope: Consumed 1.394s CPU time.
Dec 03 02:03:17 compute-0 podman[426843]: 2025-12-03 02:03:17.036342412 +0000 UTC m=+0.081870179 container died d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 02:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134-merged.mount: Deactivated successfully.
Dec 03 02:03:17 compute-0 podman[426842]: 2025-12-03 02:03:17.113518738 +0000 UTC m=+0.145226806 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:03:17 compute-0 podman[426845]: 2025-12-03 02:03:17.114032213 +0000 UTC m=+0.145168705 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec 03 02:03:17 compute-0 podman[426846]: 2025-12-03 02:03:17.121945986 +0000 UTC m=+0.133692401 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
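The three health_status events above are emitted by podman's healthcheck timers for ovn_metadata_agent, ceilometer_agent_compute, and podman_exporter. A minimal sketch of reading the same status out-of-band with podman inspect (container names taken from the events; the JSON key for health varies by podman release, which the sketch hedges against):

    # Sketch: query the health status podman just logged for each container.
    import json, subprocess

    def health_status(name: str) -> str:
        # 'podman inspect' prints a JSON array with one object per container.
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, check=True, text=True).stdout
        state = json.loads(out)[0]["State"]
        # Older podman releases expose "Healthcheck"; newer ones use "Health".
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("ovn_metadata_agent", "ceilometer_agent_compute", "podman_exporter"):
        print(name, health_status(name))
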
Dec 03 02:03:17 compute-0 podman[426843]: 2025-12-03 02:03:17.136168827 +0000 UTC m=+0.181696524 container remove d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 02:03:17 compute-0 systemd[1]: libpod-conmon-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope: Deactivated successfully.
Dec 03 02:03:17 compute-0 sudo[426694]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:17 compute-0 sudo[426911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:17 compute-0 sudo[426911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:17 compute-0 sudo[426911]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:17 compute-0 sudo[426936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:03:17 compute-0 sudo[426936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:17 compute-0 sudo[426936]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:17 compute-0 sudo[426961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:17 compute-0 sudo[426961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:17 compute-0 sudo[426961]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:17 compute-0 sudo[426986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:03:17 compute-0 sudo[426986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:17 compute-0 ceph-mon[192821]: pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 02:03:18 compute-0 nova_compute[351485]: 2025-12-03 02:03:18.041 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.322005555 +0000 UTC m=+0.082765135 container create 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.285910777 +0000 UTC m=+0.046670407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:03:18 compute-0 systemd[1]: Started libpod-conmon-77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0.scope.
Dec 03 02:03:18 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.491413903 +0000 UTC m=+0.252173543 container init 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.509213534 +0000 UTC m=+0.269973114 container start 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.515856352 +0000 UTC m=+0.276615972 container attach 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Dec 03 02:03:18 compute-0 heuristic_varahamihira[427064]: 167 167
Dec 03 02:03:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 02:03:18 compute-0 systemd[1]: libpod-77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0.scope: Deactivated successfully.
Dec 03 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.522235941 +0000 UTC m=+0.282995531 container died 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:03:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fea1bc0c872037e6b536d39640be278c45bc8af5e3fd67d1fee09933ee627f6-merged.mount: Deactivated successfully.
Dec 03 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.599627184 +0000 UTC m=+0.360386774 container remove 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:03:18 compute-0 systemd[1]: libpod-conmon-77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0.scope: Deactivated successfully.
Dec 03 02:03:18 compute-0 podman[427087]: 2025-12-03 02:03:18.908291278 +0000 UTC m=+0.102770929 container create 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 02:03:18 compute-0 podman[427087]: 2025-12-03 02:03:18.878874668 +0000 UTC m=+0.073354319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:03:18 compute-0 systemd[1]: Started libpod-conmon-639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28.scope.
Dec 03 02:03:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
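The kernel's "timestamps until 2038 (0x7fffffff)" warnings refer to the largest 32-bit signed time_t value. Converting that constant confirms the date:

    # 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
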
Dec 03 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.063662219 +0000 UTC m=+0.258141860 container init 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:03:19 compute-0 nova_compute[351485]: 2025-12-03 02:03:19.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.084819035 +0000 UTC m=+0.279298656 container start 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.090410333 +0000 UTC m=+0.284889994 container attach 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.506 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.507 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
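The burst of "Registering pollster" lines above shows the polling manager handing every pollster in the [pollsters] source to one shared ThreadPoolExecutor, sized 1 per the "[1] threads" line, so they run sequentially. A minimal sketch of that dispatch pattern (an illustration, not ceilometer's actual code):

    # Sketch of the pattern the log describes: many pollsters, one shared
    # executor; with a single worker the tasks execute one after another.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        return f"polled {name}"  # stand-in for a real pollster's sample collection

    pollsters = ["memory.usage", "network.outgoing.packets",
                 "network.incoming.bytes.delta"]
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, p) for p in pollsters]
        for f in futures:
            print(f.result())
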
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.520 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.526 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.531 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.533 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 03 02:03:19 compute-0 admiring_mayer[427103]: {
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:     "0": [
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:         {
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "devices": [
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "/dev/loop3"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             ],
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_name": "ceph_lv0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_size": "21470642176",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "name": "ceph_lv0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "tags": {
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cluster_name": "ceph",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.crush_device_class": "",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.encrypted": "0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osd_id": "0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.type": "block",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.vdo": "0"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             },
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "type": "block",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "vg_name": "ceph_vg0"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:         }
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:     ],
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:     "1": [
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:         {
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "devices": [
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "/dev/loop4"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             ],
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_name": "ceph_lv1",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_size": "21470642176",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "name": "ceph_lv1",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "tags": {
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cluster_name": "ceph",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.crush_device_class": "",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.encrypted": "0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osd_id": "1",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.type": "block",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.vdo": "0"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             },
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "type": "block",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "vg_name": "ceph_vg1"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:         }
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:     ],
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:     "2": [
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:         {
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "devices": [
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "/dev/loop5"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             ],
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_name": "ceph_lv2",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_size": "21470642176",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "name": "ceph_lv2",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "tags": {
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.cluster_name": "ceph",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.crush_device_class": "",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.encrypted": "0",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osd_id": "2",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.type": "block",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:                 "ceph.vdo": "0"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             },
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "type": "block",
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:             "vg_name": "ceph_vg2"
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:         }
Dec 03 02:03:19 compute-0 admiring_mayer[427103]:     ]
Dec 03 02:03:19 compute-0 admiring_mayer[427103]: }
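The JSON block above is the output of the ceph-volume "lvm list --format json" call logged at 02:03:17, keyed by OSD id. A short sketch that reduces it to an osd -> device map (lvm_list.json is a hypothetical capture of that output):

    # Reduce 'ceph-volume ... lvm list --format json' to per-OSD device info.
    import json

    with open("lvm_list.json") as f:   # hypothetical file holding the JSON above
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 1024**3
            print(f"osd.{osd_id}: devices={lv['devices']} "
                  f"lv_path={lv['lv_path']} size={size_gib:.1f} GiB")
    # For this host: osd.0 /dev/loop3, osd.1 /dev/loop4, osd.2 /dev/loop5,
    # each 20.0 GiB.
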
Dec 03 02:03:19 compute-0 systemd[1]: libpod-639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28.scope: Deactivated successfully.
Dec 03 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.926919151 +0000 UTC m=+1.121398812 container died 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe-merged.mount: Deactivated successfully.
Dec 03 02:03:19 compute-0 ceph-mon[192821]: pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 02:03:20 compute-0 podman[427087]: 2025-12-03 02:03:20.022841536 +0000 UTC m=+1.217321157 container remove 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 02:03:20 compute-0 systemd[1]: libpod-conmon-639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28.scope: Deactivated successfully.
Dec 03 02:03:20 compute-0 sudo[426986]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:20 compute-0 sudo[427125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:20 compute-0 sudo[427125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:20 compute-0 sudo[427125]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:20 compute-0 sudo[427150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:03:20 compute-0 sudo[427150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:20 compute-0 sudo[427150]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:20 compute-0 sudo[427175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:20 compute-0 sudo[427175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:20 compute-0 sudo[427175]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.479 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Wed, 03 Dec 2025 02:03:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-729513d0-32b1-4d68-aaef-ee1337233879 x-openstack-request-id: req-729513d0-32b1-4d68-aaef-ee1337233879 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.479 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b43e79bd-550f-42f8-9aa7-980b6bca3f70", "name": "vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {"metering.server_group": "0f6ab671-23df-4a6d-9613-02f9fb5fb294"}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T02:02:21Z", "updated": "2025-12-03T02:02:31Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.85", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:35:ef"}, {"version": 4, "addr": "192.168.122.232", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:35:ef"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:02:31.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.479 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70 used request id req-729513d0-32b1-4d68-aaef-ee1337233879 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
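The REQ/RESP pair above is keystoneauth's debug rendering of a plain HTTP GET. The same call reproduced with requests, under the assumption that a valid token is supplied (the journal only ever prints a SHA256 of the token, never the value, and this sketch keeps it as a placeholder):

    # Reproduce the logged GET against the Nova API.
    import requests

    url = ("https://nova-internal.openstack.svc:8774/v2.1/servers/"
           "b43e79bd-550f-42f8-9aa7-980b6bca3f70")
    headers = {
        "Accept": "application/json",
        "User-Agent": "python-novaclient",
        "X-Auth-Token": "<redacted>",          # placeholder; real token not logged
        "X-OpenStack-Nova-API-Version": "2.1",
    }
    resp = requests.get(url, headers=headers, timeout=30)
    print(resp.status_code, resp.json()["server"]["name"])
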
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.481 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.484 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:03:20.485302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 sudo[427200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.520 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 02:03:20 compute-0 sudo[427200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.549 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.589 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 49.73046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.621 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
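The memory.usage samples above are in MB, and their fractional parts line up exactly with KiB-granular counters (assuming, as the .00390625 = 1/256 steps suggest, that the hypervisor reports memory in KiB and the agent divides by 1024):

    # Each logged MB value is a whole number of KiB.
    for mb in (49.00390625, 49.01171875, 49.73046875, 48.88671875):
        kib = mb * 1024
        assert kib == int(kib)           # exact: no rounding occurred
        print(f"{mb} MB == {int(kib)} KiB")
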
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:03:20.622685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.628 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.632 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.637 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b43e79bd-550f-42f8-9aa7-980b6bca3f70 / tap6b217cd3-16 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.637 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
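
Every meter in this section follows the same trace: a discovery pass resolves the local instances, a coordination check decides whether the pollster must share work with other agents over a hash ring (the group is [None] throughout, so it never does), a heartbeat is recorded, and one sample per instance is emitted via _stats_to_sample. A minimal self-contained Python sketch of that loop; the names are simplified stand-ins, not ceilometer's real classes or signatures:

    from datetime import datetime, timezone

    class SketchManager:
        # Stand-in for ceilometer's AgentManager; real discovery,
        # coordination and publishing are far richer (assumption).
        def __init__(self):
            self.heartbeats = {}
        def discover(self, method):
            # 'local_instances' resolves to the VMs on this hypervisor.
            return ['52862152-12c7-4236-89c3-67750ecbed7a',
                    '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274']
        def update_heartbeat(self, name):
            self.heartbeats[name] = datetime.now(timezone.utc)

    def run_pollster(manager, meter_name, coordination_group=None):
        resources = manager.discover('local_instances')
        if coordination_group is None:
            # "not configured in a source ... that requires coordination":
            # no hash ring, so poll every discovered resource locally.
            manager.update_heartbeat(meter_name)
        for res in resources:
            volume = 0  # stand-in for the value read from the inspector
            print(f'{res}/{meter_name} volume: {volume}')
        print(f'Finished polling pollster {meter_name}')

    run_pollster(SketchManager(), 'network.outgoing.packets')
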
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:03:20.644288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.645 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.645 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
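
The "No delta meter predecessor" message for b43e79bd-550f-42f8-9aa7-980b6bca3f70 / tap6b217cd3-16 above explains that instance's network.incoming.bytes.delta volume of 0: a .delta meter is the difference between the current cumulative counter and the previously cached read, and with no predecessor there is nothing to diff against. A sketch of that bookkeeping; the cache keying is an assumption:

    # (instance_uuid, iface) -> last cumulative counter value
    _previous = {}

    def bytes_delta(instance, iface, current_bytes):
        key = (instance, iface)
        prior = _previous.get(key)
        _previous[key] = current_bytes
        if prior is None:
            return 0  # "No delta meter predecessor": first read
        return current_bytes - prior

    print(bytes_delta('b43e79bd', 'tap6b217cd3-16', 1486))  # 0, first read
    print(bytes_delta('b43e79bd', 'tap6b217cd3-16', 1500))  # 14
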
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:03:20.647343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:03:20.650013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.650 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.650 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.651 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.651 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.651 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:03:20.653961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.654 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.655 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.655 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.655 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:03:20.656776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.683 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.684 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.684 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.710 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.711 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.711 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.741 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.742 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.742 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.769 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.769 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.770 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.770 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
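
Each instance reports three disk.device.capacity samples, one per attached block device: two 1 GiB devices plus one small device of a few hundred KiB, plausibly a config drive (an assumption; the device names do not appear in these lines). A quick unit check of the raw byte values:

    for volume in (1073741824, 583680, 485376):
        print(volume, 'bytes =', volume / 1024**3, 'GiB =', volume / 1024, 'KiB')
    # 1073741824 bytes = 1.0 GiB; 583680 = 570 KiB; 485376 = 474 KiB
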
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:03:20.771404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>]
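
This ERROR is expected rather than a fault: the libvirt inspector exposes cumulative counters only (see the "LibvirtInspector does not provide data for IncomingBytesRatePollster" DEBUG just above), so a *.rate pollster can never produce a sample, and a PollsterPermanentError is raised to blacklist those resources for this pollster/source pair instead of retrying every cycle. A simplified sketch of that control flow; the exception name matches the log, everything around it is a stand-in:

    class PollsterPermanentError(Exception):
        # Signals that polling these resources can never succeed.
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def get_rate_samples(resources):
        # The inspector has no rate data, only cumulative counters.
        raise PollsterPermanentError(resources)

    blacklist = set()
    try:
        get_rate_samples(['vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw'])
    except PollsterPermanentError as exc:
        # "Prevent pollster ... from polling [...] anymore!"
        blacklist.update(exc.fail_res_list)
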
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:03:20.772896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.843 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.844 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.844 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.925 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.926 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.927 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.987 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.988 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.988 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.038 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.038 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.038 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:03:21.039919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:03:21.042150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.045 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:03:21.046295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.049 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.049 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
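
The disk.device.read.latency volumes above are cumulative time-spent-in-reads counters in nanoseconds (libvirt's rd_total_times), not per-request latencies, so an average needs the matching disk.device.read.requests counter. Pairing the first latency and first requests sample of instance 52862152-12c7-4236-89c3-67750ecbed7a, assuming both refer to the same device (these lines do not name the devices):

    total_read_time_ns = 1829221883
    read_requests = 844
    print(total_read_time_ns / read_requests / 1e6, 'ms average per read')
    # ~2.17 ms average per read since the counters started
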
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:03:21.051794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
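
A power.state volume of 1 on all four instances corresponds to "running" in the libvirt domain-state numbering that nova's power_state module mirrors (stated from that enumeration, not from the log itself):

    # Assumed mapping of power.state values to names (libvirt/nova style).
    POWER_STATES = {0: 'nostate', 1: 'running', 3: 'paused',
                    4: 'shutdown', 6: 'crashed', 7: 'suspended'}
    print(POWER_STATES[1])  # all four instances above report 'running'
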
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:03:21.053801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:03:21.058037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41689088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.061 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.061 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6998528252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:03:21.062432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5579657720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 7883313820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.065 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.065 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.065 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:03:21.066740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.069 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.069 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.069 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:03:21.071324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 345190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 36000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 36880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 40530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
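The cpu volumes above are cumulative guest CPU time in nanoseconds, so a utilisation figure only appears once two successive polls are differenced. A minimal Python sketch of that arithmetic, using one reading from the log; the second reading, the poll interval, and the vCPU count are assumed values, not data from this journal:

# Hypothetical follow-up reading for instance 52862152-...: the first value is
# the cumulative cpu time (ns) logged above; everything else is an assumption.
prev_ns = 345_190_000_000          # from the poll at 02:03:21
curr_ns = prev_ns + 1_200_000_000  # assumed next reading, one interval later
poll_interval_s = 300              # assumed polling interval (seconds)
n_vcpus = 1                        # assumed vCPU count

# Fraction of available CPU time consumed over the interval, as a percentage.
cpu_util_pct = (curr_ns - prev_ns) / (poll_interval_s * 1e9 * n_vcpus) * 100
print(f"cpu_util ~= {cpu_util_pct:.2f}%")  # ~0.40% for these numbers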
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 7568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:03:21.072964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:03:21.074375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:03:21.075449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:03:21.076999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:03:21.080600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:03:21.082071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:03:21.083207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>]
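The ERROR above is the polling manager's permanent-error path: when a pollster raises ceilometer.polling.plugin_base.PollsterPermanentError with a list of resources, the agent logs this message and stops polling those resources on that source. A minimal sketch of that contract, assuming the standard get_samples() entry point; the class name and the stand-in inspector result are illustrative, not the pollster that emitted the log:

from ceilometer.polling import plugin_base

class RateSketchPollster(plugin_base.PollsterBase):
    # Illustrative only; not ceilometer's OutgoingBytesRatePollster.

    @property
    def default_discovery(self):
        return 'local_instances'

    def get_samples(self, manager, cache, resources):
        no_data = []
        for instance in resources:
            stats = None  # stand-in for an inspector call that returned nothing
            if stats is None:
                no_data.append(instance)  # remember it; keep polling the rest
                continue
            yield stats  # real pollsters yield sample.Sample objects here
        if no_data:
            # Raising PollsterPermanentError is what produces the
            # "Prevent pollster ... anymore!" line above and blacklists
            # these resources for future polls on this source.
            raise plugin_base.PollsterPermanentError(no_data)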
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:03:21.084733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.091565681 +0000 UTC m=+0.083175905 container create d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Dec 03 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.049887326 +0000 UTC m=+0.041497590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:03:21 compute-0 systemd[1]: Started libpod-conmon-d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92.scope.
Dec 03 02:03:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.21352406 +0000 UTC m=+0.205193796 container init d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.229268084 +0000 UTC m=+0.220878308 container start d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.234051429 +0000 UTC m=+0.225661693 container attach d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:03:21 compute-0 nifty_shamir[427275]: 167 167
Dec 03 02:03:21 compute-0 systemd[1]: libpod-d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92.scope: Deactivated successfully.
Dec 03 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.242444866 +0000 UTC m=+0.234055120 container died d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:03:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8985b32740d7e9dd1a24e5e59f1d7d227071ede19b7748ec0c9876364e037e15-merged.mount: Deactivated successfully.
Dec 03 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.325388124 +0000 UTC m=+0.316998388 container remove d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:03:21 compute-0 systemd[1]: libpod-conmon-d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92.scope: Deactivated successfully.
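Note: the short-lived "nifty_shamir" container above started, printed "167 167", and was torn down within a few hundred milliseconds of its creation. That output is consistent with the UID/GID probe cephadm runs against a Ceph image (167:167 is the ceph user and group baked into these images). A minimal sketch of the same probe, assuming podman is on PATH; the image digest is copied verbatim from the container-start line above:

    import subprocess

    # Image digest taken verbatim from the log lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Hedged re-creation of the probe: stat the ownership of /var/lib/ceph
    # inside a throwaway container. On Ceph images this prints "167 167".
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())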
Dec 03 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.637155966 +0000 UTC m=+0.100191257 container create 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.602111487 +0000 UTC m=+0.065146838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:03:21 compute-0 systemd[1]: Started libpod-conmon-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope.
Dec 03 02:03:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.823180721 +0000 UTC m=+0.286216062 container init 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.848346351 +0000 UTC m=+0.311381642 container start 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.855899254 +0000 UTC m=+0.318934525 container attach 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:03:22 compute-0 ceph-mon[192821]: pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 03 02:03:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 499 KiB/s wr, 34 op/s
Dec 03 02:03:22 compute-0 podman[427332]: 2025-12-03 02:03:22.842527735 +0000 UTC m=+0.090670227 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:03:22 compute-0 friendly_bell[427315]: {
Dec 03 02:03:22 compute-0 friendly_bell[427315]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "osd_id": 2,
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "type": "bluestore"
Dec 03 02:03:22 compute-0 friendly_bell[427315]:     },
Dec 03 02:03:22 compute-0 friendly_bell[427315]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "osd_id": 1,
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "type": "bluestore"
Dec 03 02:03:22 compute-0 friendly_bell[427315]:     },
Dec 03 02:03:22 compute-0 friendly_bell[427315]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "osd_id": 0,
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:03:22 compute-0 friendly_bell[427315]:         "type": "bluestore"
Dec 03 02:03:22 compute-0 friendly_bell[427315]:     }
Dec 03 02:03:22 compute-0 friendly_bell[427315]: }
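Note: the "friendly_bell" container dumped a ceph-volume-style JSON inventory of this host's three BlueStore OSDs, keyed by OSD UUID; all three share the cluster fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c and sit on LVM devices. A minimal sketch that turns the block above into an osd_id -> device map (the file name osd_list.json is hypothetical; it stands for a copy of the JSON printed above):

    import json

    # osd_list.json: a hypothetical file holding the JSON block printed above.
    with open("osd_list.json") as f:
        osds = json.load(f)

    # Map osd_id -> device, e.g. 0 -> /dev/mapper/ceph_vg0-ceph_lv0.
    by_id = {entry["osd_id"]: entry["device"] for entry in osds.values()}
    for osd_id in sorted(by_id):
        print(f"osd.{osd_id}: {by_id[osd_id]}")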
Dec 03 02:03:23 compute-0 systemd[1]: libpod-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope: Deactivated successfully.
Dec 03 02:03:23 compute-0 podman[427298]: 2025-12-03 02:03:23.021260415 +0000 UTC m=+1.484295686 container died 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:03:23 compute-0 systemd[1]: libpod-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope: Consumed 1.157s CPU time.
Dec 03 02:03:23 compute-0 nova_compute[351485]: 2025-12-03 02:03:23.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7-merged.mount: Deactivated successfully.
Dec 03 02:03:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:23 compute-0 podman[427298]: 2025-12-03 02:03:23.101170899 +0000 UTC m=+1.564206170 container remove 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 02:03:23 compute-0 systemd[1]: libpod-conmon-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope: Deactivated successfully.
Dec 03 02:03:23 compute-0 sudo[427200]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:03:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:03:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:03:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:03:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 185de5cd-be34-414f-8d39-a2662890bc86 does not exist
Dec 03 02:03:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fe7db48c-44f3-4bf4-a70d-adc40731b2de does not exist
Dec 03 02:03:23 compute-0 sudo[427383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:03:23 compute-0 sudo[427383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:23 compute-0 sudo[427383]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:23 compute-0 sudo[427408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:03:23 compute-0 sudo[427408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:03:23 compute-0 sudo[427408]: pam_unix(sudo:session): session closed for user root
Dec 03 02:03:24 compute-0 ceph-mon[192821]: pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 499 KiB/s wr, 34 op/s
Dec 03 02:03:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:03:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:03:24 compute-0 nova_compute[351485]: 2025-12-03 02:03:24.079 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 72 KiB/s wr, 13 op/s
Dec 03 02:03:25 compute-0 podman[427433]: 2025-12-03 02:03:25.909066486 +0000 UTC m=+0.155264630 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, name=ubi9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 02:03:26 compute-0 ceph-mon[192821]: pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 72 KiB/s wr, 13 op/s
Dec 03 02:03:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 58 KiB/s wr, 11 op/s
Dec 03 02:03:28 compute-0 nova_compute[351485]: 2025-12-03 02:03:28.047 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:28 compute-0 ceph-mon[192821]: pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 58 KiB/s wr, 11 op/s
Dec 03 02:03:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:03:28
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'default.rgw.meta', 'backups']
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
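Note: the balancer ran in upmap mode and prepared no changes, i.e. the listed pools are already even. "max misplaced 0.050000" caps the fraction of PGs allowed to be in motion at once, and the "/10" in "0/10" reads as a per-round cap on prepared optimizations (an assumption about the counter, consistent with the default upmap limit). With the 321 PGs reported in the surrounding pgmap lines, the arithmetic works out as:

    # Hedged arithmetic on the balancer lines above.
    total_pgs = 321        # from the pgmap entries in this log
    max_misplaced = 0.05   # "max misplaced 0.050000"
    print(int(total_pgs * max_misplaced))  # => 16 PGs may be misplaced at once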
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 03 02:03:28 compute-0 podman[427454]: 2025-12-03 02:03:28.886141273 +0000 UTC m=+0.119361967 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal)
Dec 03 02:03:28 compute-0 podman[427456]: 2025-12-03 02:03:28.893230693 +0000 UTC m=+0.108296355 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec 03 02:03:28 compute-0 podman[427455]: 2025-12-03 02:03:28.904721977 +0000 UTC m=+0.130242434 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:03:28 compute-0 podman[427453]: 2025-12-03 02:03:28.96227081 +0000 UTC m=+0.202009048 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 02:03:29 compute-0 nova_compute[351485]: 2025-12-03 02:03:29.085 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:29 compute-0 podman[158098]: time="2025-12-03T02:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:03:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:03:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
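Note: these two HTTP requests are the podman service answering a client on its unix socket; the podman_exporter container defined later in this log mounts /run/podman/podman.sock, which fits the 30-second polling cadence visible here. A self-contained sketch issuing the same libpod call from the Python standard library (only the socket path and endpoint are taken from this log; the shim class is a generic HTTP-over-unix-socket helper):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Minimal HTTP-over-unix-socket shim for the podman REST API."""

        def __init__(self, socket_path):
            super().__init__("localhost")  # host is unused; we dial the socket
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])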
Dec 03 02:03:30 compute-0 ceph-mon[192821]: pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 03 02:03:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 03 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
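Note: these exporter errors look environmental rather than fatal: appctl-style calls locate a daemon through its unixctl control socket (<rundir>/<daemon>.<pid>.ctl), and on this compute node there is no ovn-northd (it belongs on the control plane), while the dpif-netdev PMD queries only apply to the userspace/DPDK datapath, which this host does not appear to use. A hedged check mirroring the socket lookup, using the run directories the exporter's volume config (earlier in this log) maps to /run/openvswitch and /run/ovn:

    import glob

    # Look for the unixctl control sockets the failing appctl calls depend on.
    # Patterns follow the usual <daemon>.<pid>.ctl naming (assumption).
    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches if matches else "no control socket found")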
Dec 03 02:03:32 compute-0 ceph-mon[192821]: pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 03 02:03:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:33 compute-0 nova_compute[351485]: 2025-12-03 02:03:33.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:34 compute-0 nova_compute[351485]: 2025-12-03 02:03:34.089 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:34 compute-0 ceph-mon[192821]: pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:34 compute-0 sshd-session[427538]: Invalid user test from 80.94.95.116 port 17160
Dec 03 02:03:35 compute-0 sshd-session[427538]: Connection closed by invalid user test 80.94.95.116 port 17160 [preauth]
Dec 03 02:03:36 compute-0 ceph-mon[192821]: pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:37 compute-0 ceph-mon[192821]: pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:38 compute-0 nova_compute[351485]: 2025-12-03 02:03:38.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
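Note: each pg_autoscaler pair above logs a pool's share of the 64411926528-byte (60 GiB) raw capacity and the PG target derived from it. The raw targets reproduce as ratio x bias x 300, where 300 is an assumption consistent with the default budget of 100 PGs per OSD times this host's 3 OSDs; the result is then quantized to a power of two subject to minimums, which is why the near-zero pools stay at 32 and why cephfs.cephfs.meta's tiny target quantizes down to 16 from its current 32. A worked check against three of the lines above:

    # Hedged reconstruction of the raw "pg target" values logged above.
    PG_BUDGET = 300  # assumption: mon_target_pg_per_osd (100) x 3 OSDs

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.0022107945480888194, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, ratio, bias in pools:
        # Matches the logged targets (0.002155..., 0.663238..., 0.000610...)
        # up to float rounding.
        print(name, ratio * bias * PG_BUDGET)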
Dec 03 02:03:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:39 compute-0 nova_compute[351485]: 2025-12-03 02:03:39.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:39 compute-0 ceph-mon[192821]: pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:41 compute-0 ceph-mon[192821]: pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:43 compute-0 nova_compute[351485]: 2025-12-03 02:03:43.057 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:43 compute-0 ceph-mon[192821]: pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:44 compute-0 nova_compute[351485]: 2025-12-03 02:03:44.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:45 compute-0 ceph-mon[192821]: pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:03:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377572860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:03:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:03:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377572860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:03:47 compute-0 ceph-mon[192821]: pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1377572860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:03:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1377572860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
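Note: the audit entries show an OpenStack client (entity client.openstack, calling from 192.168.122.10) polling cluster capacity with "df" and the volumes pool quota, consistent with the periodic capacity check a Cinder RBD backend performs. The same two monitor commands can be issued through the python-rados binding; this sketch assumes /etc/ceph/ceph.conf and a keyring for client.openstack are readable on the calling host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the JSON command string plus an input buffer
            # and returns (retcode, output bytes, status string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "rc:", ret, "bytes:", len(out))
    finally:
        cluster.shutdown()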
Dec 03 02:03:47 compute-0 podman[427540]: 2025-12-03 02:03:47.870211137 +0000 UTC m=+0.108840060 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:03:47 compute-0 podman[427542]: 2025-12-03 02:03:47.907851598 +0000 UTC m=+0.140549934 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:03:47 compute-0 podman[427541]: 2025-12-03 02:03:47.907948231 +0000 UTC m=+0.136855750 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec 03 02:03:48 compute-0 nova_compute[351485]: 2025-12-03 02:03:48.059 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:49 compute-0 nova_compute[351485]: 2025-12-03 02:03:49.103 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:49 compute-0 ceph-mon[192821]: pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:51 compute-0 ceph-mon[192821]: pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:53 compute-0 nova_compute[351485]: 2025-12-03 02:03:53.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:53 compute-0 ceph-mon[192821]: pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:53 compute-0 podman[427598]: 2025-12-03 02:03:53.905983966 +0000 UTC m=+0.155311191 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 03 02:03:54 compute-0 nova_compute[351485]: 2025-12-03 02:03:54.108 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:55 compute-0 ceph-mon[192821]: pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:56 compute-0 podman[427618]: 2025-12-03 02:03:56.874803141 +0000 UTC m=+0.129740848 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, version=9.4, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc.)
Dec 03 02:03:57 compute-0 ceph-mon[192821]: pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:03:58 compute-0 nova_compute[351485]: 2025-12-03 02:03:58.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:03:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:59 compute-0 nova_compute[351485]: 2025-12-03 02:03:59.110 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:03:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:03:59.632 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:03:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:03:59.633 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:03:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:03:59.634 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:03:59 compute-0 podman[158098]: time="2025-12-03T02:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:03:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:03:59 compute-0 ceph-mon[192821]: pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:03:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Dec 03 02:03:59 compute-0 podman[427638]: 2025-12-03 02:03:59.88197216 +0000 UTC m=+0.101022770 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:03:59 compute-0 podman[427637]: 2025-12-03 02:03:59.888852554 +0000 UTC m=+0.123807173 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git)
Dec 03 02:03:59 compute-0 podman[427639]: 2025-12-03 02:03:59.890422778 +0000 UTC m=+0.098381485 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 03 02:03:59 compute-0 podman[427636]: 2025-12-03 02:03:59.926774613 +0000 UTC m=+0.156098723 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:04:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:04:01 compute-0 ceph-mon[192821]: pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:02 compute-0 nova_compute[351485]: 2025-12-03 02:04:02.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:03 compute-0 nova_compute[351485]: 2025-12-03 02:04:03.069 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:03 compute-0 ceph-mon[192821]: pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.114 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.633 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.634 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.634 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.635 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.636 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:04:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:04:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506268759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.184 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.331 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.332 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.333 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.338 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.338 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.339 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.343 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.343 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.344 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.351 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.351 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.352 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:04:05 compute-0 ceph-mon[192821]: pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:05 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1506268759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.875 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.876 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3198MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.877 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.877 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.037 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.037 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.037 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.038 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.038 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.039 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.157 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:04:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:04:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999662885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.672 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.686 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.712 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.715 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.715 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:04:06 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3999662885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.717 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.746 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.747 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.748 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:04:07 compute-0 ceph-mon[192821]: pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.562 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.564 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.565 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.565 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:04:09 compute-0 nova_compute[351485]: 2025-12-03 02:04:09.117 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:09 compute-0 ceph-mon[192821]: pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:10 compute-0 sshd-session[427767]: Invalid user sonarqube from 146.190.144.138 port 53656
Dec 03 02:04:10 compute-0 sshd-session[427767]: Received disconnect from 146.190.144.138 port 53656:11: Bye Bye [preauth]
Dec 03 02:04:10 compute-0 sshd-session[427767]: Disconnected from invalid user sonarqube 146.190.144.138 port 53656 [preauth]
Dec 03 02:04:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.892 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.910 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.910 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.912 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.912 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.913 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:11 compute-0 ceph-mon[192821]: pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:13 compute-0 nova_compute[351485]: 2025-12-03 02:04:13.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:13 compute-0 nova_compute[351485]: 2025-12-03 02:04:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:13 compute-0 nova_compute[351485]: 2025-12-03 02:04:13.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:13 compute-0 ceph-mon[192821]: pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:14 compute-0 nova_compute[351485]: 2025-12-03 02:04:14.122 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:15 compute-0 ceph-mon[192821]: pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:16 compute-0 nova_compute[351485]: 2025-12-03 02:04:16.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:04:16 compute-0 nova_compute[351485]: 2025-12-03 02:04:16.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:04:17 compute-0 ceph-mon[192821]: pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:18 compute-0 nova_compute[351485]: 2025-12-03 02:04:18.080 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:18 compute-0 podman[427771]: 2025-12-03 02:04:18.877000513 +0000 UTC m=+0.110157757 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:04:18 compute-0 podman[427769]: 2025-12-03 02:04:18.881445038 +0000 UTC m=+0.123769211 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 03 02:04:18 compute-0 podman[427770]: 2025-12-03 02:04:18.884736871 +0000 UTC m=+0.123704799 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 03 02:04:19 compute-0 nova_compute[351485]: 2025-12-03 02:04:19.125 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:19 compute-0 ceph-mon[192821]: pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:21 compute-0 ceph-mon[192821]: pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:23 compute-0 nova_compute[351485]: 2025-12-03 02:04:23.084 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:23 compute-0 sudo[427828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:23 compute-0 sudo[427828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:23 compute-0 sudo[427828]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:23 compute-0 sudo[427853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:04:23 compute-0 sudo[427853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:23 compute-0 sudo[427853]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:23 compute-0 sudo[427878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:23 compute-0 sudo[427878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:23 compute-0 sudo[427878]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:23 compute-0 ceph-mon[192821]: pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:24 compute-0 sudo[427903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:04:24 compute-0 sudo[427903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:24 compute-0 nova_compute[351485]: 2025-12-03 02:04:24.130 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:24 compute-0 podman[427927]: 2025-12-03 02:04:24.176088018 +0000 UTC m=+0.135380388 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec 03 02:04:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:24 compute-0 sudo[427903]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:04:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a9510035-7ff0-4fa4-baea-74d56f945009 does not exist
Dec 03 02:04:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 87534f33-8117-4ae1-bd1b-32577a63adb6 does not exist
Dec 03 02:04:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a55b09f-b7e0-4e4d-b039-1cf50cf4c2ea does not exist
Dec 03 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:04:24 compute-0 sudo[427976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:24 compute-0 sudo[427976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:24 compute-0 sudo[427976]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:04:25 compute-0 sudo[428001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:04:25 compute-0 sudo[428001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:25 compute-0 sudo[428001]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:25 compute-0 sudo[428026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:25 compute-0 sudo[428026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:25 compute-0 sudo[428026]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:25 compute-0 sudo[428051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:04:25 compute-0 sudo[428051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:25 compute-0 podman[428115]: 2025-12-03 02:04:25.878267437 +0000 UTC m=+0.097104029 container create 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 02:04:25 compute-0 podman[428115]: 2025-12-03 02:04:25.848159928 +0000 UTC m=+0.066996600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:04:25 compute-0 systemd[1]: Started libpod-conmon-81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703.scope.
Dec 03 02:04:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:04:26 compute-0 ceph-mon[192821]: pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.048791825 +0000 UTC m=+0.267628447 container init 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.060824415 +0000 UTC m=+0.279661007 container start 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.06599626 +0000 UTC m=+0.284832922 container attach 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:04:26 compute-0 silly_varahamihira[428130]: 167 167
Dec 03 02:04:26 compute-0 systemd[1]: libpod-81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703.scope: Deactivated successfully.
Dec 03 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.077243848 +0000 UTC m=+0.296080460 container died 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bdb7c5b106f6656867ef77cc8872a74927db72e470a8638e82484a5c2be15f5-merged.mount: Deactivated successfully.
Dec 03 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.158445997 +0000 UTC m=+0.377282599 container remove 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:04:26 compute-0 systemd[1]: libpod-conmon-81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703.scope: Deactivated successfully.
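[editor's note] The seven podman events above (image pull, container create, init, start, attach, died, remove, all within roughly 400 ms) are the signature of cephadm probing the host with a throwaway container; the "167 167" printed by silly_varahamihira is the ceph UID/GID pair baked into the image. A minimal sketch of the same probe, assuming podman is on PATH and that the probed command is a stat of /var/lib/ceph (an assumption: the journal shows only the numbers, not the command):

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def probe_ceph_ids(image: str = IMAGE) -> str:
    # `podman run --rm` yields the same create/init/start/attach/died/remove
    # event sequence seen in the journal above. Printing "%u %g" of
    # /var/lib/ceph inside the image would produce the logged "167 167";
    # that this is what cephadm runs is our assumption.
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print(probe_ceph_ids())  # expected: "167 167"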
Dec 03 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.415698812 +0000 UTC m=+0.075836230 container create f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.382372282 +0000 UTC m=+0.042509780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:04:26 compute-0 systemd[1]: Started libpod-conmon-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope.
Dec 03 02:04:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.62167506 +0000 UTC m=+0.281812568 container init f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.64436283 +0000 UTC m=+0.304500268 container start f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.651486211 +0000 UTC m=+0.311623669 container attach f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:04:27 compute-0 podman[428190]: 2025-12-03 02:04:27.882119943 +0000 UTC m=+0.126638472 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9)
Dec 03 02:04:27 compute-0 wizardly_mayer[428169]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:04:27 compute-0 wizardly_mayer[428169]: --> relative data size: 1.0
Dec 03 02:04:27 compute-0 wizardly_mayer[428169]: --> All data devices are unavailable
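[editor's note] wizardly_mayer is likely a ceph-volume device-selection pass: it sees 0 physical and 3 LVM data devices and declares them all unavailable, which is expected here because OSDs are already prepared on those LVs (the lvm list output further down confirms three existing OSDs). A sketch for asking ceph-volume why devices are rejected, assuming the `inventory` subcommand and its `available`/`rejected_reasons` fields (both exist in ceph-volume, but their use here is our illustration, not something this log shows):

import json
import subprocess

def rejected_devices() -> dict:
    # Mirrors the wrapper pattern visible elsewhere in this log:
    # `cephadm ceph-volume -- <subcommand> --format json`.
    out = subprocess.run(
        ["sudo", "cephadm", "ceph-volume", "--",
         "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    devices = json.loads(out.stdout)
    # Each entry carries an `available` flag and a `rejected_reasons` list;
    # devices already holding an OSD come back as unavailable.
    return {d["path"]: d["rejected_reasons"]
            for d in devices if not d.get("available")}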
Dec 03 02:04:27 compute-0 systemd[1]: libpod-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope: Deactivated successfully.
Dec 03 02:04:27 compute-0 podman[428153]: 2025-12-03 02:04:27.997100445 +0000 UTC m=+1.657237853 container died f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:04:27 compute-0 systemd[1]: libpod-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope: Consumed 1.276s CPU time.
Dec 03 02:04:28 compute-0 ceph-mon[192821]: pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 03 02:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f-merged.mount: Deactivated successfully.
Dec 03 02:04:28 compute-0 nova_compute[351485]: 2025-12-03 02:04:28.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:28 compute-0 podman[428153]: 2025-12-03 02:04:28.099178734 +0000 UTC m=+1.759316152 container remove f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:04:28 compute-0 systemd[1]: libpod-conmon-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope: Deactivated successfully.
Dec 03 02:04:28 compute-0 sudo[428051]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:28 compute-0 sudo[428230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:28 compute-0 sudo[428230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:28 compute-0 sudo[428230]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:04:28
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'images', 'vms', '.mgr']
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:04:28 compute-0 sudo[428255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:04:28 compute-0 sudo[428255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:28 compute-0 sudo[428255]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:04:28 compute-0 sudo[428280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:28 compute-0 sudo[428280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:28 compute-0 sudo[428280]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:28 compute-0 sudo[428305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:04:28 compute-0 sudo[428305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:04:29 compute-0 nova_compute[351485]: 2025-12-03 02:04:29.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.271057548 +0000 UTC m=+0.093182478 container create a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.235889957 +0000 UTC m=+0.058014937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:04:29 compute-0 systemd[1]: Started libpod-conmon-a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8.scope.
Dec 03 02:04:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.427300694 +0000 UTC m=+0.249425664 container init a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.444623403 +0000 UTC m=+0.266748293 container start a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.449819269 +0000 UTC m=+0.271944159 container attach a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:04:29 compute-0 great_mcclintock[428385]: 167 167
Dec 03 02:04:29 compute-0 systemd[1]: libpod-a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8.scope: Deactivated successfully.
Dec 03 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.459380419 +0000 UTC m=+0.281505339 container died a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 02:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9142a6e55d6191f8e89ce0678f6b4f8cda0591df01da98e1e828e300d9c4098-merged.mount: Deactivated successfully.
Dec 03 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.535184396 +0000 UTC m=+0.357309286 container remove a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:04:29 compute-0 systemd[1]: libpod-conmon-a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8.scope: Deactivated successfully.
Dec 03 02:04:29 compute-0 podman[158098]: time="2025-12-03T02:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:04:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:04:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
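[editor's note] PID 158098 is a long-running `podman system service` instance answering libpod REST calls over its UNIX socket; the two GETs here (a full container list, then a one-shot stats query) have the shape of a metrics poller. A sketch that replays the first call from the Python standard library, assuming the default root socket at /run/podman/podman.sock (the socket path does not appear in the log):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a UNIX socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = json.loads(conn.getresponse().read())
print([c["Names"] for c in body])  # one list of names per container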
Dec 03 02:04:29 compute-0 podman[428411]: 2025-12-03 02:04:29.840482425 +0000 UTC m=+0.082003854 container create e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:04:29 compute-0 podman[428411]: 2025-12-03 02:04:29.809874252 +0000 UTC m=+0.051395751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:04:29 compute-0 systemd[1]: Started libpod-conmon-e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24.scope.
Dec 03 02:04:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.00593509 +0000 UTC m=+0.247456539 container init e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:04:30 compute-0 ceph-mon[192821]: pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.034491896 +0000 UTC m=+0.276013335 container start e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.043304614 +0000 UTC m=+0.284826043 container attach e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:04:30 compute-0 podman[428431]: 2025-12-03 02:04:30.095786574 +0000 UTC m=+0.113460710 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Dec 03 02:04:30 compute-0 podman[428428]: 2025-12-03 02:04:30.094973731 +0000 UTC m=+0.123070131 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1755695350, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:04:30 compute-0 podman[428430]: 2025-12-03 02:04:30.096110233 +0000 UTC m=+0.123896124 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:04:30 compute-0 podman[428433]: 2025-12-03 02:04:30.133693293 +0000 UTC m=+0.154887239 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3)
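[editor's note] These four health_status events are podman healthcheck timers firing for the edpm-managed containers (kepler, multipathd, openstack_network_exporter, node_exporter, ovn_controller); everything after `health_status=healthy` is container labels, including `config_data`, which edpm_ansible stores as a Python dict literal (single quotes, bare `True`), not JSON. A sketch that pulls the label back off a container and decodes it, assuming podman CLI access on the host:

import ast
import subprocess

def config_data(container: str) -> dict:
    # The label value is a Python literal, so ast.literal_eval is the
    # safe decoder here; json.loads would choke on the single quotes.
    out = subprocess.run(
        ["podman", "inspect", container, "--format",
         '{{index .Config.Labels "config_data"}}'],
        capture_output=True, text=True, check=True,
    )
    return ast.literal_eval(out.stdout.strip())

print(config_data("ovn_controller")["healthcheck"]["test"])
# prints: /openstack/healthcheck  (matching the event above)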
Dec 03 02:04:30 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 02:04:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:30 compute-0 fervent_jones[428427]: {
Dec 03 02:04:30 compute-0 fervent_jones[428427]:     "0": [
Dec 03 02:04:30 compute-0 fervent_jones[428427]:         {
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "devices": [
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "/dev/loop3"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             ],
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_name": "ceph_lv0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_size": "21470642176",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "name": "ceph_lv0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "tags": {
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cluster_name": "ceph",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.crush_device_class": "",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.encrypted": "0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osd_id": "0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.type": "block",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.vdo": "0"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             },
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "type": "block",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "vg_name": "ceph_vg0"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:         }
Dec 03 02:04:30 compute-0 fervent_jones[428427]:     ],
Dec 03 02:04:30 compute-0 fervent_jones[428427]:     "1": [
Dec 03 02:04:30 compute-0 fervent_jones[428427]:         {
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "devices": [
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "/dev/loop4"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             ],
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_name": "ceph_lv1",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_size": "21470642176",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "name": "ceph_lv1",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "tags": {
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cluster_name": "ceph",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.crush_device_class": "",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.encrypted": "0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osd_id": "1",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.type": "block",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.vdo": "0"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             },
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "type": "block",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "vg_name": "ceph_vg1"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:         }
Dec 03 02:04:30 compute-0 fervent_jones[428427]:     ],
Dec 03 02:04:30 compute-0 fervent_jones[428427]:     "2": [
Dec 03 02:04:30 compute-0 fervent_jones[428427]:         {
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "devices": [
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "/dev/loop5"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             ],
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_name": "ceph_lv2",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_size": "21470642176",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "name": "ceph_lv2",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "tags": {
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.cluster_name": "ceph",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.crush_device_class": "",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.encrypted": "0",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osd_id": "2",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.type": "block",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:                 "ceph.vdo": "0"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             },
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "type": "block",
Dec 03 02:04:30 compute-0 fervent_jones[428427]:             "vg_name": "ceph_vg2"
Dec 03 02:04:30 compute-0 fervent_jones[428427]:         }
Dec 03 02:04:30 compute-0 fervent_jones[428427]:     ]
Dec 03 02:04:30 compute-0 fervent_jones[428427]: }
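[editor's note] This JSON block is the stdout of the `ceph-volume lvm list --format json` call issued via sudo at 02:04:28: a map from OSD id ("0", "1", "2") to the logical volumes backing it, with the ceph.* LV tags present both raw (`lv_tags`) and parsed (`tags`). A sketch that reduces the payload to one row per OSD, assuming the output has been captured to a file (the filename is ours):

import json

def osd_table(payload: dict) -> list:
    # One (osd_id, osd_fsid, lv_path, devices) row per logical volume.
    rows = []
    for osd_id, lvs in sorted(payload.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            rows.append((osd_id,
                         lv["tags"]["ceph.osd_fsid"],
                         lv["lv_path"],
                         ",".join(lv["devices"])))
    return rows

with open("lvm_list.json") as fh:  # captured output, path assumed
    for row in osd_table(json.load(fh)):
        print(*row)
# e.g. 0 551e0f4a-0b7e-47cf-9522-b82f94d4038c /dev/ceph_vg0/ceph_lv0 /dev/loop3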
Dec 03 02:04:30 compute-0 systemd[1]: libpod-e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24.scope: Deactivated successfully.
Dec 03 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.865596902 +0000 UTC m=+1.107118321 container died e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 02:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b-merged.mount: Deactivated successfully.
Dec 03 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.955313432 +0000 UTC m=+1.196834891 container remove e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:04:30 compute-0 systemd[1]: libpod-conmon-e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24.scope: Deactivated successfully.
Dec 03 02:04:31 compute-0 sudo[428305]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:31 compute-0 sudo[428533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:31 compute-0 sudo[428533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:31 compute-0 sudo[428533]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:31 compute-0 sudo[428558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:04:31 compute-0 sudo[428558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:31 compute-0 sudo[428558]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:31 compute-0 sudo[428583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:31 compute-0 sudo[428583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:31 compute-0 sudo[428583]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:04:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:04:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:04:31 compute-0 sudo[428608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:04:31 compute-0 sudo[428608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
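[editor's note] Having consumed the LVM listing, cephadm immediately issues the companion `raw list` query (same wrapper script, image, and fsid) to find OSDs prepared directly on raw block devices; on this host it should come back empty, since all three OSDs are LVM-backed. A sketch wrapping the exact command from the sudo line above (paths, image digest, and flags copied verbatim from the log; only the helper name is ours):

import json
import subprocess

CEPHADM = ("/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"

def ceph_volume_raw_list() -> dict:
    # Same invocation the orchestrator makes via sudo in the log above.
    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)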
Dec 03 02:04:32 compute-0 ceph-mon[192821]: pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.039969787 +0000 UTC m=+0.102263324 container create 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.010115856 +0000 UTC m=+0.072409393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:04:32 compute-0 systemd[1]: Started libpod-conmon-7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac.scope.
Dec 03 02:04:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:04:32 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 03 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.213216292 +0000 UTC m=+0.275509839 container init 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.225279502 +0000 UTC m=+0.287573049 container start 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.232770204 +0000 UTC m=+0.295063711 container attach 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:04:32 compute-0 angry_dirac[428688]: 167 167
Dec 03 02:04:32 compute-0 systemd[1]: libpod-7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac.scope: Deactivated successfully.
Dec 03 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.239703239 +0000 UTC m=+0.301996776 container died 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af5c1600c2d78876e4bbb394e7becf2d3e602c5e08a956e820adf1a4e6de85a-merged.mount: Deactivated successfully.
Dec 03 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.313168831 +0000 UTC m=+0.375462338 container remove 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:04:32 compute-0 systemd[1]: libpod-conmon-7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac.scope: Deactivated successfully.
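The angry_dirac container above lives for barely 0.2 s and emits only "167 167" before exiting: that pair matches the uid/gid of the ceph user baked into the image, and cephadm routinely launches throwaway containers like this to probe an image before deploying daemons. The journal records only the container's output, never its argv, so the Python sketch below is a plausible reconstruction of the probe, not the literal command; the stat invocation and the /var/lib/ceph path are assumptions.

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Assumed probe: ask the image which uid/gid owns /var/lib/ceph.
    # Only the output ("167 167") appears in the journal above, not the argv.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    )
    uid, gid = result.stdout.split()
    print(uid, gid)  # expected: 167 167
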
Dec 03 02:04:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.616808813 +0000 UTC m=+0.095116413 container create 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.577215777 +0000 UTC m=+0.055523427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:04:32 compute-0 systemd[1]: Started libpod-conmon-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope.
Dec 03 02:04:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.78906941 +0000 UTC m=+0.267377050 container init 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.819797726 +0000 UTC m=+0.298105326 container start 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.826190437 +0000 UTC m=+0.304498077 container attach 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:04:33 compute-0 nova_compute[351485]: 2025-12-03 02:04:33.089 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:33 compute-0 bold_blackburn[428728]: {
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "osd_id": 2,
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "type": "bluestore"
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:     },
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "osd_id": 1,
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "type": "bluestore"
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:     },
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "osd_id": 0,
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:         "type": "bluestore"
Dec 03 02:04:33 compute-0 bold_blackburn[428728]:     }
Dec 03 02:04:33 compute-0 bold_blackburn[428728]: }
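The JSON block printed by bold_blackburn is a per-host OSD inventory keyed by OSD fsid (osd_uuid), mapping each of the three bluestore OSDs to its backing LVM device; it reads like ceph-volume list output gathered by cephadm inside another short-lived container. A minimal Python sketch of reducing such a payload to an osd_id-to-device map follows; the JSON literal is one entry copied from the log, trimmed for brevity, and everything else is illustrative.

    import json

    # One entry copied verbatim from the bold_blackburn output above
    # (the full payload has three such entries, one per OSD).
    raw = """
    {
        "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
            "type": "bluestore"
        }
    }
    """

    inventory = json.loads(raw)
    # The outer keys are OSD fsids; each value names the OSD and its device.
    by_id = {entry["osd_id"]: entry["device"] for entry in inventory.values()}
    for osd_id in sorted(by_id):
        print(f"osd.{osd_id} -> {by_id[osd_id]}")
    # expected: osd.2 -> /dev/mapper/ceph_vg2-ceph_lv2
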
Dec 03 02:04:34 compute-0 systemd[1]: libpod-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope: Deactivated successfully.
Dec 03 02:04:34 compute-0 systemd[1]: libpod-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope: Consumed 1.192s CPU time.
Dec 03 02:04:34 compute-0 podman[428712]: 2025-12-03 02:04:34.0291946 +0000 UTC m=+1.507502190 container died 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:04:34 compute-0 ceph-mon[192821]: pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916-merged.mount: Deactivated successfully.
Dec 03 02:04:34 compute-0 podman[428712]: 2025-12-03 02:04:34.125988459 +0000 UTC m=+1.604296029 container remove 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:04:34 compute-0 nova_compute[351485]: 2025-12-03 02:04:34.138 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:34 compute-0 systemd[1]: libpod-conmon-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope: Deactivated successfully.
Dec 03 02:04:34 compute-0 sudo[428608]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:04:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:04:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:04:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:04:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 88695869-7b15-4888-bff3-f25f866866cd does not exist
Dec 03 02:04:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d0ff8397-6576-4abb-98d6-8f4ea3087834 does not exist
Dec 03 02:04:34 compute-0 sudo[428774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:04:34 compute-0 sudo[428774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:34 compute-0 sudo[428774]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:34 compute-0 sudo[428799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:04:34 compute-0 sudo[428799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:04:34 compute-0 sudo[428799]: pam_unix(sudo:session): session closed for user root
Dec 03 02:04:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:04:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:04:35 compute-0 ceph-mon[192821]: pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:37 compute-0 sshd-session[428166]: error: kex_exchange_identification: read: Connection timed out
Dec 03 02:04:37 compute-0 sshd-session[428166]: banner exchange: Connection from 14.103.201.7 port 54202: Connection timed out
Dec 03 02:04:37 compute-0 ceph-mon[192821]: pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:38 compute-0 nova_compute[351485]: 2025-12-03 02:04:38.091 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
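One sanity check worth recording: every pg target the autoscaler logs above equals usage_ratio x bias x 300, which is exactly what a 3-OSD cluster at the default mon_target_pg_per_osd of 100 would produce. The 300 factor is inferred from the numbers, not stated in the log; the 64411926528 in the effective_target_ratio lines is the root's capacity in bytes, the same ~60 GiB the pgmap lines report. A short Python verification under that assumption:

    # (usage_ratio, bias, logged pg target) copied from the autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0, 0.0021557249951162337),
        "vms":                (0.0022107945480888194,  1.0, 0.6632383644266459),
        "images":             (0.00025334537995702286, 1.0, 0.07600361398710685),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0, 0.0006104707950771635),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }

    TARGET_PGS = 3 * 100  # assumption: 3 OSDs x mon_target_pg_per_osd (default 100)

    for name, (ratio, bias, logged) in pools.items():
        predicted = ratio * bias * TARGET_PGS
        assert abs(predicted - logged) < 1e-12, name
        print(f"{name}: {predicted:.6g} matches the logged target")

How those fractional targets then become "quantized to 16" or "quantized to 32" involves pg_num_min floors and the autoscaler's change threshold, which this sketch deliberately does not model.
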
Dec 03 02:04:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:39 compute-0 nova_compute[351485]: 2025-12-03 02:04:39.142 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:39 compute-0 ceph-mon[192821]: pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:41 compute-0 ceph-mon[192821]: pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:43 compute-0 nova_compute[351485]: 2025-12-03 02:04:43.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:43 compute-0 ceph-mon[192821]: pgmap v1504: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:44 compute-0 nova_compute[351485]: 2025-12-03 02:04:44.146 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:45 compute-0 ceph-mon[192821]: pgmap v1505: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:04:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13556132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:04:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:04:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13556132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:04:47 compute-0 ceph-mon[192821]: pgmap v1506: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/13556132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:04:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/13556132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:04:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:48 compute-0 nova_compute[351485]: 2025-12-03 02:04:48.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:49 compute-0 nova_compute[351485]: 2025-12-03 02:04:49.150 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:49 compute-0 ceph-mon[192821]: pgmap v1507: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:49 compute-0 podman[428825]: 2025-12-03 02:04:49.890420358 +0000 UTC m=+0.127233399 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:04:49 compute-0 podman[428826]: 2025-12-03 02:04:49.897348523 +0000 UTC m=+0.133579008 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 03 02:04:49 compute-0 podman[428827]: 2025-12-03 02:04:49.90080077 +0000 UTC m=+0.133680220 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:04:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:51 compute-0 sshd[113879]: Timeout before authentication for connection from 14.103.201.7 to 38.102.83.36, pid = 426320
Dec 03 02:04:51 compute-0 ceph-mon[192821]: pgmap v1508: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:53 compute-0 nova_compute[351485]: 2025-12-03 02:04:53.101 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:53 compute-0 ceph-mon[192821]: pgmap v1509: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:54 compute-0 nova_compute[351485]: 2025-12-03 02:04:54.153 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:54 compute-0 podman[428884]: 2025-12-03 02:04:54.878357618 +0000 UTC m=+0.125782998 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 03 02:04:55 compute-0 ceph-mon[192821]: pgmap v1510: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:57 compute-0 ceph-mon[192821]: pgmap v1511: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:04:58 compute-0 nova_compute[351485]: 2025-12-03 02:04:58.105 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:04:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:04:58 compute-0 podman[428902]: 2025-12-03 02:04:58.88000061 +0000 UTC m=+0.133542957 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, build-date=2024-09-18T21:23:30, version=9.4, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0)
Dec 03 02:04:59 compute-0 nova_compute[351485]: 2025-12-03 02:04:59.158 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:04:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:04:59.633 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:04:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:04:59.634 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:04:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:04:59.634 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
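Those acquire/acquired/released triplets are the standard oslo.concurrency pattern: the agent runs _check_child_processes under a named lock, and the inner wrapper in lockutils.py emits the waited/held timings seen above. A minimal sketch of the producing pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body runs with the named lock held; lockutils logs the
        # "Acquiring lock ... acquired ... released" lines with timings.
        pass

    check_child_processes()
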
Dec 03 02:04:59 compute-0 podman[158098]: time="2025-12-03T02:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:04:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:04:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
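The two HTTP requests above are podman's REST service (pid 158098) answering a libpod API client over the unix socket, most plausibly the prometheus-podman-exporter whose CONTAINER_HOST is logged earlier as unix:///run/podman/podman.sock. The same container listing can be fetched with only the Python standard library; root access to the socket is assumed, and the field names follow the libpod JSON schema.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over a unix-domain socket; the host name is a placeholder.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint and query string as the GET logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Id", "")[:12], c.get("Names"), c.get("State"))
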
Dec 03 02:04:59 compute-0 ceph-mon[192821]: pgmap v1512: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:00 compute-0 podman[428925]: 2025-12-03 02:05:00.892015766 +0000 UTC m=+0.118084651 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:05:00 compute-0 podman[428924]: 2025-12-03 02:05:00.899755884 +0000 UTC m=+0.129359668 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 03 02:05:00 compute-0 podman[428926]: 2025-12-03 02:05:00.903727106 +0000 UTC m=+0.122570827 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 03 02:05:00 compute-0 podman[428923]: 2025-12-03 02:05:00.931796538 +0000 UTC m=+0.169173812 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 03 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:05:01 compute-0 ceph-mon[192821]: pgmap v1513: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:03 compute-0 nova_compute[351485]: 2025-12-03 02:05:03.109 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:03 compute-0 ceph-mon[192821]: pgmap v1514: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:04 compute-0 nova_compute[351485]: 2025-12-03 02:05:04.164 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:04 compute-0 nova_compute[351485]: 2025-12-03 02:05:04.580 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:04 compute-0 sshd-session[429011]: Invalid user frontend from 45.78.219.140 port 45812
Dec 03 02:05:05 compute-0 sshd-session[429011]: Received disconnect from 45.78.219.140 port 45812:11: Bye Bye [preauth]
Dec 03 02:05:05 compute-0 sshd-session[429011]: Disconnected from invalid user frontend 45.78.219.140 port 45812 [preauth]
Dec 03 02:05:05 compute-0 ceph-mon[192821]: pgmap v1515: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.659 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.660 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.661 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.662 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.663 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:05:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:05:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976053642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.258 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
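That 0.595 s subprocess is nova's resource tracker sizing its RBD backend: it shells out to the exact command logged and reads pool capacity from the JSON. A standalone reproduction under the same assumptions (the client.openstack keyring is readable, and the stats keys follow ceph df's JSON output):

    import json
    import subprocess
    import time

    # Command line copied verbatim from the nova_compute log above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    start = time.monotonic()
    proc = subprocess.run(cmd, check=True, capture_output=True, text=True)
    elapsed = time.monotonic() - start

    stats = json.loads(proc.stdout).get("stats", {})
    total = stats.get("total_bytes", 0)
    avail = stats.get("total_avail_bytes", 0)
    print(f"ceph df took {elapsed:.3f}s: "
          f"{avail / 2**30:.1f} GiB avail of {total / 2**30:.1f} GiB")
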
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.408 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.409 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.410 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.422 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.423 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.424 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.433 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.434 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.434 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.443 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.443 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.444 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
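The twelve "skipping disk" lines above come from nova's _get_instance_disk_info_from_config: disks whose libvirt XML carries no local file path (typically network-backed disks such as rbd volumes) are excluded from per-disk size accounting, three per instance across the four instances here. A minimal sketch of that filter, with an illustrative helper name and a toy domain XML rather than nova's actual code:

    import xml.etree.ElementTree as ET

    def local_disk_paths(domain_xml: str):
        """Yield only disks that expose a local file path; pathless
        (network-backed) disks are skipped, mirroring the DEBUG lines."""
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            source = disk.find("source")
            path = source.get("file") if source is not None else None
            if path is None:
                continue  # e.g. <source protocol='rbd' .../> has no path
            yield path

    demo = """<domain><devices>
      <disk type='network'><source protocol='rbd' name='vms/disk'/></disk>
      <disk type='file'><source file='/var/lib/nova/instances/x/disk'/></disk>
    </devices></domain>"""
    print(list(local_disk_paths(demo)))  # only the file-backed path survives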
Dec 03 02:05:08 compute-0 ceph-mon[192821]: pgmap v1516: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:08 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/976053642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.041 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.043 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3191MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.044 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.044 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
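The two lockutils lines above show the resource tracker serializing on the "compute_resources" lock (acquired after waiting 0.001s; the matching release at 02:05:08.952 reports 0.907s held). A minimal sketch of the same pattern with oslo.concurrency, assuming the library is installed and using an illustrative function body:

    import time
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")  # same lock name as the log
    def update_available_resource():
        # Work done while the lock is held; concurrent callers block, which
        # is why the log reports both "waited" and "held" durations.
        time.sleep(0.1)

    update_available_resource()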
Dec 03 02:05:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.112 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.170 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
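The "Final resource view" figures are internally consistent with the placement allocations logged just above: four instances, each holding VCPU=1, MEMORY_MB=512, DISK_GB=2, plus the 512 MB of reserved host memory visible in the MEMORY_MB inventory logged just below. A quick arithmetic check:

    instances = 4
    used_vcpus = instances * 1            # matches used_vcpus=4
    used_ram_mb = instances * 512 + 512   # instance RAM + reserved host RAM
    used_disk_gb = instances * 2          # DISK_GB=2 per instance
    assert (used_vcpus, used_ram_mb, used_disk_gb) == (4, 2560, 8)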
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.187 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.208 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.209 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
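Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the figures just logged imply 32 schedulable vCPUs, 7167 MB of RAM, and about 52 GB of disk:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2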
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.227 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.259 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.397 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:05:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:05:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242192799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.917 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
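The processutils pair above (command issued at .397, return code 0 after 0.520s) is nova's RBD storage probe, and it lines up with the client.openstack "df" dispatches in the ceph-mon audit log. A stand-alone sketch of the same probe, assuming a reachable cluster and the client.openstack keyring the log implies:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],   # exact command from the log
    )
    stats = json.loads(out)
    # Cluster-wide availability feeds free_disk in the resource view.
    print(stats["stats"]["total_avail_bytes"])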
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.930 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.948 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.951 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.952 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:05:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1242192799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:05:09 compute-0 nova_compute[351485]: 2025-12-03 02:05:09.167 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:09 compute-0 nova_compute[351485]: 2025-12-03 02:05:09.954 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:09 compute-0 nova_compute[351485]: 2025-12-03 02:05:09.955 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:05:10 compute-0 ceph-mon[192821]: pgmap v1517: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:10 compute-0 nova_compute[351485]: 2025-12-03 02:05:10.503 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:05:10 compute-0 nova_compute[351485]: 2025-12-03 02:05:10.504 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:05:10 compute-0 nova_compute[351485]: 2025-12-03 02:05:10.504 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:05:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:12 compute-0 ceph-mon[192821]: pgmap v1518: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.215 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.230 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.232 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
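Lines 02:05:10.503 through 02:05:12.232 are one pass of the _heal_instance_info_cache periodic task: take a per-instance refresh_cache lock, forcefully refresh network_info from neutron, persist the cache, release. A minimal sketch of that pattern, assuming oslo.concurrency and with the neutron and persistence callables stubbed out:

    from oslo_concurrency import lockutils

    def heal_one(instance_uuid, refresh_from_neutron, save_cache):
        # One instance per periodic run, guarded by its refresh_cache lock.
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            nw_info = refresh_from_neutron(instance_uuid)  # forceful refresh
            save_cache(instance_uuid, nw_info)

    heal_one("52862152-12c7-4236-89c3-67750ecbed7a",
             refresh_from_neutron=lambda uuid: [],
             save_cache=lambda uuid, nw: print("cached", uuid))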
Dec 03 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.234 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.234 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:13 compute-0 nova_compute[351485]: 2025-12-03 02:05:13.116 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:13 compute-0 nova_compute[351485]: 2025-12-03 02:05:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:14 compute-0 ceph-mon[192821]: pgmap v1519: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:14 compute-0 nova_compute[351485]: 2025-12-03 02:05:14.172 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:16 compute-0 ceph-mon[192821]: pgmap v1520: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:16 compute-0 sshd-session[429057]: Received disconnect from 154.113.10.113 port 47680:11: Bye Bye [preauth]
Dec 03 02:05:16 compute-0 sshd-session[429057]: Disconnected from authenticating user root 154.113.10.113 port 47680 [preauth]
Dec 03 02:05:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:18 compute-0 ceph-mon[192821]: pgmap v1521: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:18 compute-0 nova_compute[351485]: 2025-12-03 02:05:18.120 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:18 compute-0 nova_compute[351485]: 2025-12-03 02:05:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:05:18 compute-0 nova_compute[351485]: 2025-12-03 02:05:18.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
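The skip above is purely a configuration gate: the reclaim task only sweeps soft-deleted instances when reclaim_instance_interval is positive in nova.conf, and at its default of 0 the task exits immediately, exactly as logged:

    reclaim_instance_interval = 0   # nova.conf [DEFAULT] value implied by the log
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")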
Dec 03 02:05:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:19 compute-0 nova_compute[351485]: 2025-12-03 02:05:19.176 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.506 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, processing can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.507 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
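The two manager lines above say the [pollsters] source has more pollsters than worker threads, so the polls that follow are serialized onto a single ThreadPoolExecutor thread. A toy reproduction of that executor pattern, with pollster names taken from the polls below:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["memory.usage", "network.outgoing.packets",
                 "network.incoming.bytes.delta"]
    with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
        for name in pollsters:
            executor.submit(print, "polling", name)      # stand-in for one poll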
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.520 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.526 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.531 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.537 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.538 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.539 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:05:19.538887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.581 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.630 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.672 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.708 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
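The memory.usage volumes above are MiB of guest memory in use; against the 512 MiB m1.small flavor reported in the discovery data, all four guests sit near 9.6% utilization:

    flavor_ram_mib = 512
    samples_mib = [49.00390625, 49.01171875, 49.07421875, 48.88671875]
    for sample in samples_mib:
        print(f"{sample / flavor_ram_mib:.1%}")   # ~9.6% each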
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:05:19.709394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.716 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.720 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.725 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.730 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:05:19.731886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.733 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.733 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
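Unlike the cumulative packet counters, a ".delta" meter reports the difference between two successive readings of the underlying counter, which is why three guests show 0 and one shows 42 for the same interval. The transformation, in miniature:

    def deltas(readings):
        # Successive differences of a cumulative counter.
        prev = None
        for value in readings:
            if prev is not None:
                yield value - prev
            prev = value

    print(list(deltas([1000, 1000, 1042])))  # -> [0, 42], cf. the volumes above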
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.735 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:05:19.734740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.735 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.736 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.736 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:05:19.737848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.739 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.739 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
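The block above is one complete pollster cycle: the manager runs discovery for the pollster, checks whether it needs workload coordination (none here, since no hashring group is configured), records a heartbeat, and then emits one sample per instance interface. A minimal Python sketch of that control flow, using stand-in classes rather than the real ceilometer ones:

    # Illustrative stand-ins for the polling flow seen in the log; this is
    # not the actual ceilometer implementation.
    from datetime import datetime, timezone

    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            # Real pollsters read libvirt stats per instance; fake a zero here.
            for instance_id in resources:
                yield instance_id, self.name, 0

    class AgentManager:
        def __init__(self, pollsters):
            self.pollsters = pollsters
            self.heartbeats = {}

        def discover(self):
            # "local_instances": the instances running on this compute host.
            return ["52862152-12c7-4236-89c3-67750ecbed7a"]

        def run_cycle(self):
            for pollster in self.pollsters:
                resources = self.discover()              # discovery process
                # coordination check would go here; skipped with no hashring
                self.heartbeats[pollster.name] = datetime.now(timezone.utc)
                for rid, name, volume in pollster.get_samples(resources):
                    print("%s/%s volume: %s" % (rid, name, volume))
                print("Finished polling pollster %s" % pollster.name)

    AgentManager([Pollster("network.outgoing.packets.error")]).run_cycle()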
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.741 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:05:19.740708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.741 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:05:19.743445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.772 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.772 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.773 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.807 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.809 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.809 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.839 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.840 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.840 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.870 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.871 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.872 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
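The disk.device.capacity volumes are plain byte counts: every instance reports two 1073741824-byte devices (exactly 1 GiB each) plus one small device of 583680 or 485376 bytes, presumably a config drive (an assumption; the log only shows sizes). A quick unit check:

    # Convert the capacity volumes logged above (bytes) to GiB/KiB.
    for volume in (1073741824, 583680, 485376):
        print("%d bytes = %.3f GiB = %.0f KiB" % (volume, volume / 2**30, volume / 2**10))
    # 1073741824 bytes is exactly 1 GiB; 583680 is 570 KiB; 485376 is 474 KiB.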
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.875 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
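Unlike the other meters in this cycle, network.incoming.bytes.rate is skipped outright: its discovery pass found nothing new, so the pollster never runs. A hedged sketch of that short-circuit, with a hypothetical per-pollster cache (the real caching semantics live inside ceilometer's polling task):

    # Hypothetical skip-on-empty check; illustrative only.
    def maybe_poll(name, discovered, seen):
        new = [r for r in discovered if r not in seen.setdefault(name, set())]
        if not new:
            print("Skip pollster %s, no new resources found this cycle" % name)
            return
        seen[name].update(new)
        print("Polling pollster %s" % name)

    seen = {"network.incoming.bytes.rate": {"instance-a"}}
    maybe_poll("network.incoming.bytes.rate", ["instance-a"], seen)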
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.877 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.878 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:05:19.877941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.970 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.971 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.971 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.049 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.050 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.051 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceph-mon[192821]: pgmap v1522: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
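The single ceph-mon line interleaved here is the monitor's periodic pgmap summary: 321 placement groups, all active+clean, 263 MiB of object data, 357 MiB of raw space used out of 60 GiB available. If you need those fields programmatically, a small parser for lines of this shape (a convenience sketch, not a Ceph tool) works:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v1522: 321 pgs: 321 active+clean; "
            "263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail")
    print(PGMAP.search(line).groupdict())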
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.154 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.155 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.156 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.253 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.254 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.254 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.256 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.260 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.260 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.262 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:05:20.259996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.263 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.264 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
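The network.incoming.bytes volumes (8364, 1612, 1528, 2046) are cumulative rx byte counters, one per instance vNIC, read from libvirt. The same counters can be read directly with the libvirt Python bindings; the tap device name below is a placeholder:

    import libvirt  # python3-libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("52862152-12c7-4236-89c3-67750ecbed7a")
    # interfaceStats() returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                           tx_bytes, tx_packets, tx_errs, tx_drop).
    rx_bytes, rx_packets, _, rx_drop, *_ = dom.interfaceStats("tap0")
    print("rx_bytes=%d rx_packets=%d rx_drop=%d" % (rx_bytes, rx_packets, rx_drop))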
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.266 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.266 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.267 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.267 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.268 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.268 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.269 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.269 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.270 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:05:20.266089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.270 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.271 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.272 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:05:20.274514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.275 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.276 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.277 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.278 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.278 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.279 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.280 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.280 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.281 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.282 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.282 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.283 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
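disk.device.read.latency is cumulative time spent on reads in nanoseconds, and disk.device.read.requests is the matching request count, so dividing the two gives the lifetime mean latency per read. Using the first device of instance 52862152-12c7-4236-89c3-67750ecbed7a from the samples above:

    total_ns = 1829221883  # disk.device.read.latency, first device
    requests = 844         # disk.device.read.requests, same device
    print("mean read latency: %.2f ms" % (total_ns / requests / 1e6))
    # -> mean read latency: 2.17 ms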
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.285 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.286 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.286 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.286 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.287 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.288 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
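power.state volume 1 on all four instances is libvirt's domain state code for a running guest (VIR_DOMAIN_RUNNING == 1). The full mapping, checked against the libvirt constants:

    import libvirt

    STATES = {
        libvirt.VIR_DOMAIN_NOSTATE: "nostate",          # 0
        libvirt.VIR_DOMAIN_RUNNING: "running",          # 1
        libvirt.VIR_DOMAIN_BLOCKED: "blocked",          # 2
        libvirt.VIR_DOMAIN_PAUSED: "paused",            # 3
        libvirt.VIR_DOMAIN_SHUTDOWN: "shutdown",        # 4
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",          # 5
        libvirt.VIR_DOMAIN_CRASHED: "crashed",          # 6
        libvirt.VIR_DOMAIN_PMSUSPENDED: "pmsuspended",  # 7
    }
    print(STATES[1])  # -> running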
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.291 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.291 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.292 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:05:20.285959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.292 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.293 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:05:20.290828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.293 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.294 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.294 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.295 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.295 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.296 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.296 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
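disk.device.usage reports each device's physical size in bytes and here matches disk.device.capacity exactly, as expected for fully allocated volumes (likely rbd-backed given the ceph monitor on this host, though the log does not say so). Both figures correspond to what libvirt's blockInfo() exposes; the device name below is a placeholder:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274")
    # blockInfo() returns [capacity, allocation, physical], all in bytes.
    capacity, allocation, physical = dom.blockInfo("vda")
    print("capacity=%d allocation=%d physical=%d" % (capacity, allocation, physical))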
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.299 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.299 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:05:20.298797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.300 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.300 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.301 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.302 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.302 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.303 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.303 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.304 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.304 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.305 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.307 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.307 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6998528252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.308 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:05:20.307329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.309 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.309 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5579657720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.310 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.310 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.310 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.311 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.311 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.312 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.312 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.313 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.314 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.316 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.316 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.317 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:05:20.315422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.318 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.318 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.319 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.319 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.320 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.320 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.321 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.321 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.324 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.325 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.326 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.326 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
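[annotation] The ten lines above are one complete per-pollster cycle, and every meter in this section repeats it: discovery (local_instances), a coordination check against the hashrings, a heartbeat update, one sample per discovered instance, then the "Finished polling" marker. A minimal sketch of that control flow follows; the names (run_pollster, get_samples) are illustrative assumptions, not ceilometer's actual internals:

    import logging
    from datetime import datetime, timezone

    LOG = logging.getLogger("polling.sketch")

    def run_pollster(pollster, discover, heartbeats, hashrings=None):
        # 1. discovery: e.g. local_instances on this compute node
        resources = discover()
        LOG.info("Polling pollster %s", pollster.name)
        # 2. coordination check -- the log above shows hashrings [None],
        #    i.e. no polling source requires coordination for this pollster
        if hashrings:
            LOG.debug("coordination required, hashrings: %s", hashrings)
        # 3. heartbeat update (the "Pollster heartbeat update" line)
        heartbeats[pollster.name] = datetime.now(timezone.utc)
        # 4. one sample per discovered resource (the "volume:" lines)
        for res in resources:
            for sample in pollster.get_samples(res):
                LOG.debug("%s/%s volume: %s", res, pollster.name, sample)
        LOG.info("Finished polling pollster %s", pollster.name)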
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:05:20.324357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 347260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 38050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.329 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 39080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.329 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 42600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
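[annotation] The cpu volumes above are cumulative guest CPU time in nanoseconds, so 347260000000 ns is about 347.26 s of CPU time for instance 52862152-12c7-4236-89c3-67750ecbed7a. A hedged sketch of turning two successive cumulative samples into a utilisation percentage; the 10 s interval, the previous reading, and vcpus=1 are assumptions for illustration:

    # Utilisation from two cumulative cpu samples (nanoseconds of guest
    # CPU time, as logged above). interval_s and vcpus are assumptions.
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        return (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    # Instance 52862152-... read 347260000000 ns this cycle; if the
    # previous poll 10 s earlier had read 347060000000 ns (hypothetical):
    print(cpu_util_percent(347_060_000_000, 347_260_000_000, 10))  # 2.0 (%)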
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.331 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:05:20.328105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 7568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:05:20.331130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:05:20.333057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.334 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.334 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.336 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.336 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:05:20.335606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.338 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.338 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.338 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.339 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.339 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.339 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
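[annotation] disk.device.allocation is reported in bytes, three devices per instance: 1073741824 B is exactly 1 GiB, while the small third value (583680 or 485376 B) is a few hundred KiB, plausibly a config drive, though that is an inference from size alone, not from the log. A quick conversion of the values logged above:

    # Byte values from the disk.device.allocation samples above.
    for vol in (1073741824, 583680, 485376):
        print(f"{vol} B = {vol / 2**30:.3f} GiB = {vol / 2**10:.0f} KiB")
    # 1073741824 B = 1.000 GiB; 583680 B = 570 KiB; 485376 B = 474 KiB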
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:05:20.341087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.342 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.342 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:05:20.343646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:05:20.346037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.347 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.347 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
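[annotation] At this point the polling task has drained its queue: every pollster that logged "Polling pollster X" earlier now has a matching "Finished processing pollster [X]" line. A small sketch for verifying that pairing when reading an excerpt like this one; it matches the exact message text shown above and reads plain journal text from stdin:

    # Pair "Polling pollster X" with "Finished polling pollster X" lines
    # in a journal excerpt, flagging any pollster that never finished.
    import re
    import sys

    start = re.compile(r"Polling pollster (\S+) in the context")
    done = re.compile(r"Finished polling pollster (\S+) in the context")

    open_polls = set()
    for line in sys.stdin:
        if m := start.search(line):
            open_polls.add(m.group(1))
        elif m := done.search(line):
            open_polls.discard(m.group(1))
    print("unfinished pollsters:", sorted(open_polls) or "none")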
Dec 03 02:05:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:20 compute-0 podman[429061]: 2025-12-03 02:05:20.887053362 +0000 UTC m=+0.130425909 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:05:20 compute-0 podman[429063]: 2025-12-03 02:05:20.884180571 +0000 UTC m=+0.114403987 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:05:20 compute-0 podman[429062]: 2025-12-03 02:05:20.918295243 +0000 UTC m=+0.151669008 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm)
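[annotation] The health_status=healthy events above are podman healthchecks, wired up through the 'healthcheck' key ('test' command plus 'mount') in each container's config_data. One way to read the same state interactively is podman inspect; the Go-template path below is the podman 4.x layout, though older releases exposed it as .State.Healthcheck.Status:

    import subprocess

    def health_status(container):
        # .State.Health.Status on podman 4.x (this host logs v4.9.3);
        # older podman used .State.Healthcheck.Status instead.
        out = subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", container],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    print(health_status("ceilometer_agent_compute"))  # e.g. "healthy"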
Dec 03 02:05:22 compute-0 ceph-mon[192821]: pgmap v1523: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:23 compute-0 nova_compute[351485]: 2025-12-03 02:05:23.123 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:24 compute-0 ceph-mon[192821]: pgmap v1524: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:24 compute-0 nova_compute[351485]: 2025-12-03 02:05:24.180 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:25 compute-0 podman[429119]: 2025-12-03 02:05:25.881401126 +0000 UTC m=+0.123121083 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 03 02:05:26 compute-0 ceph-mon[192821]: pgmap v1525: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:27 compute-0 ceph-mon[192821]: pgmap v1526: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:28 compute-0 nova_compute[351485]: 2025-12-03 02:05:28.127 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:05:28
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control']
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:05:29 compute-0 nova_compute[351485]: 2025-12-03 02:05:29.184 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:29 compute-0 ceph-mon[192821]: pgmap v1527: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:29 compute-0 podman[158098]: time="2025-12-03T02:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:05:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:05:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
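[annotation] The two GET lines above are the podman system service answering its REST API over a unix socket; the podman_exporter container mounts /run/podman/podman.sock for exactly this, per its config_data logged earlier. A self-contained sketch that issues the same /v4.9.3/libpod/containers/json request with only the standard library; the socket path is taken from that mount and is otherwise an assumption about this host:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket, enough for the libpod API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for ctr in json.loads(resp.read()):
        print(ctr["Names"], ctr["State"])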
Dec 03 02:05:29 compute-0 podman[429139]: 2025-12-03 02:05:29.925900923 +0000 UTC m=+0.158799618 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=)
Dec 03 02:05:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
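[annotation] These exporter errors all reduce to one condition: no ovs-appctl-style control sockets were found where openstack_network_exporter looks for them, so the ovsdb-server, ovn-northd, and dpif-netdev calls all fail (this host runs ovn_controller rather than ovn-northd, so at least that socket is genuinely absent). A quick check for the <daemon>.<pid>.ctl sockets, assuming the default OVS run directory that the exporter's volumes map to /run/openvswitch:

    import glob
    import os

    RUN_DIR = "/var/run/openvswitch"  # host side of the exporter's
                                      # /run/openvswitch volume (assumed)
    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        socks = glob.glob(os.path.join(RUN_DIR, f"{daemon}.*.ctl"))
        print(daemon, "->", socks or "no control socket found")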
Dec 03 02:05:31 compute-0 ceph-mon[192821]: pgmap v1528: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:31 compute-0 podman[429166]: 2025-12-03 02:05:31.871461255 +0000 UTC m=+0.099805425 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:05:31 compute-0 podman[429164]: 2025-12-03 02:05:31.871587049 +0000 UTC m=+0.108129780 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:05:31 compute-0 podman[429159]: 2025-12-03 02:05:31.873104801 +0000 UTC m=+0.115026794 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Dec 03 02:05:31 compute-0 podman[429158]: 2025-12-03 02:05:31.889045321 +0000 UTC m=+0.151739300 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
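The three health_status events above are podman healthcheck timer results: each container's config_data mounts a test script from /var/lib/openstack/healthchecks, and podman records the status (healthy/unhealthy) together with a failing streak. A minimal sketch of reading the same field podman stores, assuming podman is on PATH and reusing the container names from the log:

    # Poll the health data that the health_status events above record.
    import json
    import subprocess

    def health(name: str) -> dict:
        # podman inspect exposes State.Health with Status and FailingStreak,
        # matching health_status= and health_failing_streak= in the journal.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    for name in ("node_exporter", "openstack_network_exporter", "ovn_controller"):
        h = health(name)
        print(name, h["Status"], h["FailingStreak"])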
Dec 03 02:05:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:33 compute-0 nova_compute[351485]: 2025-12-03 02:05:33.130 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:33 compute-0 ceph-mon[192821]: pgmap v1529: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:34 compute-0 nova_compute[351485]: 2025-12-03 02:05:34.188 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:34 compute-0 sudo[429242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:34 compute-0 sudo[429242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:34 compute-0 sudo[429242]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:34 compute-0 sudo[429267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:05:34 compute-0 sudo[429267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:34 compute-0 sudo[429267]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:34 compute-0 sudo[429292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:34 compute-0 sudo[429292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:34 compute-0 sudo[429292]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:35 compute-0 sudo[429317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 02:05:35 compute-0 sudo[429317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:35 compute-0 ceph-mon[192821]: pgmap v1530: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:36 compute-0 podman[429412]: 2025-12-03 02:05:36.06047247 +0000 UTC m=+0.168012039 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:05:36 compute-0 podman[429412]: 2025-12-03 02:05:36.183395286 +0000 UTC m=+0.290934855 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 02:05:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:37 compute-0 sudo[429317]: pam_unix(sudo:session): session closed for user root
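The sudo bursts surrounding each cephadm call follow the fixed per-host probe sequence the mgr's cephadm module uses: /bin/true as a sudo/connectivity check, /bin/which python3 to locate an interpreter, another /bin/true, and then python3 against the content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ (here with the ls, gather-facts and ceph-volume subcommands). The --image and --timeout 895 arguments are passed through from the orchestrator's configuration.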
Dec 03 02:05:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:05:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:05:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:37 compute-0 sudo[429566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:37 compute-0 sudo[429566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:37 compute-0 sudo[429566]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:37 compute-0 ceph-mon[192821]: pgmap v1531: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:37 compute-0 sudo[429591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:05:37 compute-0 sudo[429591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:37 compute-0 sudo[429591]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:37 compute-0 sudo[429616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:37 compute-0 sudo[429616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:37 compute-0 sudo[429616]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:38 compute-0 sudo[429641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:05:38 compute-0 sudo[429641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:38 compute-0 nova_compute[351485]: 2025-12-03 02:05:38.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
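The per-pool arithmetic in the pg_autoscaler lines above can be reproduced from the logged numbers alone: pg target ≈ capacity ratio × bias × (PGs per OSD × number of OSDs), then quantized to a power of two subject to a per-pool floor. A rough reconstruction, assuming this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100 (both assumptions, not stated in the log; the real module also folds in replica size and pg_num bounds):

    # Reconstruction of the pg_autoscaler numbers logged above.
    # Assumed, not in the log: 3 OSDs, mon_target_pg_per_osd = 100.
    OSDS = 3
    TARGET_PG_PER_OSD = 100

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * OSDS * TARGET_PG_PER_OSD

    def quantize(target: float, floor: int) -> int:
        # Round up to a power of two, never below the pool's floor.
        n = floor
        while n < target:
            n *= 2
        return n

    # "Pool 'cephfs.cephfs.meta' ... pg target 0.0006104707950771635"
    print(pg_target(5.087256625643029e-07, 4.0))
    # "Pool 'vms' ... pg target 0.6632383644266459 quantized to 32 (current 32)"
    print(pg_target(0.0022107945480888194, 1.0), quantize(0.6632383644266459, 32))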
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:38 compute-0 sudo[429641]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d00633c2-6072-429f-b182-134fb661ac3c does not exist
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev feb6d4c1-e25d-406d-9e71-1767d6055bab does not exist
Dec 03 02:05:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cea3d146-8bc8-4f7d-8d65-8944f8afbde6 does not exist
Dec 03 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:05:39 compute-0 sudo[429698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:39 compute-0 sudo[429698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:39 compute-0 sudo[429698]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:39 compute-0 sudo[429723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:05:39 compute-0 nova_compute[351485]: 2025-12-03 02:05:39.193 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:39 compute-0 sudo[429723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:39 compute-0 sudo[429723]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:39 compute-0 sudo[429748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:39 compute-0 sudo[429748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:39 compute-0 sudo[429748]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:39 compute-0 sudo[429773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:05:39 compute-0 sudo[429773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:39 compute-0 ceph-mon[192821]: pgmap v1532: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.059836377 +0000 UTC m=+0.085404000 container create 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.023316977 +0000 UTC m=+0.048884660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:05:40 compute-0 systemd[1]: Started libpod-conmon-64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6.scope.
Dec 03 02:05:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.253135958 +0000 UTC m=+0.278703601 container init 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.264729975 +0000 UTC m=+0.290297578 container start 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.269869739 +0000 UTC m=+0.295437612 container attach 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:05:40 compute-0 quirky_heisenberg[429854]: 167 167
Dec 03 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.275063726 +0000 UTC m=+0.300631329 container died 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:05:40 compute-0 systemd[1]: libpod-64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6.scope: Deactivated successfully.
Dec 03 02:05:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-98bce0c036d65bccc4e4226934db27d0e11eea21d0116c09639b6769c062246d-merged.mount: Deactivated successfully.
Dec 03 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.353123197 +0000 UTC m=+0.378690800 container remove 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:05:40 compute-0 systemd[1]: libpod-conmon-64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6.scope: Deactivated successfully.
Dec 03 02:05:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.664161497 +0000 UTC m=+0.098192289 container create efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.631745594 +0000 UTC m=+0.065776446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:05:40 compute-0 systemd[1]: Started libpod-conmon-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope.
Dec 03 02:05:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.851345685 +0000 UTC m=+0.285376497 container init efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.87741012 +0000 UTC m=+0.311440922 container start efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.887402192 +0000 UTC m=+0.321433044 container attach efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:05:41 compute-0 ceph-mon[192821]: pgmap v1533: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:42 compute-0 amazing_leavitt[429894]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:05:42 compute-0 amazing_leavitt[429894]: --> relative data size: 1.0
Dec 03 02:05:42 compute-0 amazing_leavitt[429894]: --> All data devices are unavailable
Dec 03 02:05:42 compute-0 systemd[1]: libpod-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope: Deactivated successfully.
Dec 03 02:05:42 compute-0 systemd[1]: libpod-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope: Consumed 1.249s CPU time.
Dec 03 02:05:42 compute-0 podman[429878]: 2025-12-03 02:05:42.19412215 +0000 UTC m=+1.628152962 container died efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 02:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d-merged.mount: Deactivated successfully.
Dec 03 02:05:42 compute-0 podman[429878]: 2025-12-03 02:05:42.300219391 +0000 UTC m=+1.734250173 container remove efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 02:05:42 compute-0 systemd[1]: libpod-conmon-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope: Deactivated successfully.
Dec 03 02:05:42 compute-0 sudo[429773]: pam_unix(sudo:session): session closed for user root
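The "All data devices are unavailable" result from the lvm batch run above appears to be the idempotent no-op case rather than a failure: ceph-volume rejects the three LVs because they already carry prepared OSDs, which the lvm list output below confirms as osd.0 through osd.2 on ceph_vg0/ceph_vg1/ceph_vg2.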
Dec 03 02:05:42 compute-0 sudo[429935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:42 compute-0 sudo[429935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:42 compute-0 sudo[429935]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:42 compute-0 sudo[429960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:05:42 compute-0 sudo[429960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:42 compute-0 sudo[429960]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:42 compute-0 sudo[429985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:42 compute-0 sudo[429985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:42 compute-0 sudo[429985]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:42 compute-0 sudo[430010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:05:42 compute-0 sudo[430010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:43 compute-0 nova_compute[351485]: 2025-12-03 02:05:43.136 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.357516866 +0000 UTC m=+0.075642264 container create 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 03 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.317037755 +0000 UTC m=+0.035163203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:05:43 compute-0 systemd[1]: Started libpod-conmon-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope.
Dec 03 02:05:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.492246245 +0000 UTC m=+0.210371603 container init 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.501708502 +0000 UTC m=+0.219833900 container start 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.507972809 +0000 UTC m=+0.226098217 container attach 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:05:43 compute-0 cool_elbakyan[430089]: 167 167
Dec 03 02:05:43 compute-0 systemd[1]: libpod-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope: Deactivated successfully.
Dec 03 02:05:43 compute-0 conmon[430089]: conmon 825616ace6ec7d8fdcae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope/container/memory.events
Dec 03 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.514494242 +0000 UTC m=+0.232619610 container died 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-68193d099d149f787ae05750eed4ab3415c454c6810de4260ee183887c98a9a7-merged.mount: Deactivated successfully.
Dec 03 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.569040131 +0000 UTC m=+0.287165489 container remove 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 02:05:43 compute-0 systemd[1]: libpod-conmon-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope: Deactivated successfully.
Dec 03 02:05:43 compute-0 ceph-mon[192821]: pgmap v1534: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:43 compute-0 podman[430112]: 2025-12-03 02:05:43.819364679 +0000 UTC m=+0.091327086 container create de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:05:43 compute-0 podman[430112]: 2025-12-03 02:05:43.778629291 +0000 UTC m=+0.050591738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:05:43 compute-0 systemd[1]: Started libpod-conmon-de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2.scope.
Dec 03 02:05:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:43 compute-0 podman[430112]: 2025-12-03 02:05:43.973213088 +0000 UTC m=+0.245175485 container init de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.003129362 +0000 UTC m=+0.275091729 container start de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.007619538 +0000 UTC m=+0.279581915 container attach de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:05:44 compute-0 nova_compute[351485]: 2025-12-03 02:05:44.198 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]: {
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:     "0": [
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:         {
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "devices": [
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "/dev/loop3"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             ],
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_name": "ceph_lv0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_size": "21470642176",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "name": "ceph_lv0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "tags": {
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cluster_name": "ceph",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.crush_device_class": "",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.encrypted": "0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osd_id": "0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.type": "block",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.vdo": "0"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             },
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "type": "block",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "vg_name": "ceph_vg0"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:         }
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:     ],
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:     "1": [
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:         {
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "devices": [
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "/dev/loop4"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             ],
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_name": "ceph_lv1",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_size": "21470642176",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "name": "ceph_lv1",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "tags": {
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cluster_name": "ceph",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.crush_device_class": "",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.encrypted": "0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osd_id": "1",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.type": "block",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.vdo": "0"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             },
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "type": "block",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "vg_name": "ceph_vg1"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:         }
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:     ],
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:     "2": [
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:         {
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "devices": [
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "/dev/loop5"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             ],
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_name": "ceph_lv2",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_size": "21470642176",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "name": "ceph_lv2",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "tags": {
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.cluster_name": "ceph",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.crush_device_class": "",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.encrypted": "0",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osd_id": "2",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.type": "block",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:                 "ceph.vdo": "0"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             },
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "type": "block",
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:             "vg_name": "ceph_vg2"
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:         }
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]:     ]
Dec 03 02:05:44 compute-0 charming_dubinsky[430128]: }
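
[annotation] The JSON printed by charming_dubinsky above is ceph-volume LVM inventory keyed by OSD id, each entry carrying the logical volume path plus ceph.* tags. A minimal parsing sketch, assuming the output has been captured to a file (the filename is hypothetical):

    import json

    # Minimal sketch: the JSON above, keyed by OSD id, with one LV record per
    # OSD; "ceph_volume_lvm_list.json" is a hypothetical capture of it.
    with open("ceph_volume_lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]
            # device path, per-OSD fsid, and the cluster fsid they belong to
            print(osd_id, lv["path"], tags["ceph.osd_fsid"], tags["ceph.cluster_fsid"])
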
Dec 03 02:05:44 compute-0 systemd[1]: libpod-de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2.scope: Deactivated successfully.
Dec 03 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.845604057 +0000 UTC m=+1.117566464 container died de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1-merged.mount: Deactivated successfully.
Dec 03 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.951181335 +0000 UTC m=+1.223143702 container remove de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:05:44 compute-0 systemd[1]: libpod-conmon-de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2.scope: Deactivated successfully.
Dec 03 02:05:45 compute-0 sudo[430010]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:45 compute-0 sudo[430148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:45 compute-0 sudo[430148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:45 compute-0 sudo[430148]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:45 compute-0 sudo[430173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:05:45 compute-0 sudo[430173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:45 compute-0 sudo[430173]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:45 compute-0 sudo[430198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:45 compute-0 sudo[430198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:45 compute-0 sudo[430198]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:45 compute-0 sudo[430223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:05:45 compute-0 sudo[430223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:45 compute-0 ceph-mon[192821]: pgmap v1535: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.164477998 +0000 UTC m=+0.108634804 container create 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.125957452 +0000 UTC m=+0.070114298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:05:46 compute-0 systemd[1]: Started libpod-conmon-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope.
Dec 03 02:05:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.336463788 +0000 UTC m=+0.280620634 container init 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.357515001 +0000 UTC m=+0.301671807 container start 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.365153657 +0000 UTC m=+0.309310523 container attach 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:05:46 compute-0 inspiring_shirley[430304]: 167 167
Dec 03 02:05:46 compute-0 systemd[1]: libpod-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope: Deactivated successfully.
Dec 03 02:05:46 compute-0 conmon[430304]: conmon 364cbeb1b9795848af09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope/container/memory.events
Dec 03 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.373628656 +0000 UTC m=+0.317785462 container died 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0859741c3500c8cd6e7c117830e9223b6be69123cdd51fd8ecf0f8c1490b876a-merged.mount: Deactivated successfully.
Dec 03 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.447648753 +0000 UTC m=+0.391805529 container remove 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:05:46 compute-0 systemd[1]: libpod-conmon-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope: Deactivated successfully.
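
[annotation] The bare "167 167" printed by inspiring_shirley above is consistent with cephadm probing the ceph uid/gid inside the image (167 is the "ceph" user in upstream containers). A hedged sketch of such a probe; the probed path /var/lib/ceph is an assumption, not taken from the log:

    import subprocess

    # Hedged sketch of a uid/gid probe; image digest copied from the log,
    # the stat target path is an assumption.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = map(int, out.split())
    print(uid, gid)  # expect "167 167" for the ceph user in upstream images
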
Dec 03 02:05:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.762154032 +0000 UTC m=+0.097699406 container create e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.717180283 +0000 UTC m=+0.052725707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:05:46 compute-0 systemd[1]: Started libpod-conmon-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope.
Dec 03 02:05:46 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.915194387 +0000 UTC m=+0.250739821 container init e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.929900622 +0000 UTC m=+0.265445986 container start e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.936239851 +0000 UTC m=+0.271785265 container attach e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:05:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:05:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903521338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:05:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:05:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903521338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:05:47 compute-0 ceph-mon[192821]: pgmap v1536: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1903521338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:05:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1903521338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
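
[annotation] The two mon commands dispatched by client.openstack above ("df" and "osd pool get-quota" on the volumes pool) can be reproduced from Python with the rados bindings. A sketch assuming python3-rados and a usable client.openstack keyring on the host:

    import json

    import rados

    # Hedged sketch: issue the same mon commands seen in the audit log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, outbuf[:80])
    cluster.shutdown()
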
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]: {
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "osd_id": 2,
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "type": "bluestore"
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:     },
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "osd_id": 1,
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "type": "bluestore"
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:     },
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "osd_id": 0,
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:         "type": "bluestore"
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]:     }
Dec 03 02:05:47 compute-0 vigorous_shannon[430345]: }
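
[annotation] The raw list output above maps each OSD uuid to its cluster fsid, backing device, and store type; the sudo line at 02:05:45 shows the exact ceph-volume invocation that produced it. A sketch that re-runs the listing and checks every OSD against the expected fsid (this calls the plain cephadm binary rather than the hash-suffixed copy the log shows, and assumes root privileges):

    import json
    import subprocess

    # Hedged sketch mirroring the logged command:
    #   cephadm ... ceph-volume --fsid <fsid> -- raw list --format json
    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID,
         "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_uuid, info in json.loads(out).items():
        assert info["ceph_fsid"] == FSID, osd_uuid
        print(info["osd_id"], info["device"], info["type"])
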
Dec 03 02:05:48 compute-0 systemd[1]: libpod-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope: Deactivated successfully.
Dec 03 02:05:48 compute-0 systemd[1]: libpod-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope: Consumed 1.097s CPU time.
Dec 03 02:05:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:48 compute-0 podman[430378]: 2025-12-03 02:05:48.111866661 +0000 UTC m=+0.055598029 container died e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:05:48 compute-0 nova_compute[351485]: 2025-12-03 02:05:48.138 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d-merged.mount: Deactivated successfully.
Dec 03 02:05:48 compute-0 podman[430378]: 2025-12-03 02:05:48.212755626 +0000 UTC m=+0.156486914 container remove e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 02:05:48 compute-0 systemd[1]: libpod-conmon-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope: Deactivated successfully.
Dec 03 02:05:48 compute-0 sudo[430223]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:05:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:05:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:48 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 23d7452f-a0db-41b6-9dba-56033c6f0700 does not exist
Dec 03 02:05:48 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 398664ad-b489-41eb-9640-bcff1c9b1f27 does not exist
Dec 03 02:05:48 compute-0 sudo[430392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:05:48 compute-0 sudo[430392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:48 compute-0 sudo[430392]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:48 compute-0 sudo[430417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:05:48 compute-0 sudo[430417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:05:48 compute-0 sudo[430417]: pam_unix(sudo:session): session closed for user root
Dec 03 02:05:49 compute-0 nova_compute[351485]: 2025-12-03 02:05:49.202 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:05:49 compute-0 ceph-mon[192821]: pgmap v1537: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:51 compute-0 ceph-mon[192821]: pgmap v1538: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:51 compute-0 podman[430445]: 2025-12-03 02:05:51.879891644 +0000 UTC m=+0.116406194 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:05:51 compute-0 podman[430443]: 2025-12-03 02:05:51.887018125 +0000 UTC m=+0.129511494 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 03 02:05:51 compute-0 podman[430444]: 2025-12-03 02:05:51.922600238 +0000 UTC m=+0.158475420 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true)
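
[annotation] The health_status=healthy events above come from podman's periodic healthchecks against the containers' configured test commands. The current status of a single container can be read back with podman inspect; a sketch, noting that the Go template path below is what recent podman releases expose (older versions used .State.Healthcheck.Status):

    import subprocess

    def health_status(name: str) -> str:
        # Hedged: ".State.Health.Status" per recent podman; treat the field
        # path as an assumption for other versions.
        return subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    # Container names taken from the healthcheck events above.
    for name in ("podman_exporter", "ovn_metadata_agent", "ceilometer_agent_compute"):
        print(name, health_status(name))
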
Dec 03 02:05:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:53 compute-0 nova_compute[351485]: 2025-12-03 02:05:53.143 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:53 compute-0 ceph-mon[192821]: pgmap v1539: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:54 compute-0 nova_compute[351485]: 2025-12-03 02:05:54.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:55 compute-0 ceph-mon[192821]: pgmap v1540: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:56 compute-0 podman[430499]: 2025-12-03 02:05:56.877203051 +0000 UTC m=+0.129040740 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:05:57 compute-0 ceph-mon[192821]: pgmap v1541: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:05:58 compute-0 nova_compute[351485]: 2025-12-03 02:05:58.145 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:05:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:59 compute-0 nova_compute[351485]: 2025-12-03 02:05:59.209 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:05:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:05:59.635 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:05:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:05:59.635 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:05:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:05:59.637 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
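
[annotation] The Acquiring/acquired/released DEBUG triple above is emitted by oslo.concurrency's lockutils around neutron's child-process check. A sketch of a synchronized section that produces the same triple; the body here is a placeholder, not neutron's actual ProcessMonitor code:

    from oslo_concurrency import lockutils

    # Hedged sketch: a lockutils-synchronized callable logs the same
    # Acquiring/acquired/released sequence seen above at DEBUG level.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # placeholder body

    _check_child_processes()
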
Dec 03 02:05:59 compute-0 ceph-mon[192821]: pgmap v1542: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:05:59 compute-0 podman[158098]: time="2025-12-03T02:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:05:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:05:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
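
[annotation] The GET requests above hit the libpod REST API on the podman socket (the podman_exporter config earlier in this log sets CONTAINER_HOST to unix:///run/podman/podman.sock). The same listing call can be made with curl; a sketch assuming curl is present and the caller can read the socket:

    import json
    import subprocess

    # Hedged sketch of the logged libpod call; URL path copied from the log,
    # the "d" hostname is the usual placeholder for unix-socket curl.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(len(json.loads(out)), "containers")
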
Dec 03 02:06:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:00 compute-0 podman[430518]: 2025-12-03 02:06:00.896203171 +0000 UTC m=+0.140397000 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:06:01 compute-0 ceph-mon[192821]: pgmap v1543: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:02 compute-0 podman[430541]: 2025-12-03 02:06:02.904801429 +0000 UTC m=+0.115856308 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 02:06:02 compute-0 podman[430539]: 2025-12-03 02:06:02.912065844 +0000 UTC m=+0.147252053 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9)
Dec 03 02:06:02 compute-0 podman[430540]: 2025-12-03 02:06:02.91619854 +0000 UTC m=+0.142027976 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:06:02 compute-0 podman[430538]: 2025-12-03 02:06:02.936461742 +0000 UTC m=+0.172731942 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec 03 02:06:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:03 compute-0 nova_compute[351485]: 2025-12-03 02:06:03.149 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:03 compute-0 ceph-mon[192821]: pgmap v1544: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:04 compute-0 nova_compute[351485]: 2025-12-03 02:06:04.213 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:05 compute-0 nova_compute[351485]: 2025-12-03 02:06:05.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:05 compute-0 ceph-mon[192821]: pgmap v1545: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.639 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.641 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.642 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:06:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:06:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301704655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.164 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
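
[annotation] nova-compute's resource audit shells out to ceph df exactly as logged above. A sketch reproducing the probe and pulling the cluster totals from the JSON; the command is copied from the log, and the "stats" field names match current ceph releases (hedge accordingly on older ones):

    import json
    import subprocess

    # Hedged sketch of the capacity probe nova-compute runs.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB avail")
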
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.395 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.395 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.397 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.405 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.406 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.407 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.416 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.417 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.418 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.426 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.427 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.427 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:06:07 compute-0 ceph-mon[192821]: pgmap v1546: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/301704655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.083 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.086 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3169MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.087 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.088 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.153 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.232 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.233 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.234 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.234 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.235 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.235 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.390 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:06:08 compute-0 sshd[113879]: drop connection #0 from [14.103.201.7]:35436 on [38.102.83.36]:22 penalty: exceeded LoginGraceTime
Dec 03 02:06:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:06:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608886054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.971 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.986 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.005 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.009 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.010 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.218 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:09 compute-0 ceph-mon[192821]: pgmap v1547: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2608886054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.423 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.424 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.426 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.427 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.427 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.431 351492 INFO nova.compute.manager [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Terminating instance
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.433 351492 DEBUG nova.compute.manager [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:06:10 compute-0 kernel: tap521d2181-8f (unregistering): left promiscuous mode
Dec 03 02:06:10 compute-0 NetworkManager[48912]: <info>  [1764727570.6326] device (tap521d2181-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:06:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:10 compute-0 ovn_controller[89134]: 2025-12-03T02:06:10Z|00050|binding|INFO|Releasing lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 from this chassis (sb_readonly=0)
Dec 03 02:06:10 compute-0 ovn_controller[89134]: 2025-12-03T02:06:10Z|00051|binding|INFO|Setting lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 down in Southbound
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.650 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:10 compute-0 ovn_controller[89134]: 2025-12-03T02:06:10Z|00052|binding|INFO|Removing iface tap521d2181-8f ovn-installed in OVS
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.655 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.666 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:09:91 192.168.0.178'], port_security=['fa:16:3e:8e:09:91 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '52862152-12c7-4236-89c3-67750ecbed7a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=521d2181-8f17-4f4d-a3a6-98de1e17b734) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.668 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 521d2181-8f17-4f4d-a3a6-98de1e17b734 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.671 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.697 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:10 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.707 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[db9b232e-e430-4a23-b457-b8dea94026f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:06:10 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 13.793s CPU time.
Dec 03 02:06:10 compute-0 systemd-machined[138558]: Machine qemu-2-instance-00000002 terminated.
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.752 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[50598518-a71e-41c9-80a3-64089deb22c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.757 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[d5787ff1-ed45-483b-bf42-9751b5ef6393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.799 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e73e6f82-da13-475f-a42d-0c35c206b48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.829 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[88efeb78-5965-4718-9061-946abe76573a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 15808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430680, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.860 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d768aa1d-c769-41ed-9a7b-02b5a2ee11ba]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430681, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430681, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.866 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.870 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.881 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.883 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.884 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.886 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.887 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.890 351492 INFO nova.virt.libvirt.driver [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance destroyed successfully.
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.891 351492 DEBUG nova.objects.instance [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 52862152-12c7-4236-89c3-67750ecbed7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.908 351492 DEBUG nova.virt.libvirt.vif [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T01:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',id=2,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T01:56:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-eunmeq81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T01:56:06Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 03 02:06:10 compute-0 nova_compute[351485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=52862152-12c7-4236-89c3-67750ecbed7a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.908 351492 DEBUG nova.network.os_vif_util [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.909 351492 DEBUG nova.network.os_vif_util [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.910 351492 DEBUG os_vif [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.913 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.914 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap521d2181-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.916 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.918 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.923 351492 INFO os_vif [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f')
Dec 03 02:06:10 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:06:10.908 351492 DEBUG nova.virt.libvirt.vif [None req-cf0a54e0-d2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.979 351492 DEBUG nova.compute.manager [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-unplugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.980 351492 DEBUG oslo_concurrency.lockutils [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.981 351492 DEBUG oslo_concurrency.lockutils [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.982 351492 DEBUG oslo_concurrency.lockutils [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.982 351492 DEBUG nova.compute.manager [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] No waiting events found dispatching network-vif-unplugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.986 351492 DEBUG nova.compute.manager [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-unplugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.004 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:11.011 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:06:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:11.012 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.016 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.038 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.038 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.294 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.295 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.295 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.745 351492 DEBUG nova.compute.manager [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.745 351492 DEBUG nova.compute.manager [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing instance network info cache due to event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.746 351492 DEBUG oslo_concurrency.lockutils [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.747 351492 DEBUG oslo_concurrency.lockutils [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.748 351492 DEBUG nova.network.neutron [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:06:11 compute-0 ceph-mon[192821]: pgmap v1548: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.273 351492 INFO nova.virt.libvirt.driver [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deleting instance files /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a_del
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.275 351492 INFO nova.virt.libvirt.driver [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deletion of /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a_del complete
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.368 351492 DEBUG nova.virt.libvirt.host [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.369 351492 INFO nova.virt.libvirt.host [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] UEFI support detected
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.372 351492 INFO nova.compute.manager [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 1.94 seconds to destroy the instance on the hypervisor.
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.373 351492 DEBUG oslo.service.loopingcall [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.373 351492 DEBUG nova.compute.manager [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.373 351492 DEBUG nova.network.neutron [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
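
The oslo.service.loopingcall line above is the retry primitive at work: nova wraps network deallocation in a FixedIntervalLoopingCall, and the loop ends when the wrapped function raises LoopingCallDone. A sketch of that mechanism, with the deallocation body as a stand-in rather than nova's actual code:

    from oslo_service import loopingcall

    def deallocate_with_retries(max_attempts=3):
        attempts = []

        def _try_once():
            attempts.append(1)
            if try_deallocate():                    # hypothetical helper
                raise loopingcall.LoopingCallDone(retvalue=True)
            if len(attempts) >= max_attempts:
                raise loopingcall.LoopingCallDone(retvalue=False)

        timer = loopingcall.FixedIntervalLoopingCall(_try_once)
        return timer.start(interval=1).wait()
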
Dec 03 02:06:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 4 op/s
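
The ceph-mon/ceph-mgr pgmap digests that recur throughout this log have a fixed shape: pg count and states, then data/used/avail sizes, optionally client I/O rates. If you need these figures programmatically, a regex over the message field recovers them; this is a sketch against the format shown here, not a Ceph API:

    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail')

    line = ('pgmap v1549: 321 pgs: 321 active+clean; 263 MiB data, '
            '357 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 4 op/s')
    m = PGMAP.search(line)
    print(m.group('pgs'), m.group('avail'))   # 321, "60 GiB"
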
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.068 351492 DEBUG nova.compute.manager [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.069 351492 DEBUG oslo_concurrency.lockutils [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.069 351492 DEBUG oslo_concurrency.lockutils [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.073 351492 DEBUG oslo_concurrency.lockutils [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.074 351492 DEBUG nova.compute.manager [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] No waiting events found dispatching network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.075 351492 WARNING nova.compute.manager [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received unexpected event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 for instance with vm_state active and task_state deleting.
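
The five lines above show nova's external-event plumbing: Neutron reports network-vif-plugged through nova-api, and the compute manager pops any waiter registered for that (instance, event) pair under a per-instance "-events" lock. Here the instance is already in task_state deleting, nothing is waiting, so the event is logged as unexpected and dropped. The prepare/pop shape, reduced to a simplified sketch (nova's real implementation lives in nova.compute.manager.InstanceEvents):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}   # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            ev = threading.Event()
            self._waiters[(uuid, name)] = ev
            return ev            # caller blocks on ev.wait(timeout)

        def pop(self, uuid, name):
            ev = self._waiters.pop((uuid, name), None)
            if ev is None:
                return None      # -> the "unexpected event" warning above
            ev.set()
            return ev
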
Dec 03 02:06:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.156 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.291 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.313 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.314 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.316 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.316 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.502 351492 DEBUG nova.network.neutron [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated VIF entry in instance network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.503 351492 DEBUG nova.network.neutron [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.526 351492 DEBUG oslo_concurrency.lockutils [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
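
The instance_info_cache payloads logged above are plain JSON, so extracting addresses from a captured line is straightforward. Given the bracketed list as a string cache_blob (a variable name assumed for the example):

    import json

    vifs = json.loads(cache_blob)
    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
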
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.655 351492 DEBUG nova.network.neutron [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.676 351492 INFO nova.compute.manager [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 1.30 seconds to deallocate network for instance.
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.731 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.732 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:13 compute-0 ceph-mon[192821]: pgmap v1549: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 4 op/s
Dec 03 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.886 351492 DEBUG oslo_concurrency.processutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:06:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:06:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195037886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.432 351492 DEBUG oslo_concurrency.processutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.446 351492 DEBUG nova.compute.provider_tree [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.481 351492 DEBUG nova.scheduler.client.report [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.517 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
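
The ceph df subprocess above (returning 0 in 0.546s) is how the resource tracker sizes DISK_GB from the RBD backend rather than the local filesystem; its totals feed the Placement inventory logged just after it. The same query, reduced to a sketch (the CLI flags are the ones logged; the JSON field names are as in current ceph df output):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] // 1024 ** 3
    avail_gb = stats['total_avail_bytes'] // 1024 ** 3
    print(total_gb, avail_gb)
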
Dec 03 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.545 351492 INFO nova.scheduler.client.report [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 52862152-12c7-4236-89c3-67750ecbed7a
Dec 03 02:06:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 239 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 341 B/s wr, 9 op/s
Dec 03 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.660 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
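
"Deleted allocations for instance ..." a few lines up corresponds to a single Placement call, DELETE /allocations/{consumer_uuid}, which returns 204 on success. A hand-rolled equivalent for illustration only (the endpoint URL and token are placeholders; nova itself goes through keystoneauth sessions):

    import requests

    TOKEN = '<keystone token>'   # placeholder
    resp = requests.delete(
        'http://placement.internal:8778/allocations/'
        '52862152-12c7-4236-89c3-67750ecbed7a',
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'placement 1.28'})
    assert resp.status_code == 204
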
Dec 03 02:06:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2195037886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:06:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:15.015 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:06:15 compute-0 ceph-mon[192821]: pgmap v1550: 321 pgs: 321 active+clean; 239 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 341 B/s wr, 9 op/s
Dec 03 02:06:15 compute-0 nova_compute[351485]: 2025-12-03 02:06:15.918 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:17 compute-0 ceph-mon[192821]: pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:18 compute-0 nova_compute[351485]: 2025-12-03 02:06:18.160 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:19 compute-0 ceph-mon[192821]: pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:20 compute-0 nova_compute[351485]: 2025-12-03 02:06:20.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:06:20 compute-0 nova_compute[351485]: 2025-12-03 02:06:20.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:06:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:20 compute-0 nova_compute[351485]: 2025-12-03 02:06:20.922 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:21 compute-0 ceph-mon[192821]: pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:22 compute-0 podman[430736]: 2025-12-03 02:06:22.864707879 +0000 UTC m=+0.116022533 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 03 02:06:22 compute-0 podman[430737]: 2025-12-03 02:06:22.880746161 +0000 UTC m=+0.125263873 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 03 02:06:22 compute-0 podman[430738]: 2025-12-03 02:06:22.891984608 +0000 UTC m=+0.123569726 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:06:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:23 compute-0 nova_compute[351485]: 2025-12-03 02:06:23.163 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:23 compute-0 ceph-mon[192821]: pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:06:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Dec 03 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.885 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727570.884012, 52862152-12c7-4236-89c3-67750ecbed7a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.886 351492 INFO nova.compute.manager [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Stopped (Lifecycle Event)
Dec 03 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.914 351492 DEBUG nova.compute.manager [None req-679c2fab-95b2-49d1-a0a1-a2b371db4d88 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.926 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:25 compute-0 ceph-mon[192821]: pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Dec 03 02:06:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Dec 03 02:06:27 compute-0 systemd[1]: Starting dnf makecache...
Dec 03 02:06:27 compute-0 podman[430791]: 2025-12-03 02:06:27.906094269 +0000 UTC m=+0.149054344 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:06:27 compute-0 ceph-mon[192821]: pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Dec 03 02:06:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:28 compute-0 nova_compute[351485]: 2025-12-03 02:06:28.167 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:28 compute-0 dnf[430811]: Metadata cache refreshed recently.
Dec 03 02:06:28 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 03 02:06:28 compute-0 systemd[1]: Finished dnf makecache.
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:06:28
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'backups']
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:06:29 compute-0 podman[158098]: time="2025-12-03T02:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:06:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:06:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
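
The two GET lines above are the podman exporter polling podman's libpod REST API over its unix socket (/run/podman/podman.sock, per the podman_exporter container config logged earlier). The same container listing with nothing but the standard library, as a sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print([c['Names'] for c in containers])
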
Dec 03 02:06:29 compute-0 ceph-mon[192821]: pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:30 compute-0 nova_compute[351485]: 2025-12-03 02:06:30.929 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
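
These recurring exporter errors are consistent with a compute-only node: openstack_network_exporter probes the ovsdb-server, ovn-northd, and PMD control sockets, but ovn-northd does not run here (only ovn-controller does), and the kernel datapath has no PMD statistics to report. A quick existence check for the sockets the messages refer to, assuming the typical locations that match the /run/openvswitch and /run/ovn mounts in the exporter's config below:

    import glob

    print(glob.glob('/run/openvswitch/*.ctl'))  # ovsdb-server / ovs-vswitchd
    print(glob.glob('/run/ovn/*.ctl'))          # ovn-controller only; no ovn-northd
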
Dec 03 02:06:31 compute-0 podman[430812]: 2025-12-03 02:06:31.893637851 +0000 UTC m=+0.146367958 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, distribution-scope=public, release-0.7.12=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible)
Dec 03 02:06:32 compute-0 ceph-mon[192821]: pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:33 compute-0 nova_compute[351485]: 2025-12-03 02:06:33.170 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:33 compute-0 podman[430833]: 2025-12-03 02:06:33.877941816 +0000 UTC m=+0.101658228 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:06:33 compute-0 podman[430832]: 2025-12-03 02:06:33.891415796 +0000 UTC m=+0.125572582 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 02:06:33 compute-0 podman[430834]: 2025-12-03 02:06:33.909112335 +0000 UTC m=+0.138085355 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 02:06:33 compute-0 podman[430831]: 2025-12-03 02:06:33.940190651 +0000 UTC m=+0.178412162 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 02:06:34 compute-0 ceph-mon[192821]: pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:35 compute-0 nova_compute[351485]: 2025-12-03 02:06:35.932 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:36 compute-0 ceph-mon[192821]: pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:38 compute-0 ceph-mon[192821]: pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:38 compute-0 nova_compute[351485]: 2025-12-03 02:06:38.174 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
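
The pg_autoscaler figures above are internally consistent: each pool's raw target works out to usage_ratio x bias x (mon_target_pg_per_osd x OSD count) before quantization to a power of two. Assuming the default mon_target_pg_per_osd of 100 and three OSDs (an assumption; only their product, 300, is actually pinned down by the logged numbers), the 'vms' line reproduces exactly:

    usage_ratio = 0.0016571738458032168      # 'vms' line above
    bias = 1.0
    target_pg_per_osd, num_osds = 100, 3     # assumed split of the factor 300
    pg_target = usage_ratio * bias * target_pg_per_osd * num_osds
    print(pg_target)                         # ~0.497152, matching "pg target 0.4971..."

The cephfs.cephfs.meta line checks out the same way with its bias of 4.0.
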
Dec 03 02:06:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:40 compute-0 ceph-mon[192821]: pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:40 compute-0 nova_compute[351485]: 2025-12-03 02:06:40.935 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:41 compute-0 sshd-session[430916]: Invalid user tuan from 154.113.10.113 port 51084
Dec 03 02:06:41 compute-0 sshd-session[430916]: Received disconnect from 154.113.10.113 port 51084:11: Bye Bye [preauth]
Dec 03 02:06:41 compute-0 sshd-session[430916]: Disconnected from invalid user tuan 154.113.10.113 port 51084 [preauth]
Dec 03 02:06:42 compute-0 ceph-mon[192821]: pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:43 compute-0 nova_compute[351485]: 2025-12-03 02:06:43.177 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:44 compute-0 ceph-mon[192821]: pgmap v1564: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:45 compute-0 nova_compute[351485]: 2025-12-03 02:06:45.938 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:45 compute-0 ovn_controller[89134]: 2025-12-03T02:06:45Z|00053|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Dec 03 02:06:46 compute-0 ceph-mon[192821]: pgmap v1565: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:06:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4084479757' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:06:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:06:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4084479757' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:06:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4084479757' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:06:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4084479757' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
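
The audited df and osd pool get-quota commands from 192.168.122.10 look like a volume service checking pool capacity through librados; the JSON in cmd=[...] is exactly what the Python binding sends. A minimal equivalent, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ret, out, err = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        print(json.loads(out)['stats']['total_avail_bytes'])
    finally:
        cluster.shutdown()
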
Dec 03 02:06:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:48 compute-0 ceph-mon[192821]: pgmap v1566: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:48 compute-0 nova_compute[351485]: 2025-12-03 02:06:48.180 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:48 compute-0 sudo[430918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:48 compute-0 sudo[430918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:48 compute-0 sudo[430918]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:48 compute-0 sudo[430943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:06:48 compute-0 sudo[430943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:48 compute-0 sudo[430943]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:49 compute-0 sudo[430968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:49 compute-0 sudo[430968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:49 compute-0 sudo[430968]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:49 compute-0 sudo[430993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:06:49 compute-0 sudo[430993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:49 compute-0 sudo[430993]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:06:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 67998f89-8d2c-4e87-b5b6-4d19691e5b49 does not exist
Dec 03 02:06:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 46ff17bb-7759-4d0a-991d-a8b9b4001883 does not exist
Dec 03 02:06:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b5ea6251-dd32-41c3-aecd-af67f98dced8 does not exist
Dec 03 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:06:50 compute-0 sudo[431049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:50 compute-0 sudo[431049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:50 compute-0 ceph-mon[192821]: pgmap v1567: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
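Before touching OSDs the mgr refreshes the two artifacts every ceph-volume container needs: config generate-minimal-conf yields the stripped-down ceph.conf that cephadm mounts at /etc/ceph/ceph.conf, and auth get client.bootstrap-osd fetches the keyring ceph-volume authenticates with. The minimal conf has roughly this shape (fsid taken from this cluster; the mon address is not spelled out in this log, so it stays elided):

    [global]
        fsid = 3765feb2-36f8-5b86-b74c-64e9221f9c4c
        mon_host = <mon address list>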
Dec 03 02:06:50 compute-0 sudo[431049]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:50 compute-0 sudo[431074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:06:50 compute-0 sudo[431074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:50 compute-0 sudo[431074]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:50 compute-0 sudo[431099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:50 compute-0 sudo[431099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:50 compute-0 sudo[431099]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:50 compute-0 sudo[431124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:06:50 compute-0 sudo[431124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
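This is the actual OSD creation pass: ceph-volume lvm batch is pointed at three pre-built logical volumes rather than raw disks, the owning spec travels in CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group, the bootstrap config and keyring arrive on stdin (--config-json -), and --no-systemd is set because unit files are managed by cephadm on the host, not from inside the container. For orientation, a hypothetical OSD service spec that would produce this exact call (reconstructed, not taken from this log):

    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0
          - /dev/ceph_vg1/ceph_lv1
          - /dev/ceph_vg2/ceph_lv2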
Dec 03 02:06:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:50 compute-0 nova_compute[351485]: 2025-12-03 02:06:50.940 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.109715845 +0000 UTC m=+0.091353057 container create 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.066000042 +0000 UTC m=+0.047637314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:06:51 compute-0 systemd[1]: Started libpod-conmon-856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640.scope.
Dec 03 02:06:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.271896439 +0000 UTC m=+0.253533671 container init 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.29108655 +0000 UTC m=+0.272723742 container start 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.299137747 +0000 UTC m=+0.280775039 container attach 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:06:51 compute-0 heuristic_carson[431204]: 167 167
Dec 03 02:06:51 compute-0 systemd[1]: libpod-856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640.scope: Deactivated successfully.
Dec 03 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.306396741 +0000 UTC m=+0.288033953 container died 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ffcdb462fc309f4ee14db168fefe15555929477136c00bc36a5fe3fdfc2307b-merged.mount: Deactivated successfully.
Dec 03 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.37442149 +0000 UTC m=+0.356058682 container remove 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:06:51 compute-0 systemd[1]: libpod-conmon-856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640.scope: Deactivated successfully.
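The short-lived heuristic_carson container exists only to print "167 167": the uid and gid of the ceph user baked into the image (167 is the fixed ceph uid/gid on RHEL-family builds), which cephadm needs so it can chown what it bind-mounts. A likely-equivalent probe, assuming podman and the same image digest:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # stat a ceph-owned path inside the image to learn its uid/gid:
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: 167 167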
Dec 03 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.639663809 +0000 UTC m=+0.081565641 container create 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.607946335 +0000 UTC m=+0.049848207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:06:51 compute-0 systemd[1]: Started libpod-conmon-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope.
Dec 03 02:06:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
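The xfs lines are informational, not errors: the overlay this container mounts sits on an xfs filesystem created without the bigtime feature, so inode timestamps saturate at 0x7fffffff, the classic 2038 cutoff (xfs_info on the backing mount would report bigtime=0). Decoding the limit:

    import datetime
    print(datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00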
Dec 03 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.801351038 +0000 UTC m=+0.243252930 container init 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.818313327 +0000 UTC m=+0.260215159 container start 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.82587424 +0000 UTC m=+0.267776082 container attach 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:06:52 compute-0 ceph-mon[192821]: pgmap v1568: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:53 compute-0 relaxed_torvalds[431245]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:06:53 compute-0 relaxed_torvalds[431245]: --> relative data size: 1.0
Dec 03 02:06:53 compute-0 relaxed_torvalds[431245]: --> All data devices are unavailable
Dec 03 02:06:53 compute-0 systemd[1]: libpod-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope: Deactivated successfully.
Dec 03 02:06:53 compute-0 systemd[1]: libpod-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope: Consumed 1.205s CPU time.
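relaxed_torvalds is the batch run itself, and its three lines are the expected idempotent outcome: ceph-volume receives 0 physical and 3 LVM data devices and rejects all of them as unavailable, consistent with the lvm list dump further below showing each LV already tagged for osd.0 through osd.2. Nothing is created, and the container exits after roughly 1.2 s of CPU time.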
Dec 03 02:06:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:53 compute-0 nova_compute[351485]: 2025-12-03 02:06:53.183 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:53 compute-0 podman[431275]: 2025-12-03 02:06:53.21001437 +0000 UTC m=+0.060940819 container died 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb-merged.mount: Deactivated successfully.
Dec 03 02:06:53 compute-0 podman[431275]: 2025-12-03 02:06:53.2826952 +0000 UTC m=+0.133621599 container remove 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:06:53 compute-0 systemd[1]: libpod-conmon-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope: Deactivated successfully.
Dec 03 02:06:53 compute-0 podman[431274]: 2025-12-03 02:06:53.300317577 +0000 UTC m=+0.155050384 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:06:53 compute-0 podman[431277]: 2025-12-03 02:06:53.30504675 +0000 UTC m=+0.137974492 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:06:53 compute-0 podman[431276]: 2025-12-03 02:06:53.315268618 +0000 UTC m=+0.157981046 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
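The three health_status records are unrelated to the Ceph flow: podman periodically runs each container's configured healthcheck (the test/mount pair visible in config_data, e.g. /openstack/healthcheck for ovn_metadata_agent) and records the outcome, here healthy with a failing streak of 0 for all three agents.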
Dec 03 02:06:53 compute-0 sudo[431124]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:53 compute-0 sudo[431343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:53 compute-0 sudo[431343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:53 compute-0 sudo[431343]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:53 compute-0 sudo[431368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:06:53 compute-0 sudo[431368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:53 compute-0 sudo[431368]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:53 compute-0 sudo[431393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:53 compute-0 sudo[431393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:53 compute-0 sudo[431393]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:53 compute-0 sudo[431418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:06:53 compute-0 sudo[431418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
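With creation a no-op, cephadm takes inventory instead: the same wrapper now runs ceph-volume lvm list --format json, and its output, a JSON object keyed by OSD id, is what confident_bhabha prints at 02:06:56 below (see the parsing sketch after that dump).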
Dec 03 02:06:54 compute-0 ceph-mon[192821]: pgmap v1569: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.525810553 +0000 UTC m=+0.100075113 container create bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.487948235 +0000 UTC m=+0.062212835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:06:54 compute-0 systemd[1]: Started libpod-conmon-bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705.scope.
Dec 03 02:06:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:06:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.685340782 +0000 UTC m=+0.259605372 container init bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.702829395 +0000 UTC m=+0.277093945 container start bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.708947968 +0000 UTC m=+0.283212578 container attach bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 02:06:54 compute-0 sweet_dhawan[431497]: 167 167
Dec 03 02:06:54 compute-0 systemd[1]: libpod-bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705.scope: Deactivated successfully.
Dec 03 02:06:54 compute-0 podman[431502]: 2025-12-03 02:06:54.814220986 +0000 UTC m=+0.071972700 container died bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-dff4facf74d5cbab27c1877fc6af51cd92218fe53c0856e17f800f3338024796-merged.mount: Deactivated successfully.
Dec 03 02:06:54 compute-0 podman[431502]: 2025-12-03 02:06:54.900617703 +0000 UTC m=+0.158369317 container remove bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:06:54 compute-0 systemd[1]: libpod-conmon-bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705.scope: Deactivated successfully.
Dec 03 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.219052963 +0000 UTC m=+0.093543599 container create 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.175673829 +0000 UTC m=+0.050164495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:06:55 compute-0 systemd[1]: Started libpod-conmon-39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51.scope.
Dec 03 02:06:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.382067849 +0000 UTC m=+0.256558485 container init 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.402480675 +0000 UTC m=+0.276971311 container start 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.407798395 +0000 UTC m=+0.282289111 container attach 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:06:55 compute-0 nova_compute[351485]: 2025-12-03 02:06:55.942 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:56 compute-0 ceph-mon[192821]: pgmap v1570: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.215818) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616215869, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3476095, "memory_usage": 3528472, "flush_reason": "Manual Compaction"}
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616242232, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3410257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30133, "largest_seqno": 32188, "table_properties": {"data_size": 3400745, "index_size": 6070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18769, "raw_average_key_size": 20, "raw_value_size": 3381998, "raw_average_value_size": 3624, "num_data_blocks": 269, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727384, "oldest_key_time": 1764727384, "file_creation_time": 1764727616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 26482 microseconds, and 13577 cpu microseconds.
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.242296) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3410257 bytes OK
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.242317) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.244688) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.244704) EVENT_LOG_v1 {"time_micros": 1764727616244699, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.244722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3467476, prev total WAL file size 3467476, number of live WAL files 2.
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.246075) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3330KB)], [68(7259KB)]
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616246123, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10844140, "oldest_snapshot_seqno": -1}
Dec 03 02:06:56 compute-0 confident_bhabha[431536]: {
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:     "0": [
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:         {
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "devices": [
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "/dev/loop3"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             ],
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_name": "ceph_lv0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_size": "21470642176",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "name": "ceph_lv0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "tags": {
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cluster_name": "ceph",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.crush_device_class": "",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.encrypted": "0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osd_id": "0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.type": "block",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.vdo": "0"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             },
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "type": "block",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "vg_name": "ceph_vg0"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:         }
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:     ],
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:     "1": [
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:         {
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "devices": [
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "/dev/loop4"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             ],
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_name": "ceph_lv1",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_size": "21470642176",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "name": "ceph_lv1",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "tags": {
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cluster_name": "ceph",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.crush_device_class": "",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.encrypted": "0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osd_id": "1",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.type": "block",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.vdo": "0"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             },
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "type": "block",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "vg_name": "ceph_vg1"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:         }
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:     ],
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:     "2": [
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:         {
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "devices": [
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "/dev/loop5"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             ],
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_name": "ceph_lv2",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_size": "21470642176",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "name": "ceph_lv2",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "tags": {
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.cluster_name": "ceph",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.crush_device_class": "",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.encrypted": "0",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osd_id": "2",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.type": "block",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:                 "ceph.vdo": "0"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             },
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "type": "block",
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:             "vg_name": "ceph_vg2"
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:         }
Dec 03 02:06:56 compute-0 confident_bhabha[431536]:     ]
Dec 03 02:06:56 compute-0 confident_bhabha[431536]: }
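A minimal sketch for consuming that dump, mapping each OSD id to its backing LV and OSD fsid; it assumes the JSON object above is supplied on stdin:

    import json, sys

    report = json.load(sys.stdin)  # the object printed by confident_bhabha
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 551e0f4a-0b7e-47cf-9522-b82f94d4038c
    # 1 /dev/ceph_vg1/ceph_lv1 38b78a6e-cf5e-4c74-a51c-1bb51cf53a18
    # 2 /dev/ceph_vg2/ceph_lv2 2ebf7eac-7883-4286-84a2-653e10a1ae8a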
Dec 03 02:06:56 compute-0 systemd[1]: libpod-39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51.scope: Deactivated successfully.
Dec 03 02:06:56 compute-0 podman[431522]: 2025-12-03 02:06:56.29742069 +0000 UTC m=+1.171911356 container died 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5357 keys, 9083373 bytes, temperature: kUnknown
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616305211, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9083373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9046653, "index_size": 22210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 134300, "raw_average_key_size": 25, "raw_value_size": 8948848, "raw_average_value_size": 1670, "num_data_blocks": 917, "num_entries": 5357, "num_filter_entries": 5357, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.305507) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9083373 bytes
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.309190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.3 rd, 153.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5871, records dropped: 514 output_compression: NoCompression
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.309221) EVENT_LOG_v1 {"time_micros": 1764727616309206, "job": 38, "event": "compaction_finished", "compaction_time_micros": 59173, "compaction_time_cpu_micros": 27398, "output_level": 6, "num_output_files": 1, "total_output_size": 9083373, "num_input_records": 5871, "num_output_records": 5357, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616310621, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616313656, "job": 38, "event": "table_file_deletion", "file_number": 68}
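The mon's RocksDB lines above are self-describing: job 38 compacted one L0 file and one L6 file into a single 9,083,373-byte L6 table (the reported write-amplify of 2.7 is roughly 8.7 MB written per 3.3 MB of new L0 input), then deleted the input SSTs. Each EVENT_LOG_v1 payload is plain JSON after the marker, so it can be mined straight out of a captured journal; a sketch, with a hypothetical log filename:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(path):
        # Yield every EVENT_LOG_v1 payload embedded in the journal lines.
        with open(path) as f:
            for line in f:
                m = EVENT.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in rocksdb_events("compute-0-journal.log"):
        if ev.get("event") == "compaction_finished":
            print("job", ev["job"], "-> L%d" % ev["output_level"],
                  ev["total_output_size"], "bytes in",
                  ev["compaction_time_micros"], "us")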
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.245912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.313996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1-merged.mount: Deactivated successfully.
Dec 03 02:06:56 compute-0 podman[431522]: 2025-12-03 02:06:56.407245017 +0000 UTC m=+1.281735653 container remove 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:06:56 compute-0 systemd[1]: libpod-conmon-39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51.scope: Deactivated successfully.
Dec 03 02:06:56 compute-0 sudo[431418]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:56 compute-0 sudo[431560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:56 compute-0 sudo[431560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:56 compute-0 sudo[431560]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:56 compute-0 sudo[431585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:06:56 compute-0 sudo[431585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:56 compute-0 sudo[431585]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:56 compute-0 sudo[431610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:56 compute-0 sudo[431610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:56 compute-0 sudo[431610]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:56 compute-0 sudo[431635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:06:56 compute-0 sudo[431635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:57 compute-0 ceph-mon[192821]: pgmap v1571: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.584895196 +0000 UTC m=+0.099922909 container create 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.54889698 +0000 UTC m=+0.063924703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:06:57 compute-0 systemd[1]: Started libpod-conmon-01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247.scope.
Dec 03 02:06:57 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.712162664 +0000 UTC m=+0.227190447 container init 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.724356928 +0000 UTC m=+0.239384641 container start 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.731398307 +0000 UTC m=+0.246426080 container attach 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 02:06:57 compute-0 vibrant_snyder[431715]: 167 167
Dec 03 02:06:57 compute-0 systemd[1]: libpod-01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247.scope: Deactivated successfully.
Dec 03 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.735081061 +0000 UTC m=+0.250108744 container died 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-07b3c40a97966b8baad31afc8a22e026d9bcbfd703d892c17042be1ce586eb5c-merged.mount: Deactivated successfully.
Dec 03 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.810461066 +0000 UTC m=+0.325488779 container remove 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:06:57 compute-0 systemd[1]: libpod-conmon-01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247.scope: Deactivated successfully.
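The one-second vibrant_snyder container whose only output was `167 167` looks like cephadm's uid/gid probe: running `stat` on /var/lib/ceph inside the image to learn which uid/gid the ceph user maps to (167 on these images). That reading is an inference from the output, not stated in the log; a sketch reproducing the probe with podman:

    import subprocess

    # Image digest copied from the podman events above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Assumption: the probe is `stat -c '%u %g' /var/lib/ceph` run inside
    # the image; the entrypoint override below mirrors that.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = out.stdout.split()
    print(uid, gid)   # expected "167 167", matching the container output above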
Dec 03 02:06:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.15494597 +0000 UTC m=+0.095611647 container create 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:06:58 compute-0 nova_compute[351485]: 2025-12-03 02:06:58.187 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.124145842 +0000 UTC m=+0.064811519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:06:58 compute-0 systemd[1]: Started libpod-conmon-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope.
Dec 03 02:06:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.303429147 +0000 UTC m=+0.244094804 container init 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.332777345 +0000 UTC m=+0.273442972 container start 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.338508526 +0000 UTC m=+0.279174163 container attach 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 02:06:58 compute-0 podman[431752]: 2025-12-03 02:06:58.350202366 +0000 UTC m=+0.126841867 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:06:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]: {
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "osd_id": 2,
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "type": "bluestore"
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:     },
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "osd_id": 1,
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "type": "bluestore"
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:     },
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "osd_id": 0,
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:         "type": "bluestore"
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]:     }
Dec 03 02:06:59 compute-0 compassionate_einstein[431760]: }
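This second probe container printed the result of `ceph-volume raw list --format json` (the command is visible in the sudo/cephadm line at 02:06:56): a map of OSD uuid to fsid, backing device, id, and objectstore type. A sketch that reduces it to an osd_id-to-device map, again assuming the output was captured to a hypothetical file:

    import json

    # Hypothetical capture of the container stdout shown above.
    with open("ceph-volume-raw-list.json") as f:
        raw = json.load(f)

    by_osd = {e["osd_id"]: e["device"] for e in raw.values()}
    print(by_osd)   # {2: '/dev/mapper/ceph_vg2-ceph_lv2', 1: ..., 0: ...}

    # All three OSDs should belong to this deployment's fsid.
    assert {e["ceph_fsid"] for e in raw.values()} == \
        {"3765feb2-36f8-5b86-b74c-64e9221f9c4c"}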
Dec 03 02:06:59 compute-0 systemd[1]: libpod-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope: Deactivated successfully.
Dec 03 02:06:59 compute-0 podman[431738]: 2025-12-03 02:06:59.585114958 +0000 UTC m=+1.525780635 container died 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:06:59 compute-0 systemd[1]: libpod-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope: Consumed 1.239s CPU time.
Dec 03 02:06:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:59.636 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:06:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:59.637 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:06:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6-merged.mount: Deactivated successfully.
Dec 03 02:06:59 compute-0 podman[431738]: 2025-12-03 02:06:59.693489584 +0000 UTC m=+1.634155231 container remove 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:06:59 compute-0 systemd[1]: libpod-conmon-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope: Deactivated successfully.
Dec 03 02:06:59 compute-0 ceph-mon[192821]: pgmap v1572: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:06:59 compute-0 podman[158098]: time="2025-12-03T02:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:06:59 compute-0 sudo[431635]: pam_unix(sudo:session): session closed for user root
Dec 03 02:06:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:06:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:06:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:06:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:06:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8646 "" "Go-http-client/1.1"
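The podman[158098] lines are the podman system service answering REST calls on its unix socket; `/v4.9.3/libpod/...` is the versioned libpod API, and the `@ - - [...]` line is podman's own HTTP access-log format. A minimal client for the same containers/json endpoint, assuming the default root socket path /run/podman/podman.sock (the path itself is not shown in the log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket instead of TCP.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")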
Dec 03 02:06:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:06:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7c441cb-0e37-4a51-b041-c3b2064a8238 does not exist
Dec 03 02:06:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3a1c9c70-4c1f-44ba-82f0-6e2e1e83b2c9 does not exist
Dec 03 02:06:59 compute-0 sudo[431818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:06:59 compute-0 sudo[431818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:06:59 compute-0 sudo[431818]: pam_unix(sudo:session): session closed for user root
Dec 03 02:07:00 compute-0 sudo[431843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:07:00 compute-0 sudo[431843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:07:00 compute-0 sudo[431843]: pam_unix(sudo:session): session closed for user root
Dec 03 02:07:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:07:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:07:00 compute-0 nova_compute[351485]: 2025-12-03 02:07:00.947 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
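These exporter errors are expected on a compute node: appctl-style calls need the target daemon's *.ctl control socket, and neither ovn-northd nor a userspace (dpif-netdev) datapath runs here, so those probes fail every scrape. A quick check of which control sockets actually exist, assuming the conventional OVS/OVN run directories:

    import glob

    # Conventional run directories; an assumption, adjust if the
    # deployment relocates them.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")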
Dec 03 02:07:01 compute-0 ceph-mon[192821]: pgmap v1573: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:02 compute-0 podman[431868]: 2025-12-03 02:07:02.904982953 +0000 UTC m=+0.148745065 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, container_name=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:07:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:03 compute-0 nova_compute[351485]: 2025-12-03 02:07:03.190 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:03 compute-0 ceph-mon[192821]: pgmap v1574: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:04 compute-0 podman[431888]: 2025-12-03 02:07:04.866986858 +0000 UTC m=+0.110177718 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:07:04 compute-0 podman[431887]: 2025-12-03 02:07:04.878247385 +0000 UTC m=+0.125761337 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec 03 02:07:04 compute-0 podman[431889]: 2025-12-03 02:07:04.882205967 +0000 UTC m=+0.115117427 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 03 02:07:04 compute-0 podman[431886]: 2025-12-03 02:07:04.911645327 +0000 UTC m=+0.164957613 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
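The health_status=healthy events above are podman healthchecks firing for the edpm-managed containers; each config_data blob defines the probe ('test': '/openstack/healthcheck ...'). The same state can be read back on demand; a sketch using `podman inspect` for the containers named in these events:

    import json
    import subprocess

    def health(name):
        # Read the health block podman maintains for a container.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    for name in ("ceilometer_agent_ipmi", "kepler", "node_exporter",
                 "openstack_network_exporter", "multipathd", "ovn_controller"):
        h = health(name)
        print(name, h.get("Status") if h else "no healthcheck")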
Dec 03 02:07:05 compute-0 ceph-mon[192821]: pgmap v1575: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:05 compute-0 nova_compute[351485]: 2025-12-03 02:07:05.949 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.614 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:07:07 compute-0 ceph-mon[192821]: pgmap v1576: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.194 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.613 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.614 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.719 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.720 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.721 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.722 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.723 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:07:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:07:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730002295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.234 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
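nova-compute sizes its RBD-backed storage by shelling out to the exact command logged above, which the mon's audit channel then records as a df dispatch from client.openstack. Replaying it and reading the cluster totals is straightforward; the JSON key names below match current `ceph df` output but treat them as an assumption across releases, and the client.openstack keyring must be readable:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)["stats"]
    print("%.2f GiB avail of %.2f GiB" %
          (stats["total_avail_bytes"] / 2**30, stats["total_bytes"] / 2**30))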
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.362 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.363 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.364 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.376 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.377 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.379 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.388 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.389 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.389 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
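The repeated "skipping disk ... as it does not have a path" lines are benign: these instances' disks are rbd-backed, so their libvirt XML uses <disk type="network"> with no file/dev/path attribute for the local-disk audit to measure. A sketch of that check against an illustrative domain snippet (the pool name and XML layout are assumptions, not read from this host):

    import xml.etree.ElementTree as ET

    domain_xml = """
    <domain><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="vms/9182286b-5a08-4961-b4bb-c0e2f05746f7_disk"/>
        <target dev="vda" bus="virtio"/>
      </disk>
      <disk type="file" device="disk">
        <source file="/var/lib/nova/instances/example/disk"/>
        <target dev="vdb" bus="virtio"/>
      </disk>
    </devices></domain>"""

    for disk in ET.fromstring(domain_xml).iter("disk"):
        src = disk.find("source")
        path = None
        if src is not None:
            path = src.get("file") or src.get("dev") or src.get("path")
        dev = disk.find("target").get("dev")
        print(("skipping" if path is None else "sizing"), dev, path or "(no path)")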
Dec 03 02:07:09 compute-0 ceph-mon[192821]: pgmap v1577: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2730002295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.019 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.020 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3406MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.021 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.021 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.271 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.271 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.272 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.272 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.273 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.497 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:07:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.952 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:07:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242033160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.054 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
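The pair of processutils lines above bracket the resource tracker's disk probe: it shells out to ceph df --format=json and parses the reply. A minimal sketch of the same call through oslo.concurrency, assuming a reachable cluster and reusing the client name and conf path from the log; the stats field names are as in recent Ceph releases:

    import json
    from oslo_concurrency import processutils

    # Same command as logged above; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # Mirrors the "60 GiB / 60 GiB avail" pgmap lines from ceph-mgr.
    print(stats['total_avail_bytes'], '/', stats['total_bytes'])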
Dec 03 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.067 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.089 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
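Placement treats usable capacity as (total - reserved) * allocation_ratio, so the inventory logged above works out to 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk; a quick check:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2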
Dec 03 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.092 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.092 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
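The Acquiring/acquired/released triple above, with its waited/held timings, is the standard oslo.concurrency fair-lock idiom; the tracker serializes _update_available_resource behind the "compute_resources" lock. A minimal sketch of the same pattern, with a hypothetical function body:

    from oslo_concurrency import lockutils

    # synchronized()'s inner wrapper emits the "Acquiring lock" /
    # "Lock ... acquired ... waited" / "Lock ... released ... held"
    # debug lines seen above.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # mutate the tracker's shared state here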
Dec 03 02:07:11 compute-0 ceph-mon[192821]: pgmap v1578: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1242033160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:07:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.057 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.057 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.058 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:07:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.196 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.712 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.713 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.714 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:07:13 compute-0 ceph-mon[192821]: pgmap v1579: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.348 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.371 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.372 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
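The network_info blob cached above is a list of VIF dicts; extracting the fixed and floating addresses is plain traversal. A sketch against the structure exactly as logged for instance b43e79bd-550f-42f8-9aa7-980b6bca3f70, abridged to the fields used:

    # Abridged from the update_instance_cache_with_nw_info line above.
    network_info = [{
        "address": "fa:16:3e:da:35:ef",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.85",
            "floating_ips": [{"address": "192.168.122.232"}],
        }]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], floats)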
Dec 03 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.373 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.373 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.374 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:15 compute-0 ceph-mon[192821]: pgmap v1580: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.956 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:17 compute-0 ceph-mon[192821]: pgmap v1581: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:18 compute-0 nova_compute[351485]: 2025-12-03 02:07:18.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.508 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.510 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
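The two manager lines above spell out the dispatch model: all pollsters from the [pollsters] source share one ThreadPoolExecutor, and with a single worker thread they run strictly one after another, hence the warning that the cycle may overrun. A toy sketch of that pattern (names are illustrative, not ceilometer internals):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return name  # stand-in for one pollster's sample collection

    pollsters = ['memory.usage', 'network.outgoing.packets',
                 'network.incoming.bytes.delta']
    with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
        for future in [executor.submit(poll, p) for p in pollsters]:
            print(future.result())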
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.522 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.531 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.538 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.539 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:07:19.540115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.585 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.624 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.667 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:07:19.669910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.678 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.686 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.693 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.695 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.696 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.696 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:07:19.695248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.699 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:07:19.699142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.700 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.700 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.703 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.703 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.704 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:07:19.702604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:07:19.705983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.707 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:07:19.709393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.744 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.745 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.745 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.783 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.784 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.785 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.829 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.830 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.830 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
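The disk.device.capacity samples above are reported in bytes: 1073741824 is exactly 1 GiB, matching the m1.small flavor's 1 GB root and 1 GB ephemeral disks, while the third, ~570 KiB device is presumably the config drive. A one-line conversion check:

    print(1073741824 / 1024**3, 583680 / 1024)  # 1.0 GiB, 570.0 KiB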
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.833 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:07:19.833236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.938 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.940 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.940 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:19 compute-0 ceph-mon[192821]: pgmap v1582: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.039 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.040 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.041 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.147 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.148 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.149 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.152 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.153 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636

Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:07:20.153615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.154 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.155 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.156 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
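The block above, repeated for every meter below, traces one fixed sequence per pollster: run the discovery method, check whether a coordination hash ring restricts which resources this agent handles (none is configured here, hence the [None] hashrings), stamp a heartbeat, then poll and emit one sample per resource. A simplified sketch of that control flow; this is an illustration of the sequence the DEBUG lines trace, not ceilometer's actual AgentManager code:

    import datetime

    class PollingTask:
        # Schematic of the per-pollster sequence visible in the log.

        def __init__(self, hashrings=None):
            self.hashrings = hashrings   # None in this log: no coordination
            self.heartbeats = {}

        def owns(self, resource):
            # Placeholder for tooz hash-ring membership; never consulted
            # when no coordination group is configured.
            return True

        def run_pollster(self, pollster, discover, publish):
            resources = discover()       # "Executing discovery process ..."
            if self.hashrings:           # "Checking if we need coordination ..."
                resources = [r for r in resources if self.owns(r)]
            self.heartbeats[pollster.name] = \
                datetime.datetime.now(datetime.timezone.utc)
            for sample in pollster.get_samples(resources):
                publish(sample)          # one "volume: ..." line per sample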
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.158 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.158 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.158 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.159 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.159 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.159 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.160 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.160 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:07:20.157440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:07:20.161926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
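Note the worker IDs in the two lines above: worker 14 does the polling and stamps the heartbeat, while worker 12 records it via _update_status slightly later, which is why "Updated heartbeat" lines interleave out of order with the sample lines. A toy producer/consumer sketch of that split (threads here for brevity; in the log they are separate workers, and ceilometer's real mechanism may differ):

    import datetime
    import queue
    import threading

    # The poller enqueues (pollster, timestamp) beats; a status worker
    # records them, mirroring the 14/12 interleaving above.
    beats = queue.Queue()
    status = {}

    def status_worker():
        while True:
            item = beats.get()
            if item is None:
                break
            name, ts = item
            status[name] = ts            # what "_update_status" records
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    t = threading.Thread(target=status_worker)
    t.start()
    beats.put(("disk.device.read.requests",
               datetime.datetime.now(datetime.timezone.utc)))
    beats.put(None)                      # shut the worker down
    t.join()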
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.165 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.165 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
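disk.device.read.latency is the cumulative time spent on reads in nanoseconds, and disk.device.read.requests the cumulative read count, so dividing the two gives the mean per-request latency. Worked on the first device of instance 55bfde08-... from the samples above:

    # 1828594840 ns spent on 840 reads (first device of 55bfde08-...):
    latency_ns = 1_828_594_840
    requests = 840
    print(f"{latency_ns / requests / 1e6:.2f} ms per read")  # ~2.18 ms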
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.167 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:07:20.166643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
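All three instances report power.state volume 1. The value follows the nova/libvirt power-state enum, where 1 means RUNNING; a decode table for the common states:

    # nova.compute.power_state values (common subset); volume 1 above
    # therefore means all three instances are running.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATE[1])  # RUNNING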
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.170 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.170 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.170 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.171 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.171 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.171 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.172 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
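The disk.device.usage samples are bytes: 1073741824 is exactly 1 GiB, so each instance carries two 1 GiB devices plus one much smaller one (583680 or 485376 bytes), plausibly a config drive, though the log itself does not name the devices:

    # Byte values from the disk.device.usage samples above.
    for vol in (1073741824, 583680, 485376):
        print(f"{vol} B = {vol / 2**30:.6f} GiB")
    # 1073741824 B = 1.000000 GiB; the small devices are well under 1 MiB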
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.174 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.174 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.174 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.175 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.175 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.175 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.176 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:07:20.169353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5579657720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:07:20.173503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.179 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.179 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:07:20.177789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.180 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.180 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.181 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.181 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.181 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:07:20.183657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.184 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.184 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.185 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.185 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.186 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.187 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.189 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.189 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.189 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:07:20.188706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 40100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 41150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.192 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 44670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
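The cpu meter is cumulative CPU time in nanoseconds (here roughly 40 to 45 s per instance), so utilization has to be derived from two successive polls. A sketch of the usual rate calculation; the previous reading, interval, and vCPU count below are assumptions, not values from this log:

    # Rate-of-change CPU utilization from two cumulative cpu samples.
    prev_ns = 39_500_000_000   # hypothetical previous poll
    curr_ns = 40_100_000_000   # 55bfde08-... sample above
    interval_s, vcpus = 600, 1 # assumed polling interval and flavor size
    util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.2f}% CPU")  # 0.10%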
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:07:20.191460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.197 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:07:20.196952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.202 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.203 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:07:20.201740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.204 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.206 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:07:20.206409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.207 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.207 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.208 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.208 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.209 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.209 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.210 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.210 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.211 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.214 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.214 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:07:20.213522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.217 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.219 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.219 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.219 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:07:20.217256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.220 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:07:20.220402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.221 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.221 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.224 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
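The ceilometer cycle above follows a fixed shape per meter: run discovery for the pollster, check whether it belongs to a source that needs hashring coordination (none do here, hence the [None] hashrings), emit one sample per discovered instance (the "volume: 70" lines), update the heartbeat, and log "Finished processing pollster [...]". A minimal sketch of that loop follows; the names are hypothetical stand-ins, not the real code in ceilometer/polling/manager.py (AgentManager._internal_pollster_run):

    # Illustrative sketch only, modeled on the log lines above.
    class Pollster:
        def __init__(self, name, volumes):
            self.name, self.volumes = name, volumes

        def get_samples(self, resources):
            return zip(resources, self.volumes)

    def run_cycle(pollsters, discovered):
        for p in pollsters:
            resources = discovered.get(p.name, [])
            if not resources:
                print(f"Skip pollster {p.name}, no new resources found this cycle")
                continue
            for instance_id, volume in p.get_samples(resources):
                print(f"{instance_id}/{p.name} volume: {volume}")
            print(f"Finished processing pollster [{p.name}].")

    run_cycle(
        [Pollster("network.outgoing.bytes.delta", [70, 70, 0]),
         Pollster("network.outgoing.bytes.rate", [])],
        {"network.outgoing.bytes.delta": [
            "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274",
            "b43e79bd-550f-42f8-9aa7-980b6bca3f70",
            "9182286b-5a08-4961-b4bb-c0e2f05746f7"]},
    )

Run against the instance IDs above, this reproduces the sample lines for network.outgoing.bytes.delta and the "Skip pollster network.outgoing.bytes.rate" branch seen in the log.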
Dec 03 02:07:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:20 compute-0 nova_compute[351485]: 2025-12-03 02:07:20.959 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:21 compute-0 nova_compute[351485]: 2025-12-03 02:07:21.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:21 compute-0 nova_compute[351485]: 2025-12-03 02:07:21.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
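The skip is driven purely by configuration: nova's reclaim_instance_interval defaults to 0, which disables reclaiming soft-deleted instances, so the periodic task bails out before doing any work. Simplified from nova/compute/manager.py, the guard amounts to:

    # Sketch of the guard behind "CONF.reclaim_instance_interval <= 0, skipping..."
    reclaim_instance_interval = 0  # nova.conf [DEFAULT] default; 0 disables reclaim
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")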
Dec 03 02:07:21 compute-0 ceph-mon[192821]: pgmap v1583: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:23 compute-0 nova_compute[351485]: 2025-12-03 02:07:23.203 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:23 compute-0 podman[432020]: 2025-12-03 02:07:23.881265164 +0000 UTC m=+0.109629423 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:07:23 compute-0 podman[432018]: 2025-12-03 02:07:23.88573614 +0000 UTC m=+0.123716070 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 03 02:07:23 compute-0 podman[432019]: 2025-12-03 02:07:23.891971696 +0000 UTC m=+0.126925951 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec 03 02:07:23 compute-0 ceph-mon[192821]: pgmap v1584: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:24 compute-0 nova_compute[351485]: 2025-12-03 02:07:24.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:24 compute-0 nova_compute[351485]: 2025-12-03 02:07:24.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:07:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:25 compute-0 nova_compute[351485]: 2025-12-03 02:07:25.960 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:25 compute-0 ceph-mon[192821]: pgmap v1585: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:28 compute-0 nova_compute[351485]: 2025-12-03 02:07:28.207 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:28 compute-0 ceph-mon[192821]: pgmap v1586: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:07:28
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'images', 'default.rgw.log']
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:28 compute-0 podman[432075]: 2025-12-03 02:07:28.886917904 +0000 UTC m=+0.131212521 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:07:29 compute-0 ceph-mon[192821]: pgmap v1587: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:29 compute-0 podman[158098]: time="2025-12-03T02:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:07:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:07:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8651 "" "Go-http-client/1.1"
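The two GET requests above are the prometheus-podman-exporter scraping container state through podman's libpod REST API on /run/podman/podman.sock (the CONTAINER_HOST set in the podman_exporter config logged earlier). The same endpoint can be queried with only the Python standard library; a sketch, assuming that socket path and the field names of the libpod JSON schema:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])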
Dec 03 02:07:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:30 compute-0 nova_compute[351485]: 2025-12-03 02:07:30.963 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:31 compute-0 sshd-session[432016]: Invalid user userroot from 45.78.219.140 port 53576
Dec 03 02:07:31 compute-0 sshd-session[432016]: Received disconnect from 45.78.219.140 port 53576:11: Bye Bye [preauth]
Dec 03 02:07:31 compute-0 sshd-session[432016]: Disconnected from invalid user userroot 45.78.219.140 port 53576 [preauth]
Dec 03 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:07:31 compute-0 ceph-mon[192821]: pgmap v1588: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:33 compute-0 nova_compute[351485]: 2025-12-03 02:07:33.211 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:33 compute-0 ceph-mon[192821]: pgmap v1589: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:33 compute-0 podman[432094]: 2025-12-03 02:07:33.880267866 +0000 UTC m=+0.138581539 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:07:34 compute-0 nova_compute[351485]: 2025-12-03 02:07:34.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:07:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:35 compute-0 ceph-mon[192821]: pgmap v1590: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:35 compute-0 podman[432116]: 2025-12-03 02:07:35.886791555 +0000 UTC m=+0.116670721 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 03 02:07:35 compute-0 podman[432114]: 2025-12-03 02:07:35.89016095 +0000 UTC m=+0.131191410 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter)
Dec 03 02:07:35 compute-0 podman[432115]: 2025-12-03 02:07:35.89192488 +0000 UTC m=+0.130010017 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:07:35 compute-0 podman[432113]: 2025-12-03 02:07:35.919078116 +0000 UTC m=+0.170106748 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:07:35 compute-0 nova_compute[351485]: 2025-12-03 02:07:35.967 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:37 compute-0 ceph-mon[192821]: pgmap v1591: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:38 compute-0 nova_compute[351485]: 2025-12-03 02:07:38.213 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
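Each pg_autoscaler pair of lines reports a pool's share of the subtree capacity (64411926528 bytes, the cluster's ~60 GiB) and a raw PG target of ratio x bias x an overall PG budget. The budget implied by the numbers above is 300, consistent with the default mon_target_pg_per_osd of 100 and three OSDs; only the 300 itself is derivable from the log, the 100 x 3 split is an assumption about this cluster. Reproducing the logged targets:

    # PG targets from the pg_autoscaler lines above.
    # Assumption: budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    budget = 300
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0016571738458032168, 1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * budget)
    # .mgr 0.00215..., vms 0.49715..., images 0.07600...,
    # cephfs.cephfs.meta 0.00061... -- matching the logged "pg target" values

The raw target is then quantized to a power of two and held at the pool's current or minimum pg_num, which is why every pool here stays at 32 except cephfs.cephfs.meta, where the autoscaler would accept 16.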
Dec 03 02:07:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:39 compute-0 ceph-mon[192821]: pgmap v1592: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:07:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.0 total, 600.0 interval
                                            Cumulative writes: 7234 writes, 32K keys, 7234 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                            Cumulative WAL: 7234 writes, 7234 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1298 writes, 5905 keys, 1298 commit groups, 1.0 writes per commit group, ingest: 8.55 MB, 0.01 MB/s
                                            Interval WAL: 1298 writes, 1298 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     97.4      0.41              0.18        19    0.021       0      0       0.0       0.0
                                              L6      1/0    8.66 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.3    134.1    108.4      1.21              0.59        18    0.067     86K    10K       0.0       0.0
                                             Sum      1/0    8.66 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3    100.3    105.6      1.61              0.78        37    0.044     86K    10K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    111.6    115.9      0.35              0.17         8    0.044     22K   2526       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    134.1    108.4      1.21              0.59        18    0.067     86K    10K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.0      0.40              0.18        18    0.022       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 3000.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.039, interval 0.009
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 1.6 seconds
                                            Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 19.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000195 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1288,19.15 MB,6.2164%) FilterBlock(38,247.67 KB,0.0785283%) IndexBlock(38,445.08 KB,0.141119%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
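The W-Amp column in the dump can be cross-checked against the cumulative counters in the same output: compaction wrote 0.17 GB against 0.039 GB flushed, and the 600 s interval wrote 0.04 GB against 0.009 GB flushed.

    # Write amplification from the RocksDB stats above.
    print(0.17 / 0.039)  # ~4.36 -> matches the "Sum" row's W-Amp of 4.3
    print(0.04 / 0.009)  # ~4.44 -> matches the "Int" row's W-Amp of 4.4

At 0.02 MB/s of ingest this is an essentially idle monitor store, which agrees with the all-zero stall counters.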
Dec 03 02:07:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:40 compute-0 nova_compute[351485]: 2025-12-03 02:07:40.970 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:41 compute-0 ceph-mon[192821]: pgmap v1593: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:43 compute-0 nova_compute[351485]: 2025-12-03 02:07:43.216 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:43 compute-0 ceph-mon[192821]: pgmap v1594: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:45 compute-0 ceph-mon[192821]: pgmap v1595: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:45 compute-0 nova_compute[351485]: 2025-12-03 02:07:45.973 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:07:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261362780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:07:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:07:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261362780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:07:47 compute-0 ceph-mon[192821]: pgmap v1596: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2261362780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:07:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2261362780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
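The audited mon_commands come from entity client.openstack at 192.168.122.10, i.e. an OpenStack service polling pool capacity and quota (the pattern matches an RBD driver's periodic usage check, though the log identifies only the client entity). The same two commands can be reproduced with the python-rados bindings; the conffile path below is an assumption:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # {"prefix":"df","format":"json"} -- as dispatched in the audit log
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        print(json.loads(out)["stats"])
        # {"prefix":"osd pool get-quota","pool":"volumes","format":"json"}
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
        print(json.loads(out))
    finally:
        cluster.shutdown()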
Dec 03 02:07:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:48 compute-0 nova_compute[351485]: 2025-12-03 02:07:48.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:49 compute-0 ceph-mon[192821]: pgmap v1597: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:50 compute-0 nova_compute[351485]: 2025-12-03 02:07:50.977 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:51 compute-0 ceph-mon[192821]: pgmap v1598: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:53 compute-0 nova_compute[351485]: 2025-12-03 02:07:53.223 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:53 compute-0 ceph-mon[192821]: pgmap v1599: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:54 compute-0 podman[432197]: 2025-12-03 02:07:54.894227705 +0000 UTC m=+0.143374344 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:07:54 compute-0 podman[432198]: 2025-12-03 02:07:54.897918709 +0000 UTC m=+0.141548563 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec 03 02:07:54 compute-0 podman[432199]: 2025-12-03 02:07:54.917042478 +0000 UTC m=+0.155919898 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:07:55 compute-0 ceph-mon[192821]: pgmap v1600: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:55 compute-0 nova_compute[351485]: 2025-12-03 02:07:55.980 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:57 compute-0 ceph-mon[192821]: pgmap v1601: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:07:58 compute-0 nova_compute[351485]: 2025-12-03 02:07:58.226 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:07:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:07:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:07:59.636 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:07:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:07:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:07:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:07:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:07:59 compute-0 podman[158098]: time="2025-12-03T02:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:07:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:07:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
Dec 03 02:07:59 compute-0 podman[432254]: 2025-12-03 02:07:59.928395362 +0000 UTC m=+0.173577535 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Dec 03 02:07:59 compute-0 ceph-mon[192821]: pgmap v1602: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:00 compute-0 sudo[432274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:00 compute-0 sudo[432274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:00 compute-0 sudo[432274]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:00 compute-0 sudo[432301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:08:00 compute-0 sudo[432301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:00 compute-0 sudo[432301]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:00 compute-0 sudo[432326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:00 compute-0 sudo[432326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:00 compute-0 sudo[432326]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:00 compute-0 sudo[432351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:08:00 compute-0 sudo[432351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:00 compute-0 sshd-session[432298]: Invalid user foundry from 146.190.144.138 port 38820
Dec 03 02:08:00 compute-0 sshd-session[432298]: Received disconnect from 146.190.144.138 port 38820:11: Bye Bye [preauth]
Dec 03 02:08:00 compute-0 sshd-session[432298]: Disconnected from invalid user foundry 146.190.144.138 port 38820 [preauth]
Dec 03 02:08:00 compute-0 nova_compute[351485]: 2025-12-03 02:08:00.982 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:08:01 compute-0 sudo[432351]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:08:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8f4d7105-bcb1-4588-8105-f87fba16754e does not exist
Dec 03 02:08:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e54821e9-04b5-4ccc-a42a-383fe18036d9 does not exist
Dec 03 02:08:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bb4f502c-2112-4f00-951b-f6821f11259a does not exist
Dec 03 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:08:01 compute-0 sudo[432406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:01 compute-0 sudo[432406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:01 compute-0 sudo[432406]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:01 compute-0 sudo[432431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:08:01 compute-0 sudo[432431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:01 compute-0 sudo[432431]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:01 compute-0 ceph-mon[192821]: pgmap v1603: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:08:02 compute-0 sudo[432456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:02 compute-0 sudo[432456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:02 compute-0 sudo[432456]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:02 compute-0 sudo[432481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:08:02 compute-0 sudo[432481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.787834043 +0000 UTC m=+0.097396557 container create 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.747035123 +0000 UTC m=+0.056597677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:08:02 compute-0 systemd[1]: Started libpod-conmon-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope.
Dec 03 02:08:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.953204917 +0000 UTC m=+0.262767481 container init 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.973102388 +0000 UTC m=+0.282664912 container start 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.979998962 +0000 UTC m=+0.289561506 container attach 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 02:08:02 compute-0 reverent_hypatia[432561]: 167 167
Dec 03 02:08:02 compute-0 systemd[1]: libpod-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope: Deactivated successfully.
Dec 03 02:08:02 compute-0 conmon[432561]: conmon 68cd753ce3e388c9949e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope/container/memory.events
Dec 03 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.99589013 +0000 UTC m=+0.305452654 container died 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c8747fcb8029bd433fd38e6ae6a3a1867f54ddbca00c6bf6c5f3ed4638135e-merged.mount: Deactivated successfully.
Dec 03 02:08:03 compute-0 podman[432545]: 2025-12-03 02:08:03.084829128 +0000 UTC m=+0.394391652 container remove 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 02:08:03 compute-0 systemd[1]: libpod-conmon-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope: Deactivated successfully.
Dec 03 02:08:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:03 compute-0 nova_compute[351485]: 2025-12-03 02:08:03.230 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.36646766 +0000 UTC m=+0.073641658 container create 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:08:03 compute-0 systemd[1]: Started libpod-conmon-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope.
Dec 03 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.342481923 +0000 UTC m=+0.049655951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:08:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.517017555 +0000 UTC m=+0.224191583 container init 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.544906751 +0000 UTC m=+0.252080759 container start 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.549665965 +0000 UTC m=+0.256839963 container attach 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:08:04 compute-0 ceph-mon[192821]: pgmap v1604: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:04 compute-0 fervent_heyrovsky[432599]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:08:04 compute-0 fervent_heyrovsky[432599]: --> relative data size: 1.0
Dec 03 02:08:04 compute-0 fervent_heyrovsky[432599]: --> All data devices are unavailable
Dec 03 02:08:04 compute-0 systemd[1]: libpod-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope: Deactivated successfully.
Dec 03 02:08:04 compute-0 systemd[1]: libpod-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope: Consumed 1.194s CPU time.
Dec 03 02:08:04 compute-0 podman[432583]: 2025-12-03 02:08:04.797227424 +0000 UTC m=+1.504401452 container died 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1-merged.mount: Deactivated successfully.
Dec 03 02:08:04 compute-0 podman[432583]: 2025-12-03 02:08:04.894317311 +0000 UTC m=+1.601491309 container remove 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 02:08:04 compute-0 podman[432627]: 2025-12-03 02:08:04.895626268 +0000 UTC m=+0.138748103 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 03 02:08:04 compute-0 systemd[1]: libpod-conmon-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope: Deactivated successfully.
Dec 03 02:08:04 compute-0 sudo[432481]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:05 compute-0 sudo[432659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:05 compute-0 sudo[432659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:05 compute-0 sudo[432659]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:05 compute-0 sudo[432684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:08:05 compute-0 sudo[432684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:05 compute-0 sudo[432684]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:05 compute-0 sudo[432709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:05 compute-0 sudo[432709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:05 compute-0 sudo[432709]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:05 compute-0 sudo[432734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:08:05 compute-0 sudo[432734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:05 compute-0 nova_compute[351485]: 2025-12-03 02:08:05.987 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:06 compute-0 ceph-mon[192821]: pgmap v1605: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.023140223 +0000 UTC m=+0.074537573 container create 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:05.99394296 +0000 UTC m=+0.045340400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:08:06 compute-0 systemd[1]: Started libpod-conmon-2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb.scope.
Dec 03 02:08:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.158576222 +0000 UTC m=+0.209973612 container init 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.170631172 +0000 UTC m=+0.222028532 container start 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.176030524 +0000 UTC m=+0.227427914 container attach 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:08:06 compute-0 fervent_benz[432834]: 167 167
Dec 03 02:08:06 compute-0 systemd[1]: libpod-2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb.scope: Deactivated successfully.
Dec 03 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.181053366 +0000 UTC m=+0.232450716 container died 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 02:08:06 compute-0 podman[432815]: 2025-12-03 02:08:06.204822536 +0000 UTC m=+0.101174274 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:08:06 compute-0 podman[432816]: 2025-12-03 02:08:06.20672866 +0000 UTC m=+0.103652934 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 02:08:06 compute-0 podman[432814]: 2025-12-03 02:08:06.207145882 +0000 UTC m=+0.116694172 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Dec 03 02:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-eefd0cee5a8071eb80287af9f8704412dfeacf567f8550a827aa2e49b84b7547-merged.mount: Deactivated successfully.
Dec 03 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.232642401 +0000 UTC m=+0.284039751 container remove 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:08:06 compute-0 systemd[1]: libpod-conmon-2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb.scope: Deactivated successfully.
Dec 03 02:08:06 compute-0 podman[432811]: 2025-12-03 02:08:06.271582729 +0000 UTC m=+0.172772173 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 03 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.463192422 +0000 UTC m=+0.080132621 container create 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.42942789 +0000 UTC m=+0.046368149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:08:06 compute-0 systemd[1]: Started libpod-conmon-185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46.scope.
Dec 03 02:08:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.677776363 +0000 UTC m=+0.294716622 container init 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.697135959 +0000 UTC m=+0.314076168 container start 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.703743985 +0000 UTC m=+0.320684624 container attach 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:08:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]: {
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:     "0": [
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:         {
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "devices": [
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "/dev/loop3"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             ],
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_name": "ceph_lv0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_size": "21470642176",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "name": "ceph_lv0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "tags": {
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cluster_name": "ceph",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.crush_device_class": "",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.encrypted": "0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osd_id": "0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.type": "block",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.vdo": "0"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             },
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "type": "block",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "vg_name": "ceph_vg0"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:         }
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:     ],
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:     "1": [
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:         {
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "devices": [
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "/dev/loop4"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             ],
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_name": "ceph_lv1",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_size": "21470642176",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "name": "ceph_lv1",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "tags": {
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cluster_name": "ceph",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.crush_device_class": "",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.encrypted": "0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osd_id": "1",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.type": "block",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.vdo": "0"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             },
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "type": "block",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "vg_name": "ceph_vg1"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:         }
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:     ],
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:     "2": [
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:         {
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "devices": [
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "/dev/loop5"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             ],
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_name": "ceph_lv2",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_size": "21470642176",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "name": "ceph_lv2",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "tags": {
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.cluster_name": "ceph",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.crush_device_class": "",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.encrypted": "0",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osd_id": "2",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.type": "block",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:                 "ceph.vdo": "0"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             },
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "type": "block",
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:             "vg_name": "ceph_vg2"
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:         }
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]:     ]
Dec 03 02:08:07 compute-0 flamboyant_sanderson[432937]: }
Dec 03 02:08:07 compute-0 systemd[1]: libpod-185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46.scope: Deactivated successfully.
Dec 03 02:08:07 compute-0 podman[432921]: 2025-12-03 02:08:07.575794665 +0000 UTC m=+1.192734874 container died 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:08:07 compute-0 nova_compute[351485]: 2025-12-03 02:08:07.593 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20-merged.mount: Deactivated successfully.
Dec 03 02:08:07 compute-0 podman[432921]: 2025-12-03 02:08:07.683209814 +0000 UTC m=+1.300149993 container remove 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 03 02:08:07 compute-0 systemd[1]: libpod-conmon-185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46.scope: Deactivated successfully.
Dec 03 02:08:07 compute-0 sudo[432734]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:07 compute-0 sudo[432959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:07 compute-0 sudo[432959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:07 compute-0 sudo[432959]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:07 compute-0 sudo[432984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:08:07 compute-0 sudo[432984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:08 compute-0 sudo[432984]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:08 compute-0 ceph-mon[192821]: pgmap v1606: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:08 compute-0 sudo[433009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:08 compute-0 sudo[433009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:08 compute-0 sudo[433009]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:08 compute-0 nova_compute[351485]: 2025-12-03 02:08:08.233 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:08 compute-0 sudo[433034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:08:08 compute-0 sudo[433034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.809305019 +0000 UTC m=+0.091431010 container create 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.779485288 +0000 UTC m=+0.061611349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:08:08 compute-0 systemd[1]: Started libpod-conmon-4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d.scope.
Dec 03 02:08:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.965308238 +0000 UTC m=+0.247434299 container init 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.983742988 +0000 UTC m=+0.265869009 container start 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.991177557 +0000 UTC m=+0.273303578 container attach 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:08:08 compute-0 hardcore_sutherland[433115]: 167 167
Dec 03 02:08:08 compute-0 systemd[1]: libpod-4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d.scope: Deactivated successfully.
Dec 03 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.994001557 +0000 UTC m=+0.276127578 container died 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6390ee4cc7bcc629f234d94e3d564c85db8dbe2cf230ecc9016e99e3de82477-merged.mount: Deactivated successfully.
Dec 03 02:08:09 compute-0 podman[433099]: 2025-12-03 02:08:09.075125574 +0000 UTC m=+0.357251595 container remove 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:08:09 compute-0 systemd[1]: libpod-conmon-4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d.scope: Deactivated successfully.
Dec 03 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.375396112 +0000 UTC m=+0.104677723 container create 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.338646475 +0000 UTC m=+0.067928146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:08:09 compute-0 systemd[1]: Started libpod-conmon-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope.
Dec 03 02:08:09 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.552481505 +0000 UTC m=+0.281763176 container init 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.576278966 +0000 UTC m=+0.305560577 container start 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:08:09 compute-0 nova_compute[351485]: 2025-12-03 02:08:09.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.585378353 +0000 UTC m=+0.314659974 container attach 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 02:08:10 compute-0 sshd-session[433122]: Invalid user openbravo from 154.113.10.113 port 54006
Dec 03 02:08:10 compute-0 ceph-mon[192821]: pgmap v1607: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:10 compute-0 sshd-session[433122]: Received disconnect from 154.113.10.113 port 54006:11: Bye Bye [preauth]
Dec 03 02:08:10 compute-0 sshd-session[433122]: Disconnected from invalid user openbravo 154.113.10.113 port 54006 [preauth]
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.604 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.608 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.609 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.699 351492 DEBUG nova.compute.manager [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.700 351492 DEBUG nova.compute.manager [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing instance network info cache due to event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.701 351492 DEBUG oslo_concurrency.lockutils [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.701 351492 DEBUG oslo_concurrency.lockutils [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.702 351492 DEBUG nova.network.neutron [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing network info cache for port d0c565d0-5299-45e5-84ac-ea722711af3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:08:10 compute-0 angry_hamilton[433160]: {
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "osd_id": 2,
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "type": "bluestore"
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:     },
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "osd_id": 1,
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "type": "bluestore"
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:     },
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "osd_id": 0,
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:         "type": "bluestore"
Dec 03 02:08:10 compute-0 angry_hamilton[433160]:     }
Dec 03 02:08:10 compute-0 angry_hamilton[433160]: }
Dec 03 02:08:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:10 compute-0 systemd[1]: libpod-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope: Deactivated successfully.
Dec 03 02:08:10 compute-0 podman[433144]: 2025-12-03 02:08:10.745428875 +0000 UTC m=+1.474710466 container died 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:08:10 compute-0 systemd[1]: libpod-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope: Consumed 1.158s CPU time.
Dec 03 02:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1-merged.mount: Deactivated successfully.
Dec 03 02:08:10 compute-0 podman[433144]: 2025-12-03 02:08:10.809646956 +0000 UTC m=+1.538928537 container remove 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:08:10 compute-0 systemd[1]: libpod-conmon-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope: Deactivated successfully.
Dec 03 02:08:10 compute-0 sudo[433034]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:08:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:08:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:08:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:08:10 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b0e70a03-a465-40c2-89fd-6bf1153c1f83 does not exist
Dec 03 02:08:10 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2c88b18b-af7c-4355-93a1-952be2c5b09b does not exist
Dec 03 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.990 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:11 compute-0 sudo[433224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:08:11 compute-0 sudo[433224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:11 compute-0 sudo[433224]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:08:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1546365977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:08:11 compute-0 sudo[433249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:08:11 compute-0 sudo[433249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:08:11 compute-0 sudo[433249]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.136 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.250 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.251 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.251 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.610 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.612 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.611 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.683 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.684 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.685 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.685 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.686 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.687 351492 INFO nova.compute.manager [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Terminating instance
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.689 351492 DEBUG nova.compute.manager [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:08:11 compute-0 kernel: tapd0c565d0-52 (unregistering): left promiscuous mode
Dec 03 02:08:11 compute-0 NetworkManager[48912]: <info>  [1764727691.8375] device (tapd0c565d0-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.853 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:11 compute-0 ovn_controller[89134]: 2025-12-03T02:08:11Z|00054|binding|INFO|Releasing lport d0c565d0-5299-45e5-84ac-ea722711af3d from this chassis (sb_readonly=0)
Dec 03 02:08:11 compute-0 ovn_controller[89134]: 2025-12-03T02:08:11Z|00055|binding|INFO|Setting lport d0c565d0-5299-45e5-84ac-ea722711af3d down in Southbound
Dec 03 02:08:11 compute-0 ovn_controller[89134]: 2025-12-03T02:08:11Z|00056|binding|INFO|Removing iface tapd0c565d0-52 ovn-installed in OVS
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.863 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.870 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:1b:b0 192.168.0.227'], port_security=['fa:16:3e:de:1b:b0 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d0c565d0-5299-45e5-84ac-ea722711af3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.872 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d0c565d0-5299-45e5-84ac-ea722711af3d in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.873 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.887 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:11 compute-0 ceph-mon[192821]: pgmap v1608: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:08:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:08:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1546365977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.902 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[8381feba-1b9b-45bb-971b-5a0f0359cf82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:08:11 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 03 02:08:11 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 49.316s CPU time.
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.940 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:08:11 compute-0 systemd-machined[138558]: Machine qemu-3-instance-00000003 terminated.
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.942 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3414MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.943 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.943 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.946 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[77871d90-6490-43ee-9677-ce257ebb429b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.949 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d57232-28e9-44a0-a48e-6ed0ca551fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.988 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[21cd0ced-a546-461f-9e9c-c9eb542e2a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.016 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[44890c6b-41f8-470c-bf19-84ce0e195a2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 15808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 433288, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.047 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[526a61f2-eb70-4bfb-a09c-5377fc62b26c]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433289, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433289, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
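The two privsep replies above are netlink dumps taken inside the ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d namespace (the 'target' field): an RTM_NEWLINK link dump for tap7ba11691-21 followed by RTM_NEWADDR address dumps showing the subnet address 192.168.0.2/24 next to the metadata address 169.254.169.254/32. A minimal pyroute2 sketch, not the agent's actual code path, that reproduces the same dumps when run as root on the compute host:

    from pyroute2 import NetNS

    NS = 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d'  # from the log above

    with NetNS(NS) as ns:
        for link in ns.get_links():                        # RTM_NEWLINK dump
            name = link.get_attr('IFLA_IFNAME')
            state = link.get_attr('IFLA_OPERSTATE')
            for addr in ns.get_addr(index=link['index']):  # RTM_NEWADDR dump
                print(name, state,
                      '%s/%s' % (addr.get_attr('IFA_ADDRESS'),
                                 addr['prefixlen']))
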
Dec 03 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.050 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.053 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.057 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.062 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.063 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.064 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.065 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.066 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
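The ovn_metadata_agent transactions above move the metadata tap into place: drop tap7ba11691-20 from br-ex if it is there, add it to br-int, and set external_ids:iface-id on the Interface row. 'Transaction caused no change' means the if_exists/may_exist flags found the desired state already present, so the commits were no-ops. A sketch of the same three commands with ovsdbapp, assuming the default local OVS database socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Idempotent by construction: re-running changes nothing, exactly as logged.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap7ba11691-20', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap7ba11691-20', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap7ba11691-20',
            ('external_ids',
             {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'})))
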
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.131 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.142 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.156 351492 INFO nova.virt.libvirt.driver [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance destroyed successfully.
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.156 351492 DEBUG nova.objects.instance [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.173 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.205 351492 DEBUG nova.virt.libvirt.vif [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',id=3,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:00:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-7757xffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:00:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 03 02:08:12 compute-0 nova_compute[351485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.206 351492 DEBUG nova.network.os_vif_util [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.207 351492 DEBUG nova.network.os_vif_util [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.208 351492 DEBUG os_vif [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.210 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.211 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0c565d0-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.216 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.228 351492 INFO os_vif [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52')
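The lines above show Nova converting its VIF dict into an os-vif VIFOpenVSwitch object and unplugging it, which is what produced the DelPortCommand for tapd0c565d0-52 on br-int. A hedged standalone sketch of the same library call, with field values copied from the log (this is not Nova's actual code path; Nova builds these objects via nova.network.os_vif_util):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore

    net = network.Network(id='7ba11691-2711-476c-9191-cb6dfd0efa7d',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='d0c565d0-5299-45e5-84ac-ea722711af3d',
        address='fa:16:3e:de:1b:b0',
        bridge_name='br-int',
        vif_name='tapd0c565d0-52',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274',
        name='demo')  # name is illustrative

    os_vif.unplug(ovs_vif, inst)  # issues the del-port seen in the log
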
Dec 03 02:08:12 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:08:12.205 351492 DEBUG nova.virt.libvirt.vif [None req-ff165601-34 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
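The rsyslogd complaint is self-describing: the nova.virt.libvirt.vif entry carrying the user_data blob exceeded the configured 8096-byte cap, so its tail was emitted as the bare continuation line at 02:08:12 above. If complete entries matter more than buffer size, the cap can be raised early in /etc/rsyslog.conf, before any input() statements; a sketch:

    # rsyslog 8.x RainerScript form; legacy equivalent: $MaxMessageSize 64k
    global(maxMessageSize="64k")
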
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.618 351492 DEBUG nova.network.neutron [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updated VIF entry in instance network info cache for port d0c565d0-5299-45e5-84ac-ea722711af3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.618 351492 DEBUG nova.network.neutron [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.644 351492 DEBUG oslo_concurrency.lockutils [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:08:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:08:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345447596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:08:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.717 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
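The resource tracker sizes its RBD-backed storage by shelling out to ceph df through oslo.concurrency's processutils; the 0.545 s round trip is also visible from the cluster side in the ceph-mon handle_command/audit lines. A rough standalone equivalent (the JSON field names follow recent Ceph releases; treat them as an assumption and verify against your cluster):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('total GiB:', stats['total_bytes'] / 2**30)
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
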
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.732 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.757 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
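This inventory explains the final resource view logged a moment earlier: Placement treats schedulable capacity as (total - reserved) * allocation_ratio per resource class, so the host offers 32 VCPUs at a 4.0 overcommit while memory and disk stay near physical, and used_ram=2048MB is the 512 MB reserved plus the three 512 MB instances still tracked. A quick check of the arithmetic:

    inventory = {  # values copied from the log line above
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
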
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.807 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.807 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.827 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-unplugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.828 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.829 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.829 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
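The Acquiring/acquired/released triples here, like the 'compute_resources' and 'refresh_cache-...' locks elsewhere in this window, are oslo.concurrency's lockutils tracing; each DEBUG line maps to one lock operation, with wait and hold times reported. The two usual forms, sketched with names taken from the log:

    from oslo_concurrency import lockutils

    # Decorator form, as used for the per-instance event queue:
    @lockutils.synchronized('55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events')
    def pop_event():
        pass  # body runs with the named lock held

    # Context-manager form, as used by the resource tracker:
    with lockutils.lock('compute_resources'):
        pass  # acquire/release emit the DEBUG lines seen above
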
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.829 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] No waiting events found dispatching network-vif-unplugged-d0c565d0-5299-45e5-84ac-ea722711af3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.830 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-unplugged-d0c565d0-5299-45e5-84ac-ea722711af3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.830 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.830 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] No waiting events found dispatching network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 WARNING nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received unexpected event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d for instance with vm_state active and task_state deleting.
Dec 03 02:08:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1345447596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:08:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.236 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.607 351492 INFO nova.virt.libvirt.driver [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deleting instance files /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_del
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.608 351492 INFO nova.virt.libvirt.driver [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deletion of /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_del complete
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.673 351492 INFO nova.compute.manager [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 1.98 seconds to destroy the instance on the hypervisor.
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.674 351492 DEBUG oslo.service.loopingcall [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.674 351492 DEBUG nova.compute.manager [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.675 351492 DEBUG nova.network.neutron [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.807 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.808 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.842 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.842 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.843 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.872 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 03 02:08:13 compute-0 ceph-mon[192821]: pgmap v1609: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 190 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 341 B/s wr, 12 op/s
Dec 03 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.782 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.784 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.784 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.785 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:08:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:15.615 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:08:15 compute-0 ceph-mon[192821]: pgmap v1610: 321 pgs: 321 active+clean; 190 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 341 B/s wr, 12 op/s
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.335 351492 DEBUG nova.network.neutron [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.363 351492 INFO nova.compute.manager [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 2.69 seconds to deallocate network for instance.
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.432 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.433 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.540 351492 DEBUG oslo_concurrency.processutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:08:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.817 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.837 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.838 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.839 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.839 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.840 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:08:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596664746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.066 351492 DEBUG oslo_concurrency.processutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.079 351492 DEBUG nova.compute.provider_tree [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.094 351492 DEBUG nova.scheduler.client.report [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.121 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.172 351492 INFO nova.scheduler.client.report [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274
Dec 03 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.214 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.296 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:08:17 compute-0 ceph-mon[192821]: pgmap v1611: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2596664746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:08:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:18 compute-0 nova_compute[351485]: 2025-12-03 02:08:18.238 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:19 compute-0 ceph-mon[192821]: pgmap v1612: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:21 compute-0 ceph-mon[192821]: pgmap v1613: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:22 compute-0 nova_compute[351485]: 2025-12-03 02:08:22.217 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:22 compute-0 nova_compute[351485]: 2025-12-03 02:08:22.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:08:22 compute-0 nova_compute[351485]: 2025-12-03 02:08:22.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
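CONF.reclaim_instance_interval <= 0 means deferred (soft) delete is disabled, which is why the instance teardown above ran immediately instead of being queued for later reclaim. The option lives in nova.conf under [DEFAULT]; a sketch of the guard with oslo.config, using the default of 0 that the log reflects:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.IntOpt('reclaim_instance_interval', default=0))

    if CONF.reclaim_instance_interval <= 0:
        print('CONF.reclaim_instance_interval <= 0, skipping...')
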
Dec 03 02:08:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:23 compute-0 nova_compute[351485]: 2025-12-03 02:08:23.242 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:24 compute-0 ceph-mon[192821]: pgmap v1614: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:25 compute-0 podman[433364]: 2025-12-03 02:08:25.849073201 +0000 UTC m=+0.107292237 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:08:25 compute-0 podman[433366]: 2025-12-03 02:08:25.873063567 +0000 UTC m=+0.108390487 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:08:25 compute-0 podman[433365]: 2025-12-03 02:08:25.884833319 +0000 UTC m=+0.124398929 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 03 02:08:26 compute-0 ceph-mon[192821]: pgmap v1615: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:08:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 27 op/s
Dec 03 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.146 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727692.1441972, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.147 351492 INFO nova.compute.manager [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Stopped (Lifecycle Event)
Dec 03 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.183 351492 DEBUG nova.compute.manager [None req-5de1a6e4-6226-4461-baec-cb32b77869b3 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:28 compute-0 ceph-mon[192821]: pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 27 op/s
Dec 03 02:08:28 compute-0 nova_compute[351485]: 2025-12-03 02:08:28.244 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:08:28
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', '.mgr', 'images', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:08:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:08:29 compute-0 podman[158098]: time="2025-12-03T02:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:08:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:08:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec 03 02:08:30 compute-0 ceph-mon[192821]: pgmap v1617: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:30 compute-0 podman[433426]: 2025-12-03 02:08:30.848751103 +0000 UTC m=+0.095968907 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 03 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:08:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:08:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:08:32 compute-0 nova_compute[351485]: 2025-12-03 02:08:32.223 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:32 compute-0 ceph-mon[192821]: pgmap v1618: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:33 compute-0 nova_compute[351485]: 2025-12-03 02:08:33.249 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:33 compute-0 ceph-mon[192821]: pgmap v1619: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:35 compute-0 ceph-mon[192821]: pgmap v1620: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:35 compute-0 podman[433444]: 2025-12-03 02:08:35.890439281 +0000 UTC m=+0.131408717 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_id=edpm, release-0.7.12=, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 02:08:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:36 compute-0 podman[433465]: 2025-12-03 02:08:36.869062005 +0000 UTC m=+0.097599033 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:08:36 compute-0 podman[433466]: 2025-12-03 02:08:36.909081094 +0000 UTC m=+0.136479880 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:08:36 compute-0 podman[433464]: 2025-12-03 02:08:36.914470166 +0000 UTC m=+0.148641393 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64)
Dec 03 02:08:36 compute-0 podman[433463]: 2025-12-03 02:08:36.953028953 +0000 UTC m=+0.192423187 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:08:37 compute-0 nova_compute[351485]: 2025-12-03 02:08:37.227 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:37 compute-0 ceph-mon[192821]: pgmap v1621: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:38 compute-0 nova_compute[351485]: 2025-12-03 02:08:38.253 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:08:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:39 compute-0 ceph-mon[192821]: pgmap v1622: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:41 compute-0 ceph-mon[192821]: pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:42 compute-0 nova_compute[351485]: 2025-12-03 02:08:42.230 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:43 compute-0 nova_compute[351485]: 2025-12-03 02:08:43.255 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:43 compute-0 ceph-mon[192821]: pgmap v1624: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:45 compute-0 sshd-session[433543]: Accepted publickey for zuul from 38.102.83.18 port 47202 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 02:08:45 compute-0 systemd-logind[800]: New session 61 of user zuul.
Dec 03 02:08:45 compute-0 systemd[1]: Started Session 61 of User zuul.
Dec 03 02:08:45 compute-0 sshd-session[433543]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 02:08:45 compute-0 ceph-mon[192821]: pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:46 compute-0 sudo[433720]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdkfzgojyvgbyrmfzgcpzoewovohivkn ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764727725.8613946-60806-109620621649884/AnsiballZ_command.py'
Dec 03 02:08:46 compute-0 sudo[433720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:08:46 compute-0 python3[433722]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 02:08:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:46 compute-0 sudo[433720]: pam_unix(sudo:session): session closed for user root
Dec 03 02:08:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:08:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411792179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:08:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:08:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411792179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:08:47 compute-0 nova_compute[351485]: 2025-12-03 02:08:47.234 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:47 compute-0 ceph-mon[192821]: pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3411792179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:08:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3411792179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:08:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:48 compute-0 nova_compute[351485]: 2025-12-03 02:08:48.258 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:49 compute-0 ovn_controller[89134]: 2025-12-03T02:08:49Z|00057|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 03 02:08:49 compute-0 ceph-mon[192821]: pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:51 compute-0 ceph-mon[192821]: pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:52 compute-0 nova_compute[351485]: 2025-12-03 02:08:52.237 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:53 compute-0 nova_compute[351485]: 2025-12-03 02:08:53.261 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:53 compute-0 ceph-mon[192821]: pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:08:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 03 02:08:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 03 02:08:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 03 02:08:55 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 03 02:08:55 compute-0 ceph-mon[192821]: pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 03 02:08:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec 03 02:08:56 compute-0 podman[433762]: 2025-12-03 02:08:56.877909349 +0000 UTC m=+0.120134068 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 02:08:56 compute-0 podman[433763]: 2025-12-03 02:08:56.883428245 +0000 UTC m=+0.119769158 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:08:56 compute-0 podman[433764]: 2025-12-03 02:08:56.902964756 +0000 UTC m=+0.134258557 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:08:56 compute-0 ceph-mon[192821]: osdmap e128: 3 total, 3 up, 3 in
Dec 03 02:08:57 compute-0 nova_compute[351485]: 2025-12-03 02:08:57.240 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:57 compute-0 ceph-mon[192821]: pgmap v1632: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec 03 02:08:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.156936) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738157011, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 258, "total_data_size": 1833767, "memory_usage": 1867984, "flush_reason": "Manual Compaction"}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738170396, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1806063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32189, "largest_seqno": 33390, "table_properties": {"data_size": 1800211, "index_size": 3183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12037, "raw_average_key_size": 19, "raw_value_size": 1788496, "raw_average_value_size": 2884, "num_data_blocks": 143, "num_entries": 620, "num_filter_entries": 620, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727617, "oldest_key_time": 1764727617, "file_creation_time": 1764727738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 13565 microseconds, and 6555 cpu microseconds.
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.170496) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1806063 bytes OK
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.170794) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.173671) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.173697) EVENT_LOG_v1 {"time_micros": 1764727738173689, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.173721) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1828275, prev total WAL file size 1828275, number of live WAL files 2.
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.175412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1763KB)], [71(8870KB)]
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738175481, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10889436, "oldest_snapshot_seqno": -1}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5445 keys, 10785328 bytes, temperature: kUnknown
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738244286, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10785328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10745621, "index_size": 25005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137017, "raw_average_key_size": 25, "raw_value_size": 10643834, "raw_average_value_size": 1954, "num_data_blocks": 1036, "num_entries": 5445, "num_filter_entries": 5445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.244522) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10785328 bytes
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.246871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.1 rd, 156.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 5977, records dropped: 532 output_compression: NoCompression
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.246910) EVENT_LOG_v1 {"time_micros": 1764727738246895, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68871, "compaction_time_cpu_micros": 31532, "output_level": 6, "num_output_files": 1, "total_output_size": 10785328, "num_input_records": 5977, "num_output_records": 5445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738247325, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738249222, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.175151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:08:58 compute-0 nova_compute[351485]: 2025-12-03 02:08:58.264 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:08:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec 03 02:08:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:08:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:59.639 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:08:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:59.640 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:08:59 compute-0 podman[158098]: time="2025-12-03T02:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:08:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:08:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
Dec 03 02:09:00 compute-0 ceph-mon[192821]: pgmap v1633: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec 03 02:09:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 03 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:09:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:09:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:09:01 compute-0 podman[433822]: 2025-12-03 02:09:01.880297461 +0000 UTC m=+0.130023678 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 03 02:09:02 compute-0 ceph-mon[192821]: pgmap v1634: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.774 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "3d670990-5a2a-4334-b8b1-9ae49d171323" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.775 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.804 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.918 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.920 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
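The Acquiring/acquired pairs above (and the matching "released ... held" line further down) are oslo.concurrency's named locks: the build serializes on the instance UUID, and the resource tracker serializes every claim on "compute_resources". A minimal sketch of the same pattern; the function body is illustrative:

    from oslo_concurrency import lockutils

    # lockutils.synchronized() emits the "Acquiring lock ...",
    # "Lock ... acquired ... waited Ns" and "released ... held Ns"
    # DEBUG lines seen throughout this trace.
    @lockutils.synchronized("compute_resources")
    def instance_claim(instance_uuid, vcpus, memory_mb):
        # test capacity and record the claim while holding the lock
        return True

    instance_claim("3d670990-5a2a-4334-b8b1-9ae49d171323", vcpus=1, memory_mb=512)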
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.934 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.935 351492 INFO nova.compute.claims [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.112 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.267 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:09:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071522504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.657 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.666 351492 DEBUG nova.compute.provider_tree [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.688 351492 DEBUG nova.scheduler.client.report [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
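Usable capacity per resource class in placement is (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPU, 7167 MB of RAM and 52.2 GB of disk. Recomputing from the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2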
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.705 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.706 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.766 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.786 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.832 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.950 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.953 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.954 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Creating image(s)
Dec 03 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.019 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.097 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.163 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.176 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "c29aeb8fc873eee85b0369901388993e8201c8d4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.179 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "c29aeb8fc873eee85b0369901388993e8201c8d4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:04 compute-0 ceph-mon[192821]: pgmap v1635: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 03 02:09:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4071522504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.427 351492 DEBUG nova.virt.libvirt.imagebackend [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/774b7995-1f03-43de-ad4e-feac9d5f9136/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/774b7995-1f03-43de-ad4e-feac9d5f9136/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 03 02:09:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.469 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.566 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.568 351492 DEBUG nova.virt.images [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] 774b7995-1f03-43de-ad4e-feac9d5f9136 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.570 351492 DEBUG nova.privsep.utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.571 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.803 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.812 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.912 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.914 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "c29aeb8fc873eee85b0369901388993e8201c8d4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
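Those three commands are the fetch-to-raw sequence: qemu-img info on the downloaded .part file runs under prlimit (1 GiB address space, 30 s CPU) so a crafted image cannot wedge the inspector, the qcow2 payload is converted to raw with host caching disabled (-t none), and the result is inspected again before it becomes the cache entry. A condensed sketch of the same flow with oslo.concurrency, file names as in the log and error handling omitted:

    import json
    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4"

    def qemu_img_info(path):
        out, _err = processutils.execute(
            "qemu-img", "info", path, "--force-share", "--output=json",
            prlimit=processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30))
        return json.loads(out)

    if qemu_img_info(base + ".part")["format"] == "qcow2":
        processutils.execute("qemu-img", "convert", "-t", "none",
                             "-O", "raw", "-f", "qcow2",
                             base + ".part", base + ".converted")
        assert qemu_img_info(base + ".converted")["format"] == "raw"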
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.949 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.959 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4 3d670990-5a2a-4334-b8b1-9ae49d171323_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:06 compute-0 ceph-mon[192821]: pgmap v1636: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Dec 03 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.417 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4 3d670990-5a2a-4334-b8b1-9ae49d171323_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.589 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
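The import itself stays a CLI call (rbd import streams the cached raw file into the vms pool), after which the image is grown to the flavor's 1 GiB root disk, the 1073741824 in the resize line above. Roughly the resize step via the rbd Python bindings, reusing the client id and conf file from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            image = rbd.Image(ioctx, "3d670990-5a2a-4334-b8b1-9ae49d171323_disk")
            try:
                image.resize(1073741824)   # flavor root_gb = 1 GiB
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()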
Dec 03 02:09:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 776 KiB/s rd, 1.4 MiB/s wr, 22 op/s
Dec 03 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.874 351492 DEBUG nova.objects.instance [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d670990-5a2a-4334-b8b1-9ae49d171323 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:09:06 compute-0 podman[434020]: 2025-12-03 02:09:06.91315856 +0000 UTC m=+0.156435883 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, io.buildah.version=1.29.0, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, release=1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.956 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.027 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:07 compute-0 podman[434057]: 2025-12-03 02:09:07.037718082 +0000 UTC m=+0.107050370 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.041 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.099 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.100 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.101 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:07 compute-0 podman[434081]: 2025-12-03 02:09:07.1029061 +0000 UTC m=+0.126792536 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.103 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:07 compute-0 podman[434078]: 2025-12-03 02:09:07.108943211 +0000 UTC m=+0.134171425 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.144 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.154 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:07 compute-0 podman[434137]: 2025-12-03 02:09:07.205922565 +0000 UTC m=+0.140682038 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.246 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:07 compute-0 ceph-mon[192821]: pgmap v1637: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 776 KiB/s rd, 1.4 MiB/s wr, 22 op/s
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.609 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.862 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.864 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Ensure instance console log exists: /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.866 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.867 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.868 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.872 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T02:08:51Z,direct_url=<?>,disk_format='qcow2',id=774b7995-1f03-43de-ad4e-feac9d5f9136,min_disk=0,min_ram=0,name='fvt_testing_image',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T02:08:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '774b7995-1f03-43de-ad4e-feac9d5f9136'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.880 351492 WARNING nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.888 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.889 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.895 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.896 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
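That v1-then-v2 probe decides whether CPU shares/quota can be applied to guests. On a unified (v2) hierarchy the kernel publishes the enabled controllers in one file, which is essentially what the successful check reads; a minimal equivalent against the standard mount point:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False                      # not a cgroups-v2 hierarchy
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())     # True here: "CPU controller found on host."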
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.897 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.898 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8fb4324d-1fde-4886-9d66-fedd66b56d0f',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T02:08:51Z,direct_url=<?>,disk_format='qcow2',id=774b7995-1f03-43de-ad4e-feac9d5f9136,min_disk=0,min_ram=0,name='fvt_testing_image',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T02:08:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.899 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.899 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.900 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.900 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.901 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.901 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.902 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.902 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.903 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.904 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
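This topology walk is nova settling the guest CPU layout: nothing is constrained by flavor or image (all the 0:0:0 lines), the limits default to 65536, and the only sockets:cores:threads factorization of 1 vCPU is 1:1:1. A toy version of the enumeration; the real code additionally clamps against the limits and orders results by the preferred topology:

    def possible_topologies(vcpus, limit=65536):
        """Yield (sockets, cores, threads) triples whose product is exactly vcpus."""
        top = min(vcpus, limit)
        for s in range(1, top + 1):
            for c in range(1, top + 1):
                for t in range(1, top + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], the one possible topology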
Dec 03 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.909 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.169005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748169055, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 328, "num_deletes": 252, "total_data_size": 154994, "memory_usage": 161936, "flush_reason": "Manual Compaction"}
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748173783, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 153366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33391, "largest_seqno": 33718, "table_properties": {"data_size": 151244, "index_size": 286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5780, "raw_average_key_size": 20, "raw_value_size": 147121, "raw_average_value_size": 514, "num_data_blocks": 13, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727739, "oldest_key_time": 1764727739, "file_creation_time": 1764727748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 4872 microseconds, and 2077 cpu microseconds.
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.173880) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 153366 bytes OK
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.173899) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176162) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176181) EVENT_LOG_v1 {"time_micros": 1764727748176174, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 152710, prev total WAL file size 152710, number of live WAL files 2.
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.177044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(149KB)], [74(10MB)]
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748177068, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10938694, "oldest_snapshot_seqno": -1}
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5220 keys, 7624703 bytes, temperature: kUnknown
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748257515, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7624703, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7591318, "index_size": 19259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 132532, "raw_average_key_size": 25, "raw_value_size": 7498265, "raw_average_value_size": 1436, "num_data_blocks": 795, "num_entries": 5220, "num_filter_entries": 5220, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.257850) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7624703 bytes
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.259903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.7 rd, 94.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(121.0) write-amplify(49.7) OK, records in: 5731, records dropped: 511 output_compression: NoCompression
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.259922) EVENT_LOG_v1 {"time_micros": 1764727748259913, "job": 42, "event": "compaction_finished", "compaction_time_micros": 80619, "compaction_time_cpu_micros": 19893, "output_level": 6, "num_output_files": 1, "total_output_size": 7624703, "num_input_records": 5731, "num_output_records": 5220, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
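The amplification figures in that summary can be re-derived from the byte counts logged by jobs 41 and 42: a 153366-byte L0 flush forced a rewrite of the 10 MB L6 file into a new 7624703-byte table. Checking the arithmetic:

    l0_input = 153366                    # table #76, the fresh L0 flush
    all_input = 10938694                 # job 42 input_data_size (L0 + L6)
    output = 7624703                     # table #77

    print(round(output / l0_input, 1))                # 49.7,  the logged write-amplify
    print(round((all_input + output) / l0_input, 1))  # 121.0, the logged read-write-amplify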
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748260071, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748261680, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.270 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:09:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376091058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.384 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.385 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 644 KiB/s wr, 9 op/s
Dec 03 02:09:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:09:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253709311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.892 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.945 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.956 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/376091058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:09:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2253709311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:09:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:09:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367925342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.491 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.494 351492 DEBUG nova.objects.instance [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d670990-5a2a-4334-b8b1-9ae49d171323 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.521 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <uuid>3d670990-5a2a-4334-b8b1-9ae49d171323</uuid>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <name>instance-00000005</name>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <memory>524288</memory>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <nova:name>fvt_testing_server</nova:name>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:09:07</nova:creationTime>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <nova:flavor name="fvt_testing_flavor">
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <nova:memory>512</nova:memory>
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <nova:ephemeral>1</nova:ephemeral>
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="774b7995-1f03-43de-ad4e-feac9d5f9136"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <nova:ports/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <system>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <entry name="serial">3d670990-5a2a-4334-b8b1-9ae49d171323</entry>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <entry name="uuid">3d670990-5a2a-4334-b8b1-9ae49d171323</entry>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </system>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <os>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   </os>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <features>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   </features>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/3d670990-5a2a-4334-b8b1-9ae49d171323_disk">
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </source>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0">
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </source>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <target dev="vdb" bus="virtio"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config">
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </source>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:09:09 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/console.log" append="off"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <video>
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </video>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:09:09 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:09:09 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:09:09 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:09:09 compute-0 nova_compute[351485]: </domain>
Dec 03 02:09:09 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.583 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.584 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.584 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.585 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Using config drive
Dec 03 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.637 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:10 compute-0 ceph-mon[192821]: pgmap v1638: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 644 KiB/s wr, 9 op/s
Dec 03 02:09:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1367925342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.540 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Creating config drive at /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.546 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1a35ikk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.624 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.624 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.694 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1a35ikk" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 178 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 24 op/s
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.759 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.772 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.069 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.071 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deleting local config drive /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config because it was imported into RBD.
Dec 03 02:09:11 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 02:09:11 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 02:09:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:09:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143152588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:11 compute-0 virtqemud[154511]: End of file while reading data: Input/output error
Dec 03 02:09:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2143152588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.209 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:11 compute-0 sudo[434450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:11 compute-0 sudo[434450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:11 compute-0 sudo[434450]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:11 compute-0 systemd-machined[138558]: New machine qemu-5-instance-00000005.
Dec 03 02:09:11 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec 03 02:09:11 compute-0 sudo[434488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:09:11 compute-0 sudo[434488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:11 compute-0 sudo[434488]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:11 compute-0 sudo[434519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:11 compute-0 sudo[434519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:11 compute-0 sudo[434519]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:11 compute-0 sudo[434544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:09:11 compute-0 sudo[434544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.774 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.774 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.774 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.779 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.779 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.780 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.784 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.785 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.785 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:09:12 compute-0 ceph-mon[192821]: pgmap v1639: 321 pgs: 321 active+clean; 178 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 24 op/s
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.252 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:12 compute-0 sudo[434544]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.375 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.376 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3568MB free_disk=59.92203140258789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.376 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.377 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:09:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4f6313d0-d7b2-48b2-b9e2-846781eece59 does not exist
Dec 03 02:09:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f258548e-1ce4-4f0d-adea-b9d7bc27a84d does not exist
Dec 03 02:09:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f4a62abe-be38-4a56-881a-b691a75f95e6 does not exist
Dec 03 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 3d670990-5a2a-4334-b8b1-9ae49d171323 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.459 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:09:12 compute-0 sudo[434616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.530 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:12 compute-0 sudo[434616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:12 compute-0 sudo[434616]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:12 compute-0 sudo[434675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:09:12 compute-0 sudo[434675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:12 compute-0 sudo[434675]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 45 op/s
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.760 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.760 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.764 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727752.763187, 3d670990-5a2a-4334-b8b1-9ae49d171323 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.764 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] VM Resumed (Lifecycle Event)
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.770 351492 INFO nova.virt.libvirt.driver [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance spawned successfully.
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.771 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:09:12 compute-0 sudo[434708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:12 compute-0 sudo[434708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:12 compute-0 sudo[434708]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.820 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.835 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.840 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.841 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.841 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.842 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.843 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.843 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.871 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.871 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727752.7636049, 3d670990-5a2a-4334-b8b1-9ae49d171323 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.872 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] VM Started (Lifecycle Event)
Dec 03 02:09:12 compute-0 sudo[434753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:09:12 compute-0 sudo[434753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.908 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.916 351492 INFO nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 8.96 seconds to spawn the instance on the hypervisor.
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.916 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.919 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.948 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.989 351492 INFO nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 10.12 seconds to build instance.
Dec 03 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.011 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:09:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522871387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.059 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.082 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.104 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.141 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.142 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:09:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/522871387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.272 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.499100723 +0000 UTC m=+0.079471912 container create 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.463821498 +0000 UTC m=+0.044192727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:09:13 compute-0 systemd[1]: Started libpod-conmon-2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6.scope.
Dec 03 02:09:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.667502361 +0000 UTC m=+0.247873540 container init 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.697904728 +0000 UTC m=+0.278275917 container start 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.704196565 +0000 UTC m=+0.284567754 container attach 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:09:13 compute-0 inspiring_cannon[434835]: 167 167
Dec 03 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.713798376 +0000 UTC m=+0.294169565 container died 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:09:13 compute-0 systemd[1]: libpod-2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6.scope: Deactivated successfully.
Dec 03 02:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f223cc956d0ceda28b4319e85016a20ef366d1c46a6a7f252b2358370c4055-merged.mount: Deactivated successfully.
Dec 03 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.791463986 +0000 UTC m=+0.371835175 container remove 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 02:09:13 compute-0 systemd[1]: libpod-conmon-2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6.scope: Deactivated successfully.
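
[Annotation] The create -> init -> start -> attach -> died -> remove sequence above is one short-lived helper container (inspiring_cannon); the only thing it printed was "167 167", which matches the ceph uid/gid baked into the image. A sketch of that probe, assuming (our guess, modelled on cephadm's uid/gid extraction, not taken from the log) the container simply stats /var/lib/ceph:

    # Sketch: one-shot probe for the image's ceph uid/gid, mirroring the
    # "167 167" output above. The stat path is an assumption.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    uid, gid = (int(f) for f in out.split())   # 167, 167 for this image
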
Dec 03 02:09:14 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 03 02:09:14 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 03 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.143639947 +0000 UTC m=+0.119667445 container create 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.088729319 +0000 UTC m=+0.064756877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:09:14 compute-0 systemd[1]: Started libpod-conmon-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope.
Dec 03 02:09:14 compute-0 ceph-mon[192821]: pgmap v1640: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 45 op/s
Dec 03 02:09:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.325059422 +0000 UTC m=+0.301086920 container init 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.337227885 +0000 UTC m=+0.313255383 container start 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.342466733 +0000 UTC m=+0.318494311 container attach 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 02:09:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 50 op/s
Dec 03 02:09:15 compute-0 sad_elbakyan[434894]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:09:15 compute-0 sad_elbakyan[434894]: --> relative data size: 1.0
Dec 03 02:09:15 compute-0 sad_elbakyan[434894]: --> All data devices are unavailable
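
[Annotation] The sad_elbakyan container is ceph-volume's batch planner: it was handed three LVM data devices ("0 physical, 3 LVM"), and "All data devices are unavailable" means all three logical volumes already carry OSDs, so there is nothing new to create. A sketch of the equivalent report-only dry run, with the invocation shape copied from the cephadm sudo lines later in this log; using a bare `cephadm` from PATH is our simplification (the log runs a versioned copy under /var/lib/ceph):

    # Sketch: re-running the batch planner as a dry run against this host's LVs.
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    subprocess.run(
        ["cephadm", "--image", IMAGE, "ceph-volume", "--fsid", FSID, "--",
         "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True)
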
Dec 03 02:09:15 compute-0 systemd[1]: libpod-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope: Deactivated successfully.
Dec 03 02:09:15 compute-0 systemd[1]: libpod-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope: Consumed 1.283s CPU time.
Dec 03 02:09:15 compute-0 podman[434858]: 2025-12-03 02:09:15.724764121 +0000 UTC m=+1.700791649 container died 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 02:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd-merged.mount: Deactivated successfully.
Dec 03 02:09:15 compute-0 podman[434858]: 2025-12-03 02:09:15.813911865 +0000 UTC m=+1.789939353 container remove 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:09:15 compute-0 systemd[1]: libpod-conmon-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope: Deactivated successfully.
Dec 03 02:09:15 compute-0 sudo[434753]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:15 compute-0 sudo[434936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:15 compute-0 sudo[434936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:15 compute-0 sudo[434936]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:16 compute-0 sudo[434961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:09:16 compute-0 sudo[434961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:16 compute-0 sudo[434961]: pam_unix(sudo:session): session closed for user root
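
[Annotation] The recurring `sudo /bin/true` / `sudo which python3` pairs around every cephadm call are host probes: a cheap privilege check, then locating a python3 interpreter to run the cephadm binary with. A sketch of the same probe as an orchestrator might issue it over ssh; the host, user default, and helper name are illustrative:

    # Sketch: privilege + interpreter probe, as seen in the sudo lines above.
    import subprocess

    def probe_host(host, user="ceph-admin"):
        ssh = ["ssh", f"{user}@{host}", "--"]
        subprocess.run([*ssh, "sudo", "true"], check=True)    # may we sudo?
        return subprocess.run([*ssh, "sudo", "which", "python3"],
                              check=True, capture_output=True,
                              text=True).stdout.strip()        # e.g. /usr/bin/python3
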
Dec 03 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.141 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.142 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.143 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:09:16 compute-0 sudo[434986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:16 compute-0 sudo[434986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:16 compute-0 sudo[434986]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:16 compute-0 ceph-mon[192821]: pgmap v1641: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 50 op/s
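
[Annotation] The pgmap lines are the mon/mgr heartbeat of cluster health: 321 PGs, all active+clean, 188 MiB of data in 60 GiB of raw space, plus client IO rates. A sketch that pulls those figures out of one such line; the regex is ours, shaped to this log's format:

    # Sketch: parsing a pgmap status line like the ones above.
    import re

    line = ("pgmap v1641: 321 pgs: 321 active+clean; 188 MiB data, "
            "321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 50 op/s")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+); (.+?) data, "
                  r"(.+?) used, (.+?) / (.+?) avail", line)
    version, pgs, clean, state = m.group(1, 2, 3, 4)  # '1641', '321', '321', 'active+clean'
    used, avail = m.group(6, 8)                       # '321 MiB', '60 GiB'
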
Dec 03 02:09:16 compute-0 sudo[435011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:09:16 compute-0 sudo[435011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.411 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.412 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.412 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:09:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Dec 03 02:09:16 compute-0 podman[435075]: 2025-12-03 02:09:16.949227309 +0000 UTC m=+0.086717166 container create 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:16.917889866 +0000 UTC m=+0.055379803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:09:17 compute-0 systemd[1]: Started libpod-conmon-6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571.scope.
Dec 03 02:09:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.117669999 +0000 UTC m=+0.255159896 container init 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.137329324 +0000 UTC m=+0.274819181 container start 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.142327815 +0000 UTC m=+0.279817702 container attach 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 02:09:17 compute-0 quizzical_brattain[435091]: 167 167
Dec 03 02:09:17 compute-0 systemd[1]: libpod-6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571.scope: Deactivated successfully.
Dec 03 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.152021498 +0000 UTC m=+0.289511395 container died 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c49600c6262a42418154c434d0319c405d050300631ca24df9f2010f33de6477-merged.mount: Deactivated successfully.
Dec 03 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.224956615 +0000 UTC m=+0.362446462 container remove 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:09:17 compute-0 systemd[1]: libpod-conmon-6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571.scope: Deactivated successfully.
Dec 03 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.255 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.592273643 +0000 UTC m=+0.131326585 container create 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.549956169 +0000 UTC m=+0.089009181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:09:17 compute-0 systemd[1]: Started libpod-conmon-60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e.scope.
Dec 03 02:09:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.770775966 +0000 UTC m=+0.309828968 container init 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.789606437 +0000 UTC m=+0.328659349 container start 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.794973448 +0000 UTC m=+0.334026450 container attach 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.794 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.813 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.813 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
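
[Annotation] The info-cache payload nova just wrote is plain JSON: one VIF with a fixed IP and a floating IP hanging off it. A sketch that extracts both, assuming the [{...}] blob from the log is fed in on stdin:

    # Sketch: pull fixed/floating addresses out of the cached network_info.
    import json
    import sys

    vif = json.load(sys.stdin)[0]          # the [{"id": "6b217cd3-...", ...}] payload
    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]]                               # ['192.168.0.85']
    floating = [fl["address"]
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]
                for fl in ip.get("floating_ips", [])]               # ['192.168.122.232']
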
Dec 03 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.814 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.814 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
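
[Annotation] That completes one pass of _heal_instance_info_cache: acquire the per-instance refresh_cache lock, forcefully refresh network info from neutron, write the cache, release. A minimal sketch of the locking pattern using oslo_concurrency; refresh_from_neutron is a stand-in for nova's real fetch, not its API:

    # Sketch: the lock-guarded refresh visible in the nova lines above.
    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid, refresh_from_neutron):
        # Corresponds to the "Acquiring/Acquired/Releasing lock" trio in the log.
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            return refresh_from_neutron(instance_uuid)   # forceful refresh
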
Dec 03 02:09:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:18 compute-0 ceph-mon[192821]: pgmap v1642: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Dec 03 02:09:18 compute-0 nova_compute[351485]: 2025-12-03 02:09:18.275 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:18 compute-0 nova_compute[351485]: 2025-12-03 02:09:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:18 compute-0 condescending_banach[435131]: {
Dec 03 02:09:18 compute-0 condescending_banach[435131]:     "0": [
Dec 03 02:09:18 compute-0 condescending_banach[435131]:         {
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "devices": [
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "/dev/loop3"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             ],
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_name": "ceph_lv0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_size": "21470642176",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "name": "ceph_lv0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "tags": {
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cluster_name": "ceph",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.crush_device_class": "",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.encrypted": "0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osd_id": "0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.type": "block",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.vdo": "0"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             },
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "type": "block",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "vg_name": "ceph_vg0"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:         }
Dec 03 02:09:18 compute-0 condescending_banach[435131]:     ],
Dec 03 02:09:18 compute-0 condescending_banach[435131]:     "1": [
Dec 03 02:09:18 compute-0 condescending_banach[435131]:         {
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "devices": [
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "/dev/loop4"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             ],
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_name": "ceph_lv1",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_size": "21470642176",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "name": "ceph_lv1",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "tags": {
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cluster_name": "ceph",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.crush_device_class": "",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.encrypted": "0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osd_id": "1",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.type": "block",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.vdo": "0"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             },
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "type": "block",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "vg_name": "ceph_vg1"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:         }
Dec 03 02:09:18 compute-0 condescending_banach[435131]:     ],
Dec 03 02:09:18 compute-0 condescending_banach[435131]:     "2": [
Dec 03 02:09:18 compute-0 condescending_banach[435131]:         {
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "devices": [
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "/dev/loop5"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             ],
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_name": "ceph_lv2",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_size": "21470642176",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "name": "ceph_lv2",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "tags": {
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.cluster_name": "ceph",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.crush_device_class": "",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.encrypted": "0",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osd_id": "2",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.type": "block",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:                 "ceph.vdo": "0"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             },
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "type": "block",
Dec 03 02:09:18 compute-0 condescending_banach[435131]:             "vg_name": "ceph_vg2"
Dec 03 02:09:18 compute-0 condescending_banach[435131]:         }
Dec 03 02:09:18 compute-0 condescending_banach[435131]:     ]
Dec 03 02:09:18 compute-0 condescending_banach[435131]: }
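
[Annotation] The JSON above is the full answer to the `lvm list` query launched at 02:09:16: three block LVs, one per OSD id 0-2, each backed by a loop device and tagged with the cluster fsid. A sketch that reduces it to an osd_id -> device map, reading the blob from stdin:

    # Sketch: condense the `ceph-volume lvm list --format json` output printed
    # above into {osd_id: {...}}.
    import json
    import sys

    payload = json.load(sys.stdin)
    osd_map = {
        int(osd_id): {
            "lv_path": lv["lv_path"],                  # /dev/ceph_vgN/ceph_lvN
            "backing": lv["devices"],                  # e.g. ['/dev/loop3']
            "osd_fsid": lv["tags"]["ceph.osd_fsid"],
        }
        for osd_id, lvs in payload.items()
        for lv in lvs
        if lv["type"] == "block"
    }
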
Dec 03 02:09:18 compute-0 systemd[1]: libpod-60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e.scope: Deactivated successfully.
Dec 03 02:09:18 compute-0 podman[435114]: 2025-12-03 02:09:18.647921351 +0000 UTC m=+1.186974263 container died 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff-merged.mount: Deactivated successfully.
Dec 03 02:09:18 compute-0 podman[435114]: 2025-12-03 02:09:18.729841591 +0000 UTC m=+1.268894493 container remove 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:09:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 62 op/s
Dec 03 02:09:18 compute-0 systemd[1]: libpod-conmon-60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e.scope: Deactivated successfully.
Dec 03 02:09:18 compute-0 sudo[435011]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:18 compute-0 sudo[435153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:18 compute-0 sudo[435153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:18 compute-0 sudo[435153]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:19 compute-0 sudo[435178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:09:19 compute-0 sudo[435178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:19 compute-0 sudo[435178]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:19 compute-0 sudo[435203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:19 compute-0 sudo[435203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:19 compute-0 sudo[435203]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:19 compute-0 sudo[435228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:09:19 compute-0 sudo[435228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
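
[Annotation] After `lvm list`, cephadm issues the companion `raw list`, which reports OSDs prepared directly on raw block devices; here it should come back empty, since all three OSDs are LVM-backed. A sketch with the same invocation shape as the sudo line above (bare `cephadm` again our simplification):

    # Sketch: the raw-device counterpart of the lvm query.
    import json
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["cephadm", "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    raw_osds = json.loads(out)                 # expect {} on this host
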
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.508 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.510 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
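
The registration burst above is the polling manager attaching every stevedore-loaded pollster extension to one shared ThreadPoolExecutor, each with its own empty cache, history, and discovery-cache dicts. A minimal sketch of that pattern, assuming the 'ceilometer.poll.compute' entry-point namespace (an illustration, not read from this log):

    # Minimal sketch of the registration pattern logged above: load pollster
    # extensions via stevedore and attach them all to one shared thread pool.
    # The namespace string is an assumption for illustration.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    executor = ThreadPoolExecutor(max_workers=4)
    manager = extension.ExtensionManager(namespace='ceilometer.poll.compute')

    registrations = []
    for ext in manager:
        # One entry per pollster: shared executor plus per-pollster empty
        # dicts, mirroring "cache [{}], pollster history [{}], and
        # discovery cache [{}]" in the lines above.
        registrations.append({'pollster': ext, 'executor': executor,
                              'cache': {}, 'history': {},
                              'discovery_cache': {}})
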
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.523 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.527 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3d670990-5a2a-4334-b8b1-9ae49d171323 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.529 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3d670990-5a2a-4334-b8b1-9ae49d171323 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.899 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Wed, 03 Dec 2025 02:09:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-eba20fba-b015-491f-8f3f-7b8091be8a4a x-openstack-request-id: req-eba20fba-b015-491f-8f3f-7b8091be8a4a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.900 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3d670990-5a2a-4334-b8b1-9ae49d171323", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "774b7995-1f03-43de-ad4e-feac9d5f9136", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/774b7995-1f03-43de-ad4e-feac9d5f9136"}]}, "flavor": {"id": "8fb4324d-1fde-4886-9d66-fedd66b56d0f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8fb4324d-1fde-4886-9d66-fedd66b56d0f"}]}, "created": "2025-12-03T02:09:01Z", "updated": "2025-12-03T02:09:12Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3d670990-5a2a-4334-b8b1-9ae49d171323"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3d670990-5a2a-4334-b8b1-9ae49d171323"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:09:12.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.900 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3d670990-5a2a-4334-b8b1-9ae49d171323 used request id req-eba20fba-b015-491f-8f3f-7b8091be8a4a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.902 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3d670990-5a2a-4334-b8b1-9ae49d171323', 'name': 'fvt_testing_server', 'flavor': {'id': '8fb4324d-1fde-4886-9d66-fedd66b56d0f', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '774b7995-1f03-43de-ad4e-feac9d5f9136'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.906 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
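
Discovery falls back to the Nova API when an instance's metadata is not cached locally; the REQ/RESP pair above is a plain GET /v2.1/servers/{id}, with the auth token logged only as a SHA256 hash. A rough equivalent using keystoneauth1 and python-novaclient, where every credential value is a placeholder rather than a value from this deployment:

    # Sketch of the GET /v2.1/servers/{id} call logged above. All credential
    # values are placeholders; only the server UUID comes from this log.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',
        username='ceilometer', password='secret', project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)
    nova = client.Client('2.1', session=sess)  # X-OpenStack-Nova-API-Version: 2.1

    server = nova.servers.get('3d670990-5a2a-4334-b8b1-9ae49d171323')
    print(server.name, server.status)          # fvt_testing_server ACTIVE
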
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:09:19.907331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
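
Each meter produces this pair of lines: the polling worker (pid 14 here) stamps a heartbeat for the pollster name, and the status worker (pid 12) persists the name with a timestamp. A minimal dict-based sketch of that pattern; the real manager's storage backend is not visible in this log:

    # Minimal sketch of the heartbeat pattern repeated below for every meter:
    # stamp the pollster name, then persist name -> timestamp. The dict store
    # is an assumption for illustration.
    import datetime

    heartbeats = {}

    def heartbeat(name):
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)

    heartbeat('memory.usage')
    print(heartbeats['memory.usage'].isoformat())  # e.g. 2025-12-03T02:09:19.907331+00:00
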
Dec 03 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.958 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:19 compute-0 podman[435289]: 2025-12-03 02:09:19.997427144 +0000 UTC m=+0.068991437 container create b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.008 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.008 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 3d670990-5a2a-4334-b8b1-9ae49d171323: ceilometer.compute.pollsters.NoVolumeException
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.055 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
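
Of the three instances, two report memory.usage in MiB while 3d670990 comes back Unavailable and trips NoVolumeException, plausibly because that guest only launched at 02:09:12 and has not reported balloon statistics yet. A sketch of the underlying libvirt read; deriving usage as available minus unused is an assumption here, not something this log states:

    # Sketch of the libvirt call behind memory.usage. A freshly booted guest
    # may not expose balloon stats yet, which is one way a sample comes back
    # "Unavailable". The usage formula below is an assumption.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b43e79bd-550f-42f8-9aa7-980b6bca3f70')
    stats = dom.memoryStats()                    # values in KiB
    if 'available' in stats and 'unused' in stats:
        usage_mib = (stats['available'] - stats['unused']) / 1024.0
        print(round(usage_mib, 6))               # ~48.953125 in the log above
    else:
        print('Unavailable')                     # balloon stats not reported yet
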
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:09:20.061322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:19.973848109 +0000 UTC m=+0.045412422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.071 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 systemd[1]: Started libpod-conmon-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope.
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.081 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
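
The network.* meters come from per-vNIC counters that libvirt returns as an 8-tuple per interface. A sketch, with the tap device name as a placeholder:

    # Sketch of the per-interface counters behind the network.* pollsters.
    # libvirt returns one 8-tuple per device; the tap name is a placeholder.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b43e79bd-550f-42f8-9aa7-980b6bca3f70')
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats('tap0')
    print(tx_packets)    # -> network.outgoing.packets, 23 in the log above
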
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:09:20.083368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.084 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.086 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.086 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:09:20.085862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:09:20.087611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:09:20.089331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.091 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:09:20.091086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.118 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.119 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.120 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.138 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.139 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.139 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:20.147858696 +0000 UTC m=+0.219423019 container init b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:20.158067694 +0000 UTC m=+0.229631977 container start b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:20.164836094 +0000 UTC m=+0.236400457 container attach b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.172 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 funny_turing[435304]: 167 167
Dec 03 02:09:20 compute-0 systemd[1]: libpod-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope: Deactivated successfully.
Dec 03 02:09:20 compute-0 conmon[435304]: conmon b8bcd24a9198705b0f1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope/container/memory.events
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.176 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.177 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
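
Three capacity samples per instance line up with three attached block devices: two 1073741824-byte (1 GiB) disks, consistent with the flavor's 1 GB root and 1 GB ephemeral disks in the discovery data above, plus a small third device (583680 or 485376 bytes) consistent with the config drive. A sketch of the per-device read; the device names are placeholders:

    # Sketch of the per-device capacity read behind disk.device.capacity.
    # Device names are placeholders; 1073741824 bytes = 1 GiB, matching the
    # flavor's 1 GB root/ephemeral disks in the discovery data above.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b43e79bd-550f-42f8-9aa7-980b6bca3f70')
    for dev in ('vda', 'vdb', 'hda'):
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)
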
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.180 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
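
The *.rate meters need instantaneous rates that the libvirt inspector does not provide (see the "LibvirtInspector does not provide data" DEBUG line above), so the pollster raises PollsterPermanentError and the manager blacklists those resources for this source instead of retrying every interval. A simplified sketch of that contract; the get_samples body is an illustration of the failure path only:

    # Sketch of the blacklisting contract logged above: a pollster that can
    # never succeed for a given resource raises PollsterPermanentError with
    # the failed resources, and the manager stops offering them to it.
    from ceilometer.polling import plugin_base

    def get_samples(manager, cache, resources):
        # No rate data from the libvirt inspector, so fail permanently
        # for these resources (simplified illustration).
        raise plugin_base.PollsterPermanentError(resources)
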
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:09:20.179503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:09:20.183126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 podman[435309]: 2025-12-03 02:09:20.217922531 +0000 UTC m=+0.028834164 container died b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.253 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.254 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.255 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5637fa5fb1b972241f4e134c019fef6ff38b8ba592c2d6590e599272f7b6195a-merged.mount: Deactivated successfully.
Dec 03 02:09:20 compute-0 ceph-mon[192821]: pgmap v1643: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 62 op/s
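
The ceph-mon pgmap line is the cluster's rolling I/O summary: all 321 placement groups active+clean, 188 MiB of data on 60 GiB of raw capacity, and the current read/write rates. A plain-stdlib sketch that pulls the pg state out of such a line; the format assumption is based on this one sample:

    # Sketch: extract pg counts and state from the ceph-mon pgmap line above
    # with a stdlib regex (format assumed from this single sample).
    import re

    line = ('pgmap v1643: 321 pgs: 321 active+clean; 188 MiB data, '
            '321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, '
            '1.4 MiB/s wr, 62 op/s')
    m = re.search(r'(\d+) pgs: (\d+) (\S+);', line)
    if m:
        total, count, state = m.groups()
        print(total, count, state)    # 321 321 active+clean
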
Dec 03 02:09:20 compute-0 podman[435309]: 2025-12-03 02:09:20.272653525 +0000 UTC m=+0.083565148 container remove b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:09:20 compute-0 systemd[1]: libpod-conmon-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope: Deactivated successfully.
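
The podman lines trace one complete short-lived container: image-pull check, create, init, start, attach, died, and remove within roughly a quarter second, with the container printing "167 167" (the ceph uid and gid) and conmon logging a benign warning because the cgroup's memory.events file is already gone at teardown. A `podman run --rm` produces exactly this event chain; the in-container command below is an assumption, since the log records only its output:

    # Sketch of the one-shot container lifecycle traced above. `podman run
    # --rm` yields the create/init/start/attach/died/remove chain. The
    # in-container command is an assumption; the log shows only "167 167".
    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    subprocess.run(['podman', 'run', '--rm', image,
                    'stat', '-c', '%u %g', '/var/lib/ceph'], check=True)
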
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.318 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.319 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.319 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.370 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.370 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.371 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.373 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.374 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.374 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:09:20.374018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.375 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.377 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:09:20.376455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.378 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.378 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.latency volume: 1568219530 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.379 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.379 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.latency volume: 10891607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.380 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.380 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.380 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:09:20.382730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.383 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.383 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.384 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.384 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.385 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.385 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.385 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.386 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.386 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
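The per-device read volumes above come straight from libvirt's domain block counters. A minimal sketch of reading the same counters, assuming the libvirt-python bindings and a local qemu:///system connection (both assumptions, not taken from this log):

    # Read per-device block counters the way a disk.device.read.* pollster would.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        # Device names (vda, sda, ...) come from the domain XML.
        tree = ET.fromstring(dom.XMLDesc())
        for target in tree.findall("./devices/disk/target"):
            dev = target.get("dev")
            # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
            rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            print(f"{dom.UUIDString()}/{dev} read.requests={rd_req} read.bytes={rd_bytes}")
    conn.close()

Each instance in this log reports three devices, which is why every disk pollster emits three samples per UUID.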
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.389 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.389 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:09:20.389029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.390 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.390 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
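power.state volume 1 matches libvirt's VIR_DOMAIN_RUNNING, so all three instances are running. A sketch of the underlying state lookup, under the same libvirt assumptions as above:

    # dom.state() returns [state, reason]; power.state samples carry the state enum.
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_NOSTATE: "nostate",   # 0
        libvirt.VIR_DOMAIN_RUNNING: "running",   # 1
        libvirt.VIR_DOMAIN_PAUSED:  "paused",    # 3
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",   # 5
    }

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        state, reason = dom.state()
        print(dom.UUIDString(), state, STATE_NAMES.get(state, "other"))
    conn.close()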
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:09:20.391969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.393 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.393 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.394 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.394 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.395 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.395 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.395 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
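The repeated 1073741824 values are exactly 2**30 bytes, i.e. two 1 GiB virtual disks per instance, while the third, much smaller value (583680 or 485376 bytes) is likely a config-drive-sized device. libvirt exposes these sizes through blockInfo(); a sketch under the same assumptions as above ("vda" is a placeholder device name):

    # dom.blockInfo(dev) returns [capacity, allocation, physical] in bytes.
    # 1073741824 == 2**30, i.e. a 1 GiB virtual disk. "vda" is an assumption.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.listAllDomains()[0]
    capacity, allocation, physical = dom.blockInfo("vda")
    print(f"capacity={capacity} ({capacity / 2**30:.1f} GiB) "
          f"allocation={allocation} physical={physical}")
    conn.close()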
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:09:20.397596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
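Every cycle logs a coordination check that resolves to "not configured in a source for polling that requires coordination", so this agent simply polls everything it discovers locally. When a coordination group is configured, agents instead partition resources over a consistent hash ring (ceilometer uses tooz for this) so that exactly one agent owns each instance. A self-contained toy version of that membership test, standing in for the tooz-backed hashring rather than reproducing ceilometer's code:

    # Toy consistent-hash ring: an agent polls only resources that map to itself.
    import bisect
    import hashlib

    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, members, replicas=40):
            self._ring = sorted(
                (_hash(f"{member}-{i}"), member)
                for member in members for i in range(replicas)
            )
            self._keys = [h for h, _ in self._ring]

        def owner(self, resource_id: str) -> str:
            idx = bisect.bisect(self._keys, _hash(resource_id)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(["compute-0", "compute-1"])
    for uuid in ("b43e79bd-550f-42f8-9aa7-980b6bca3f70",
                 "3d670990-5a2a-4334-b8b1-9ae49d171323",
                 "9182286b-5a08-4961-b4bb-c0e2f05746f7"):
        mine = ring.owner(uuid) == "compute-0"
        print(uuid, "polled here" if mine else "skipped")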
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.401 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.401 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:09:20.400829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.403 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:09:20.404246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.406 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.406 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.406 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:09:20.407598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
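Only two of the three instances emit network samples in this cycle (3d670990... reports no interface), and the counters come from libvirt's per-tap-device statistics. A sketch under the same assumptions as the disk examples:

    # dom.interfaceStats(dev) returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                                  tx_bytes, tx_packets, tx_errs, tx_drop).
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        tree = ET.fromstring(dom.XMLDesc())
        for target in tree.findall("./devices/interface/target"):
            dev = target.get("dev")          # host-side tap device name
            stats = dom.interfaceStats(dev)
            print(f"{dom.UUIDString()}/{dev} incoming.packets={stats[1]} "
                  f"outgoing.bytes={stats[4]}")
    conn.close()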
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 43250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/cpu volume: 6840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 46700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
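The cpu volumes are cumulative guest CPU time in nanoseconds (libvirt's dom.info()[4]); a rate has to be derived by differencing two polls. A worked example with illustrative numbers (the earlier reading below is hypothetical, not from this log):

    # cpu volume is cumulative CPU time in ns; utilisation is the first
    # difference normalised by wall time and vCPU count. Numbers illustrative.
    def cpu_util_pct(prev_ns: int, curr_ns: int, interval_s: float, vcpus: int) -> float:
        return 100.0 * (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    # If 9182286b... had reported 46_000_000_000 ns one 60 s poll earlier:
    print(cpu_util_pct(46_000_000_000, 46_700_000_000, 60.0, 1))  # ~1.17 %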
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:09:20.409090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:09:20.410968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:09:20.412400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:09:20.413973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
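Note the interleaving around this cycle: worker thread 14 logs the "Pollster heartbeat update" and the samples, while thread 12 logs "Updated heartbeat for ..." slightly later, sometimes for pollsters (cpu, disk.ephemeral.size, network.outgoing.bytes) that already finished. That is the signature of a producer/consumer pair: workers enqueue heartbeat names and a dedicated thread timestamps them. A minimal sketch of the pattern (illustrative, not ceilometer's actual implementation):

    # Producer/consumer heartbeat pattern suggested by the interleaved
    # thread IDs (14 = pollster worker, 12 = heartbeat updater).
    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()
    _STOP = object()

    def heartbeat_updater():
        while True:
            name = heartbeats.get()
            if name is _STOP:
                break
            ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
            print(f"Updated heartbeat for {name} ({ts})")

    updater = threading.Thread(target=heartbeat_updater)
    updater.start()
    for pollster in ("cpu", "disk.ephemeral.size", "disk.device.allocation"):
        heartbeats.put(pollster)   # what the worker does before sampling
    heartbeats.put(_STOP)
    updater.join()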
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.417 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.417 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.417 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
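The .delta meters read 0 here because the cumulative byte counters did not advance between polls; a delta meter is simply the cumulative value minus a cached previous reading, reset-aware. Sketch:

    # A delta meter: cumulative counter minus the cached previous reading,
    # clamped to 0 on first poll or counter reset. Cache keyed per resource.
    _previous: dict[tuple[str, str], int] = {}

    def delta(instance: str, meter: str, cumulative: int) -> int:
        key = (instance, meter)
        prev = _previous.get(key)
        _previous[key] = cumulative
        if prev is None or cumulative < prev:
            return 0
        return cumulative - prev

    print(delta("b43e79bd", "network.outgoing.bytes", 2398))  # 0 (first poll)
    print(delta("b43e79bd", "network.outgoing.bytes", 2398))  # 0 (unchanged)
    print(delta("b43e79bd", "network.outgoing.bytes", 3000))  # 602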
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
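This ERROR is ceilometer's permanent-blacklist path, not a crash: LibvirtInspector exposes no precomputed *.rate data, so the pollster raises PollsterPermanentError with the affected resources and the manager stops offering them to that pollster (hence "anymore!"). A hedged sketch of the raising side; only the exception name mirrors ceilometer.polling.plugin_base, the rest is a stand-in:

    # Stand-in sketch of permanent resource blacklisting. Only the exception
    # name mirrors ceilometer.polling.plugin_base; the rest is illustrative.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    class OutgoingBytesRatePollster:
        def get_samples(self, inspector, resources):
            if not getattr(inspector, "provides_rates", False):
                # The manager catches this and never retries these resources
                # with this pollster on this source.
                raise PollsterPermanentError(resources)
            yield from inspector.outgoing_bytes_rate(resources)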
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:09:20.416998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:09:20.418570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:09:20.419957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:09:20.421279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.565967046 +0000 UTC m=+0.093306912 container create f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.534086477 +0000 UTC m=+0.061426373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:09:20 compute-0 systemd[1]: Started libpod-conmon-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope.
Dec 03 02:09:20 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:09:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Dec 03 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.772948232 +0000 UTC m=+0.300288148 container init f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.795277932 +0000 UTC m=+0.322617828 container start f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.802488395 +0000 UTC m=+0.329828341 container attach f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]: {
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "osd_id": 2,
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "type": "bluestore"
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:     },
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "osd_id": 1,
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "type": "bluestore"
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:     },
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "osd_id": 0,
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:         "type": "bluestore"
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]:     }
Dec 03 02:09:21 compute-0 heuristic_lederberg[435345]: }
Dec 03 02:09:21 compute-0 podman[435329]: 2025-12-03 02:09:21.972660013 +0000 UTC m=+1.499999869 container died f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:09:21 compute-0 systemd[1]: libpod-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope: Deactivated successfully.
Dec 03 02:09:21 compute-0 systemd[1]: libpod-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope: Consumed 1.162s CPU time.
Dec 03 02:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45-merged.mount: Deactivated successfully.
Dec 03 02:09:22 compute-0 podman[435329]: 2025-12-03 02:09:22.068884126 +0000 UTC m=+1.596223992 container remove f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 02:09:22 compute-0 systemd[1]: libpod-conmon-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope: Deactivated successfully.
Dec 03 02:09:22 compute-0 sudo[435228]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:09:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:09:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:09:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:09:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d55f476f-1861-49a3-b4f5-cfac63a65b33 does not exist
Dec 03 02:09:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 49f1e839-03f1-4229-8623-7edbed5adc39 does not exist
Dec 03 02:09:22 compute-0 nova_compute[351485]: 2025-12-03 02:09:22.258 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:22 compute-0 ceph-mon[192821]: pgmap v1644: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Dec 03 02:09:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:09:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:09:22 compute-0 sudo[435389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:09:22 compute-0 sudo[435389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:22 compute-0 sudo[435389]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:22 compute-0 sudo[435414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:09:22 compute-0 sudo[435414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:09:22 compute-0 sudo[435414]: pam_unix(sudo:session): session closed for user root
Dec 03 02:09:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 465 KiB/s wr, 82 op/s
Dec 03 02:09:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:23 compute-0 nova_compute[351485]: 2025-12-03 02:09:23.279 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:23 compute-0 ceph-mon[192821]: pgmap v1645: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 465 KiB/s wr, 82 op/s
Dec 03 02:09:23 compute-0 nova_compute[351485]: 2025-12-03 02:09:23.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:09:23 compute-0 nova_compute[351485]: 2025-12-03 02:09:23.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:09:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 59 op/s
Dec 03 02:09:25 compute-0 ceph-mon[192821]: pgmap v1646: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 59 op/s
Dec 03 02:09:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 53 op/s
Dec 03 02:09:27 compute-0 nova_compute[351485]: 2025-12-03 02:09:27.262 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:27 compute-0 podman[435440]: 2025-12-03 02:09:27.868758374 +0000 UTC m=+0.110306102 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 02:09:27 compute-0 podman[435441]: 2025-12-03 02:09:27.868324851 +0000 UTC m=+0.115883218 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:09:27 compute-0 podman[435439]: 2025-12-03 02:09:27.876926204 +0000 UTC m=+0.127969770 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:09:28 compute-0 ceph-mon[192821]: pgmap v1647: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 53 op/s
Dec 03 02:09:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:28 compute-0 nova_compute[351485]: 2025-12-03 02:09:28.289 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:09:28
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images']
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:09:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 34 op/s
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:09:29 compute-0 podman[158098]: time="2025-12-03T02:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:09:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:09:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec 03 02:09:30 compute-0 ceph-mon[192821]: pgmap v1648: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 34 op/s
Dec 03 02:09:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 34 op/s
Dec 03 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:09:32 compute-0 ceph-mon[192821]: pgmap v1649: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 34 op/s
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.268 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.407 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "3d670990-5a2a-4334-b8b1-9ae49d171323" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.411 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.412 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "3d670990-5a2a-4334-b8b1-9ae49d171323-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.413 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.414 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.418 351492 INFO nova.compute.manager [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Terminating instance
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.420 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-3d670990-5a2a-4334-b8b1-9ae49d171323" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.421 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-3d670990-5a2a-4334-b8b1-9ae49d171323" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.422 351492 DEBUG nova.network.neutron [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:09:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 1 op/s
Dec 03 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.816 351492 DEBUG nova.network.neutron [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:09:32 compute-0 podman[435501]: 2025-12-03 02:09:32.886067095 +0000 UTC m=+0.130752928 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:09:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.191 351492 DEBUG nova.network.neutron [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.209 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-3d670990-5a2a-4334-b8b1-9ae49d171323" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.211 351492 DEBUG nova.compute.manager [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.292 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:33 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 03 02:09:33 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 22.170s CPU time.
Dec 03 02:09:33 compute-0 systemd-machined[138558]: Machine qemu-5-instance-00000005 terminated.
Dec 03 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.448 351492 INFO nova.virt.libvirt.driver [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance destroyed successfully.
Dec 03 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.449 351492 DEBUG nova.objects.instance [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 3d670990-5a2a-4334-b8b1-9ae49d171323 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:09:34 compute-0 ceph-mon[192821]: pgmap v1650: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 1 op/s
Dec 03 02:09:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 173 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 3 op/s
Dec 03 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.800 351492 INFO nova.virt.libvirt.driver [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deleting instance files /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323_del
Dec 03 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.802 351492 INFO nova.virt.libvirt.driver [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deletion of /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323_del complete
Dec 03 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.873 351492 INFO nova.compute.manager [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 1.66 seconds to destroy the instance on the hypervisor.
Dec 03 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.874 351492 DEBUG oslo.service.loopingcall [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.875 351492 DEBUG nova.compute.manager [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.876 351492 DEBUG nova.network.neutron [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.794 351492 DEBUG nova.network.neutron [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.807 351492 DEBUG nova.network.neutron [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.822 351492 INFO nova.compute.manager [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 0.95 seconds to deallocate network for instance.
Dec 03 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.897 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.898 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.035 351492 DEBUG oslo_concurrency.processutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:09:36 compute-0 ceph-mon[192821]: pgmap v1651: 321 pgs: 321 active+clean; 173 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 3 op/s
Dec 03 02:09:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:09:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619494202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.549 351492 DEBUG oslo_concurrency.processutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.566 351492 DEBUG nova.compute.provider_tree [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.607 351492 DEBUG nova.scheduler.client.report [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.640 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.669 351492 INFO nova.scheduler.client.report [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 3d670990-5a2a-4334-b8b1-9ae49d171323
Dec 03 02:09:36 compute-0 sshd-session[435543]: Received disconnect from 154.113.10.113 port 49812:11: Bye Bye [preauth]
Dec 03 02:09:36 compute-0 sshd-session[435543]: Disconnected from authenticating user root 154.113.10.113 port 49812 [preauth]
Dec 03 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.754 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 157 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 32 op/s
Dec 03 02:09:37 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1619494202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:09:37 compute-0 nova_compute[351485]: 2025-12-03 02:09:37.271 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:37 compute-0 podman[435569]: 2025-12-03 02:09:37.871180278 +0000 UTC m=+0.100729462 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:09:37 compute-0 podman[435570]: 2025-12-03 02:09:37.902069489 +0000 UTC m=+0.125812879 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, distribution-scope=public, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec 03 02:09:37 compute-0 podman[435576]: 2025-12-03 02:09:37.906883294 +0000 UTC m=+0.128221186 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 03 02:09:37 compute-0 podman[435567]: 2025-12-03 02:09:37.906981407 +0000 UTC m=+0.156514324 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 03 02:09:37 compute-0 podman[435568]: 2025-12-03 02:09:37.913876012 +0000 UTC m=+0.152887873 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:09:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 03 02:09:38 compute-0 ceph-mon[192821]: pgmap v1652: 321 pgs: 321 active+clean; 157 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 32 op/s
Dec 03 02:09:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 03 02:09:38 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 03 02:09:38 compute-0 nova_compute[351485]: 2025-12-03 02:09:38.297 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011045070041349222 of space, bias 1.0, pg target 0.33135210124047665 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
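The pg_autoscaler figures above reproduce exactly from the logged inputs. A sketch of the arithmetic, under assumed defaults: 3 OSDs at mon_target_pg_per_osd=100 give a 300-PG budget per root, the raw target is usage_ratio x bias x budget, the result is rounded to a power of two, and each pool is floored at its pg_num_min (assumed here: 32 by default, 16 for the cephfs metadata pool, 1 for .mgr):

    import math

    # raw target = share of space used * bias * (OSD count * target PGs per OSD)
    def pg_target(usage_ratio, bias, pg_num_min=32, budget=3 * 100):
        raw = usage_ratio * bias * budget
        pow2 = 2 ** int(round(math.log2(raw))) if raw >= 1 else 1
        return raw, max(pg_num_min, pow2)

    print(pg_target(0.0011045070041349222, 1.0))      # 'vms'                -> (0.33135..., 32)
    print(pg_target(5.087256625643029e-07, 4.0, 16))  # 'cephfs.cephfs.meta' -> (0.00061..., 16)
    print(pg_target(7.185749983720779e-06, 1.0, 1))   # '.mgr'               -> (0.00215..., 1)

The autoscaler only acts when the quantized target and the current pg_num differ by a large factor (3x by default), which is why 'cephfs.cephfs.meta' stays at 32 despite a computed target of 16.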
Dec 03 02:09:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 157 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 39 op/s
Dec 03 02:09:39 compute-0 ceph-mon[192821]: osdmap e129: 3 total, 3 up, 3 in
Dec 03 02:09:40 compute-0 ceph-mon[192821]: pgmap v1654: 321 pgs: 321 active+clean; 157 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 39 op/s
Dec 03 02:09:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 147 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.9 KiB/s wr, 53 op/s
Dec 03 02:09:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:09:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7458 writes, 29K keys, 7458 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7458 writes, 1633 syncs, 4.57 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 915 writes, 2916 keys, 915 commit groups, 1.0 writes per commit group, ingest: 2.84 MB, 0.00 MB/s
                                            Interval WAL: 915 writes, 383 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
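The derived figures in the dump are plain ratios of the cumulative counters; a quick check with the constants copied from the stats above:

    writes, syncs = 7458, 1633
    ingest_gb, uptime_s = 0.02, 3000.1

    print(round(writes / syncs, 2))               # 4.57 writes per sync
    print(round(ingest_gb * 1024 / uptime_s, 2))  # ~0.01 MB/s cumulative ingest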
Dec 03 02:09:42 compute-0 nova_compute[351485]: 2025-12-03 02:09:42.275 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:42 compute-0 ceph-mon[192821]: pgmap v1655: 321 pgs: 321 active+clean; 147 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.9 KiB/s wr, 53 op/s
Dec 03 02:09:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Dec 03 02:09:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:43 compute-0 ceph-mon[192821]: pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Dec 03 02:09:43 compute-0 nova_compute[351485]: 2025-12-03 02:09:43.301 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 67 op/s
Dec 03 02:09:45 compute-0 ceph-mon[192821]: pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 67 op/s
Dec 03 02:09:46 compute-0 sshd-session[433546]: Received disconnect from 38.102.83.18 port 47202:11: disconnected by user
Dec 03 02:09:46 compute-0 sshd-session[433546]: Disconnected from user zuul 38.102.83.18 port 47202
Dec 03 02:09:46 compute-0 sshd-session[433543]: pam_unix(sshd:session): session closed for user zuul
Dec 03 02:09:46 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Dec 03 02:09:46 compute-0 systemd[1]: session-61.scope: Consumed 1.369s CPU time.
Dec 03 02:09:46 compute-0 systemd-logind[800]: Session 61 logged out. Waiting for processes to exit.
Dec 03 02:09:46 compute-0 systemd-logind[800]: Removed session 61.
Dec 03 02:09:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 03 02:09:46 compute-0 sshd-session[435670]: Connection closed by 45.78.219.140 port 33812 [preauth]
Dec 03 02:09:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:09:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192582471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:09:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:09:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192582471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:09:47 compute-0 nova_compute[351485]: 2025-12-03 02:09:47.278 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:47 compute-0 ceph-mon[192821]: pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 03 02:09:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1192582471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:09:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1192582471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
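The two audited mon_commands are a storage-capacity poll from the client.openstack identity: a cluster df plus a per-pool quota lookup. The same pair can be issued with the ceph CLI; a sketch assuming the client.openstack keyring is readable and /etc/ceph/ceph.conf points at this cluster (JSON field names as in recent Ceph releases):

    import json
    import subprocess

    def mon_cmd(*args):
        # Mirrors the dispatched commands: {"prefix": ..., "format": "json"}
        out = subprocess.check_output(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             "--format", "json", *args])
        return json.loads(out)

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])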
Dec 03 02:09:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 03 02:09:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 03 02:09:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 03 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.445 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727773.4440382, 3d670990-5a2a-4334-b8b1-9ae49d171323 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.446 351492 INFO nova.compute.manager [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] VM Stopped (Lifecycle Event)
Dec 03 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.475 351492 DEBUG nova.compute.manager [None req-a06ff9b7-ff3e-4a68-b855-8c39c26a77d2 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:09:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:09:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8945 writes, 34K keys, 8945 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8945 writes, 2107 syncs, 4.25 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1124 writes, 3389 keys, 1124 commit groups, 1.0 writes per commit group, ingest: 2.51 MB, 0.00 MB/s
                                            Interval WAL: 1124 writes, 495 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:09:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 03 02:09:49 compute-0 ceph-mon[192821]: osdmap e130: 3 total, 3 up, 3 in
Dec 03 02:09:50 compute-0 ceph-mon[192821]: pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 03 02:09:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 818 B/s wr, 18 op/s
Dec 03 02:09:52 compute-0 ceph-mon[192821]: pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 818 B/s wr, 18 op/s
Dec 03 02:09:52 compute-0 nova_compute[351485]: 2025-12-03 02:09:52.282 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s
Dec 03 02:09:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:53 compute-0 nova_compute[351485]: 2025-12-03 02:09:53.307 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 03 02:09:54 compute-0 ceph-mon[192821]: pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s
Dec 03 02:09:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 03 02:09:54 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 03 02:09:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 147 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 966 KiB/s wr, 12 op/s
Dec 03 02:09:55 compute-0 ceph-mon[192821]: osdmap e131: 3 total, 3 up, 3 in
Dec 03 02:09:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:09:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7002 writes, 28K keys, 7002 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7002 writes, 1484 syncs, 4.72 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 615 writes, 1901 keys, 615 commit groups, 1.0 writes per commit group, ingest: 1.36 MB, 0.00 MB/s
                                            Interval WAL: 615 writes, 283 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:09:56 compute-0 ceph-mon[192821]: pgmap v1664: 321 pgs: 321 active+clean; 147 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 966 KiB/s wr, 12 op/s
Dec 03 02:09:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Dec 03 02:09:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 02:09:57 compute-0 nova_compute[351485]: 2025-12-03 02:09:57.286 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:09:58 compute-0 ceph-mon[192821]: pgmap v1665: 321 pgs: 321 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Dec 03 02:09:58 compute-0 nova_compute[351485]: 2025-12-03 02:09:58.311 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:09:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec 03 02:09:58 compute-0 podman[435674]: 2025-12-03 02:09:58.884914896 +0000 UTC m=+0.118683657 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 03 02:09:58 compute-0 podman[435675]: 2025-12-03 02:09:58.890686559 +0000 UTC m=+0.116566758 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:09:58 compute-0 podman[435673]: 2025-12-03 02:09:58.90206365 +0000 UTC m=+0.141440410 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:09:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:09:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:09:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:09:59.639 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:09:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:09:59.640 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:09:59 compute-0 podman[158098]: time="2025-12-03T02:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:09:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:09:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
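Those two GETs are the podman_exporter scraping the libpod REST API through the socket it has mounted (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data above). The same listing can be pulled by hand; a sketch using curl's unix-socket support (the "d" hostname is an arbitrary placeholder, since curl requires one):

    import json
    import subprocess

    # Same endpoint as the logged request, addressed over the podman socket.
    raw = subprocess.check_output(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"])
    for c in json.loads(raw):
        print(c["Names"][0], c["State"])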
Dec 03 02:10:00 compute-0 ceph-mon[192821]: pgmap v1666: 321 pgs: 321 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec 03 02:10:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Dec 03 02:10:01 compute-0 ceph-mon[192821]: pgmap v1667: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Dec 03 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
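The exporter errors above are expected on a compute node: ovn-northd and the OVN database servers run on the control plane, so none of their control sockets exist under the run directories this container mounts, and the dpif-netdev calls fail because no userspace (PMD) datapath is configured here. A sketch of the lookup that is failing, assuming appctl-style sockets named <daemon>.<pid>.ctl under the run directory:

    import glob

    # ovs-appctl/ovn-appctl find a daemon via its pidfile-named control socket,
    # e.g. /run/ovn/ovn-northd.<pid>.ctl; on this node the glob comes back empty.
    if not glob.glob("/run/ovn/ovn-northd.*.ctl"):
        print("no control socket files found for ovn-northd")  # as in the log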
Dec 03 02:10:02 compute-0 nova_compute[351485]: 2025-12-03 02:10:02.291 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 03 02:10:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 03 02:10:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 03 02:10:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 962 KiB/s wr, 9 op/s
Dec 03 02:10:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:03 compute-0 nova_compute[351485]: 2025-12-03 02:10:03.313 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:03 compute-0 ceph-mon[192821]: osdmap e132: 3 total, 3 up, 3 in
Dec 03 02:10:03 compute-0 ceph-mon[192821]: pgmap v1669: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 962 KiB/s wr, 9 op/s
Dec 03 02:10:03 compute-0 podman[435731]: 2025-12-03 02:10:03.881866422 +0000 UTC m=+0.130159452 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 02:10:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 821 KiB/s wr, 26 op/s
Dec 03 02:10:05 compute-0 ceph-mon[192821]: pgmap v1670: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 821 KiB/s wr, 26 op/s
Dec 03 02:10:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Dec 03 02:10:07 compute-0 nova_compute[351485]: 2025-12-03 02:10:07.295 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:07 compute-0 ceph-mon[192821]: pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Dec 03 02:10:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 03 02:10:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 03 02:10:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 03 02:10:08 compute-0 nova_compute[351485]: 2025-12-03 02:10:08.317 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:08 compute-0 nova_compute[351485]: 2025-12-03 02:10:08.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 03 02:10:08 compute-0 podman[435749]: 2025-12-03 02:10:08.873095416 +0000 UTC m=+0.113261065 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, name=ubi9-minimal, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Dec 03 02:10:08 compute-0 podman[435750]: 2025-12-03 02:10:08.889006875 +0000 UTC m=+0.118678918 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:10:08 compute-0 podman[435751]: 2025-12-03 02:10:08.895849418 +0000 UTC m=+0.125599433 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, io.openshift.expose-services=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=edpm, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 03 02:10:08 compute-0 podman[435758]: 2025-12-03 02:10:08.908294899 +0000 UTC m=+0.117088473 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:10:08 compute-0 podman[435748]: 2025-12-03 02:10:08.908519725 +0000 UTC m=+0.153715246 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true)
Dec 03 02:10:09 compute-0 ceph-mon[192821]: osdmap e133: 3 total, 3 up, 3 in
Dec 03 02:10:10 compute-0 ceph-mon[192821]: pgmap v1673: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 03 02:10:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Dec 03 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.609 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.609 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.610 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:10:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:10:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984081273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.132 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:10:12 compute-0 ceph-mon[192821]: pgmap v1674: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Dec 03 02:10:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/984081273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.270 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.271 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.272 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.279 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.280 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.281 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.300 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:12 compute-0 sshd-session[435873]: Accepted publickey for zuul from 38.102.83.18 port 60528 ssh2: RSA SHA256:NqevRhMCntWIOoTdK6+DV077scp/CQGou+r/H3um4YU
Dec 03 02:10:12 compute-0 systemd-logind[800]: New session 62 of user zuul.
Dec 03 02:10:12 compute-0 systemd[1]: Started Session 62 of User zuul.
Dec 03 02:10:12 compute-0 sshd-session[435873]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 02:10:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.832 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.834 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.834 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.835 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.954 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.955 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.955 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.956 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.982 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.012 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.013 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
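The inventory pushed above is what placement schedules against: capacity per resource class is (total - reserved) x allocation_ratio. Plugging in the logged values (a quick check, not nova code):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

Against that, the two instances recorded earlier in this cycle consume only 2 VCPU, 1024 MB and 4 GB between them, consistent with the "Total usable vcpus: 8, total allocated vcpus: 2" line above.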
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.036 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.057 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.120 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:10:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.321 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:13 compute-0 sudo[436070]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxeuwgmiuekupnmvcyhxhowqsdxbnfvz ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764727812.6418128-61566-97088969207591/AnsiballZ_command.py'
Dec 03 02:10:13 compute-0 sudo[436070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:10:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:10:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117898108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
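The ceph df call that processutils timed above (0.506s) can be reproduced and parsed directly; a rough equivalent in Python, assuming the same client id and conf path are usable from the caller (JSON key names can shift between Ceph releases):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)
    # 'stats' carries cluster totals, 'pools' the per-pool usage.
    print(df.get('stats', {}).get('total_avail_bytes'))
    for pool in df.get('pools', []):
        print(pool.get('name'), pool.get('stats', {}).get('max_avail'))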
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.634 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.652 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.688 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.689 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
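The "compute_resources" lock held for 0.854s above is oslo.concurrency's named-lock pattern; a minimal sketch of the decorator form, assuming oslo.concurrency is installed (the 'nova-' prefix matches the convention nova uses for its lock names):

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def update_available_resource():
        # body runs only while holding the same named lock the
        # resource tracker serializes on above
        pass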
Dec 03 02:10:13 compute-0 python3[436072]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
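Restated in Python, the check the Ansible task above performs is: list every container's name and status, then keep the node_exporter line (the grep); the podman subcommand and Go template are exactly as logged:

    import subprocess

    out = subprocess.run(
        ['podman', 'ps', '-a', '--format', '{{.Names}} {{.Status}}'],
        check=True, capture_output=True, text=True,
    ).stdout
    print([l for l in out.splitlines() if 'node_exporter' in l])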
Dec 03 02:10:13 compute-0 sudo[436070]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:14 compute-0 ceph-mon[192821]: pgmap v1675: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 03 02:10:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4117898108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:10:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 818 B/s wr, 6 op/s
Dec 03 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.690 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.691 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.739 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.739 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.740 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
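The periodic tasks being run above come from oslo.service's periodic_task machinery; a minimal sketch of how such a task is declared, assuming oslo.service is installed (the spacing value is illustrative, not nova's actual interval):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            # invoked by run_periodic_tasks on each tick
            pass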
Dec 03 02:10:16 compute-0 ceph-mon[192821]: pgmap v1676: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 818 B/s wr, 6 op/s
Dec 03 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.321 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.322 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.322 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.323 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:10:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:17 compute-0 nova_compute[351485]: 2025-12-03 02:10:17.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:18 compute-0 ceph-mon[192821]: pgmap v1677: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.324 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.900 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
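The network_info blob above nests subnets under each VIF and floating IPs under each fixed IP; a short walk that recovers the addresses, with the literal trimmed to just the fields used:

    network_info = [{
        "network": {"subnets": [{
            "ips": [{
                "address": "192.168.0.5",
                "floating_ips": [{"address": "192.168.122.241"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", fips)
    # 192.168.0.5 -> ['192.168.122.241']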
Dec 03 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.926 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.926 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.928 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.929 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:19 compute-0 nova_compute[351485]: 2025-12-03 02:10:19.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:20 compute-0 ceph-mon[192821]: pgmap v1678: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:22 compute-0 sudo[436285]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gosinxvqcxjdxdschiradxvgardwgvax ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764727821.2463117-61730-3217414405420/AnsiballZ_command.py'
Dec 03 02:10:22 compute-0 sudo[436285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:10:22 compute-0 ceph-mon[192821]: pgmap v1679: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:22 compute-0 nova_compute[351485]: 2025-12-03 02:10:22.309 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:22 compute-0 python3[436287]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 02:10:22 compute-0 sudo[436285]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:22 compute-0 sudo[436302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:22 compute-0 sudo[436302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:22 compute-0 sudo[436302]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:22 compute-0 sudo[436348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:10:22 compute-0 sudo[436348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:22 compute-0 sudo[436348]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:22 compute-0 sudo[436376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:22 compute-0 sudo[436376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:22 compute-0 sudo[436376]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:22 compute-0 sudo[436401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:10:23 compute-0 sudo[436401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:23 compute-0 nova_compute[351485]: 2025-12-03 02:10:23.326 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:23 compute-0 sudo[436401]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:10:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ea1c1dc8-1d45-4e20-b1c5-42071807000f does not exist
Dec 03 02:10:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3cd007d3-7655-499c-8629-6363a4042eff does not exist
Dec 03 02:10:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 00b75e47-c8ea-4b4c-801c-6284b3d57c16 does not exist
Dec 03 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
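The mon_command traffic above (osd tree, auth get, config generate-minimal-conf) uses the same interface the rados Python binding exposes; a sketch of issuing one of the logged commands, assuming python3-rados, a readable keyring, and a client whose mon caps permit the command:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd tree", "states": ["destroyed"],
                    "format": "json"}), b'')
    print(ret, out[:200])
    cluster.shutdown()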
Dec 03 02:10:23 compute-0 sudo[436458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:23 compute-0 sudo[436458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:23 compute-0 sudo[436458]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:24 compute-0 sudo[436483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:10:24 compute-0 sudo[436483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:24 compute-0 sudo[436483]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:24 compute-0 sudo[436508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:24 compute-0 sudo[436508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:24 compute-0 sudo[436508]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:24 compute-0 sudo[436533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:10:24 compute-0 sudo[436533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:24 compute-0 ceph-mon[192821]: pgmap v1680: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:10:24 compute-0 nova_compute[351485]: 2025-12-03 02:10:24.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:10:24 compute-0 nova_compute[351485]: 2025-12-03 02:10:24.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:10:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:24 compute-0 podman[436598]: 2025-12-03 02:10:24.932784014 +0000 UTC m=+0.081603102 container create ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:10:24 compute-0 podman[436598]: 2025-12-03 02:10:24.908203831 +0000 UTC m=+0.057022969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:10:25 compute-0 systemd[1]: Started libpod-conmon-ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2.scope.
Dec 03 02:10:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.102108679 +0000 UTC m=+0.250927857 container init ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.115326211 +0000 UTC m=+0.264145299 container start ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.120511997 +0000 UTC m=+0.269331085 container attach ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:10:25 compute-0 stupefied_maxwell[436613]: 167 167
Dec 03 02:10:25 compute-0 systemd[1]: libpod-ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2.scope: Deactivated successfully.
Dec 03 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.128474792 +0000 UTC m=+0.277293910 container died ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e62f0949fa927f2019099c22dfe28512a9d5ff7fc40b804609ff0177511bb14-merged.mount: Deactivated successfully.
Dec 03 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.216801883 +0000 UTC m=+0.365621001 container remove ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:10:25 compute-0 systemd[1]: libpod-conmon-ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2.scope: Deactivated successfully.
Dec 03 02:10:25 compute-0 ceph-mon[192821]: pgmap v1681: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.515278929 +0000 UTC m=+0.097778878 container create 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.473691987 +0000 UTC m=+0.056191996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:10:25 compute-0 systemd[1]: Started libpod-conmon-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope.
Dec 03 02:10:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.734233234 +0000 UTC m=+0.316733233 container init 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.760648569 +0000 UTC m=+0.343148528 container start 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.767752569 +0000 UTC m=+0.350252518 container attach 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:10:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:27 compute-0 pedantic_goldstine[436651]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:10:27 compute-0 pedantic_goldstine[436651]: --> relative data size: 1.0
Dec 03 02:10:27 compute-0 pedantic_goldstine[436651]: --> All data devices are unavailable
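"All data devices are unavailable" here reads as ceph-volume's idempotency check: the three LVs named in the batch command already carry ceph lv_tags from an earlier prepare (the lvm list output further down shows them), so batch has nothing left to create. One way to inspect those tags from the host, assuming the LVM tools are on PATH:

    import subprocess

    out = subprocess.run(
        ['lvs', '-o', 'lv_name,lv_tags', '--noheadings'],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if 'ceph.osd_id' in line:
            print(line.strip())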
Dec 03 02:10:27 compute-0 systemd[1]: libpod-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope: Deactivated successfully.
Dec 03 02:10:27 compute-0 podman[436635]: 2025-12-03 02:10:27.050703996 +0000 UTC m=+1.633203925 container died 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:10:27 compute-0 systemd[1]: libpod-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope: Consumed 1.198s CPU time.
Dec 03 02:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013-merged.mount: Deactivated successfully.
Dec 03 02:10:27 compute-0 podman[436635]: 2025-12-03 02:10:27.150275634 +0000 UTC m=+1.732775583 container remove 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 02:10:27 compute-0 systemd[1]: libpod-conmon-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope: Deactivated successfully.
Dec 03 02:10:27 compute-0 sudo[436533]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:27 compute-0 nova_compute[351485]: 2025-12-03 02:10:27.312 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:27 compute-0 sudo[436691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:27 compute-0 sudo[436691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:27 compute-0 sudo[436691]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:27 compute-0 sudo[436716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:10:27 compute-0 sudo[436716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:27 compute-0 sudo[436716]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:27 compute-0 sudo[436741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:27 compute-0 sudo[436741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:27 compute-0 sudo[436741]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:27 compute-0 sudo[436766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:10:27 compute-0 sudo[436766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:27 compute-0 ceph-mon[192821]: pgmap v1682: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:28 compute-0 nova_compute[351485]: 2025-12-03 02:10:28.329 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.435228088 +0000 UTC m=+0.099717843 container create 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:10:28
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.398883253 +0000 UTC m=+0.063373068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:10:28 compute-0 systemd[1]: Started libpod-conmon-44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820.scope.
Dec 03 02:10:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.595668302 +0000 UTC m=+0.260158067 container init 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.612572399 +0000 UTC m=+0.277062144 container start 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.616746226 +0000 UTC m=+0.281235971 container attach 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:10:28 compute-0 hungry_swartz[436842]: 167 167
Dec 03 02:10:28 compute-0 systemd[1]: libpod-44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820.scope: Deactivated successfully.
Dec 03 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.625467812 +0000 UTC m=+0.289957607 container died 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae1cf21c2dd727fbfcfda733bc9e3191a1351b6e7873bf3cf2619f07b750ca9-merged.mount: Deactivated successfully.
Dec 03 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.694649753 +0000 UTC m=+0.359139508 container remove 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:10:28 compute-0 systemd[1]: libpod-conmon-44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820.scope: Deactivated successfully.
Dec 03 02:10:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:28 compute-0 podman[436867]: 2025-12-03 02:10:28.99879614 +0000 UTC m=+0.098440337 container create 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:28.962883717 +0000 UTC m=+0.062527984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:10:29 compute-0 systemd[1]: Started libpod-conmon-00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4.scope.
Dec 03 02:10:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.147445901 +0000 UTC m=+0.247090118 container init 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.1736318 +0000 UTC m=+0.273275997 container start 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.179227528 +0000 UTC m=+0.278871725 container attach 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 02:10:29 compute-0 podman[436880]: 2025-12-03 02:10:29.181964595 +0000 UTC m=+0.108924883 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 03 02:10:29 compute-0 podman[436884]: 2025-12-03 02:10:29.203520903 +0000 UTC m=+0.111132115 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:10:29 compute-0 podman[436883]: 2025-12-03 02:10:29.215426108 +0000 UTC m=+0.125774587 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
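The three health_status=healthy events above are podman running each container's configured healthcheck (the '/openstack/healthcheck' test in config_data); the same checks can be driven by hand, where exit code 0 means healthy:

    import subprocess

    for name in ('ovn_metadata_agent', 'podman_exporter',
                 'ceilometer_agent_compute'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)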
Dec 03 02:10:29 compute-0 podman[158098]: time="2025-12-03T02:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:10:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45383 "" "Go-http-client/1.1"
Dec 03 02:10:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9062 "" "Go-http-client/1.1"
Dec 03 02:10:29 compute-0 ceph-mon[192821]: pgmap v1683: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:29 compute-0 practical_spence[436890]: {
Dec 03 02:10:29 compute-0 practical_spence[436890]:     "0": [
Dec 03 02:10:29 compute-0 practical_spence[436890]:         {
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "devices": [
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "/dev/loop3"
Dec 03 02:10:29 compute-0 practical_spence[436890]:             ],
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_name": "ceph_lv0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_size": "21470642176",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "name": "ceph_lv0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "tags": {
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cluster_name": "ceph",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.crush_device_class": "",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.encrypted": "0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osd_id": "0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.type": "block",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.vdo": "0"
Dec 03 02:10:29 compute-0 practical_spence[436890]:             },
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "type": "block",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "vg_name": "ceph_vg0"
Dec 03 02:10:29 compute-0 practical_spence[436890]:         }
Dec 03 02:10:29 compute-0 practical_spence[436890]:     ],
Dec 03 02:10:29 compute-0 practical_spence[436890]:     "1": [
Dec 03 02:10:29 compute-0 practical_spence[436890]:         {
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "devices": [
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "/dev/loop4"
Dec 03 02:10:29 compute-0 practical_spence[436890]:             ],
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_name": "ceph_lv1",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_size": "21470642176",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "name": "ceph_lv1",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "tags": {
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cluster_name": "ceph",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.crush_device_class": "",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.encrypted": "0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osd_id": "1",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.type": "block",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.vdo": "0"
Dec 03 02:10:29 compute-0 practical_spence[436890]:             },
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "type": "block",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "vg_name": "ceph_vg1"
Dec 03 02:10:29 compute-0 practical_spence[436890]:         }
Dec 03 02:10:29 compute-0 practical_spence[436890]:     ],
Dec 03 02:10:29 compute-0 practical_spence[436890]:     "2": [
Dec 03 02:10:29 compute-0 practical_spence[436890]:         {
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "devices": [
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "/dev/loop5"
Dec 03 02:10:29 compute-0 practical_spence[436890]:             ],
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_name": "ceph_lv2",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_size": "21470642176",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "name": "ceph_lv2",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "tags": {
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.cluster_name": "ceph",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.crush_device_class": "",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.encrypted": "0",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osd_id": "2",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.type": "block",
Dec 03 02:10:29 compute-0 practical_spence[436890]:                 "ceph.vdo": "0"
Dec 03 02:10:29 compute-0 practical_spence[436890]:             },
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "type": "block",
Dec 03 02:10:29 compute-0 practical_spence[436890]:             "vg_name": "ceph_vg2"
Dec 03 02:10:29 compute-0 practical_spence[436890]:         }
Dec 03 02:10:29 compute-0 practical_spence[436890]:     ]
Dec 03 02:10:29 compute-0 practical_spence[436890]: }
Dec 03 02:10:29 compute-0 systemd[1]: libpod-00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4.scope: Deactivated successfully.
Dec 03 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.938245311 +0000 UTC m=+1.037889518 container died 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679-merged.mount: Deactivated successfully.
Dec 03 02:10:30 compute-0 podman[436867]: 2025-12-03 02:10:30.040971138 +0000 UTC m=+1.140615355 container remove 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 02:10:30 compute-0 systemd[1]: libpod-conmon-00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4.scope: Deactivated successfully.
Dec 03 02:10:30 compute-0 sudo[436766]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:30 compute-0 sudo[436960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:30 compute-0 sudo[436960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:30 compute-0 sudo[436960]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:30 compute-0 sudo[436985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:10:30 compute-0 sudo[436985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:30 compute-0 sudo[436985]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:30 compute-0 sudo[437010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:30 compute-0 sudo[437010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:30 compute-0 sudo[437010]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:30 compute-0 sudo[437035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:10:30 compute-0 sudo[437035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.275619002 +0000 UTC m=+0.082572049 container create d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:10:31 compute-0 systemd[1]: Started libpod-conmon-d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82.scope.
Dec 03 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.25070468 +0000 UTC m=+0.057657757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:10:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.412503792 +0000 UTC m=+0.219456919 container init d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.429157782 +0000 UTC m=+0.236110859 container start d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.43618263 +0000 UTC m=+0.243135697 container attach d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:10:31 compute-0 interesting_kalam[437114]: 167 167
Dec 03 02:10:31 compute-0 systemd[1]: libpod-d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82.scope: Deactivated successfully.
Dec 03 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.443812135 +0000 UTC m=+0.250765242 container died d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ac96c27c5e45d060d58a2d2808f7528bd10a957e74ef5afa1ce6f041299007-merged.mount: Deactivated successfully.
Dec 03 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.508748176 +0000 UTC m=+0.315701223 container remove d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:10:31 compute-0 systemd[1]: libpod-conmon-d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82.scope: Deactivated successfully.
Dec 03 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.760733642 +0000 UTC m=+0.075257993 container create b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:10:31 compute-0 systemd[1]: Started libpod-conmon-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope.
Dec 03 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.736275572 +0000 UTC m=+0.050799953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:10:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.895200304 +0000 UTC m=+0.209724635 container init b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:10:31 compute-0 ceph-mon[192821]: pgmap v1684: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.922715489 +0000 UTC m=+0.237239820 container start b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.930078537 +0000 UTC m=+0.244602878 container attach b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:10:32 compute-0 nova_compute[351485]: 2025-12-03 02:10:32.314 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:32 compute-0 sudo[437330]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlysrblxufygcampzkxgwtaeqibjvtte ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764727831.8158948-61886-209053274818341/AnsiballZ_command.py'
Dec 03 02:10:32 compute-0 sudo[437330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:10:32 compute-0 python3[437332]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 02:10:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:32 compute-0 sudo[437330]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:33 compute-0 crazy_margulis[437175]: {
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "osd_id": 2,
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "type": "bluestore"
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:     },
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "osd_id": 1,
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "type": "bluestore"
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:     },
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "osd_id": 0,
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:         "type": "bluestore"
Dec 03 02:10:33 compute-0 crazy_margulis[437175]:     }
Dec 03 02:10:33 compute-0 crazy_margulis[437175]: }
Dec 03 02:10:33 compute-0 systemd[1]: libpod-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope: Deactivated successfully.
Dec 03 02:10:33 compute-0 systemd[1]: libpod-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope: Consumed 1.174s CPU time.
Dec 03 02:10:33 compute-0 podman[437137]: 2025-12-03 02:10:33.103762623 +0000 UTC m=+1.418286974 container died b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7-merged.mount: Deactivated successfully.
Dec 03 02:10:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:33 compute-0 podman[437137]: 2025-12-03 02:10:33.201132009 +0000 UTC m=+1.515656330 container remove b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:10:33 compute-0 systemd[1]: libpod-conmon-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope: Deactivated successfully.
Dec 03 02:10:33 compute-0 sudo[437035]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:10:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:10:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:10:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:10:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 094a866d-8bac-4eed-a3af-8bee4e936114 does not exist
Dec 03 02:10:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b3f17902-352e-4513-8d9d-aeca4dd6e5dd does not exist
Dec 03 02:10:33 compute-0 nova_compute[351485]: 2025-12-03 02:10:33.332 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:33 compute-0 sudo[437413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:10:33 compute-0 sudo[437413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:33 compute-0 sudo[437413]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:33 compute-0 sudo[437438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:10:33 compute-0 sudo[437438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:10:33 compute-0 sudo[437438]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:34 compute-0 ceph-mon[192821]: pgmap v1685: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:10:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:10:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:34 compute-0 podman[437463]: 2025-12-03 02:10:34.886233506 +0000 UTC m=+0.128209047 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 02:10:36 compute-0 ceph-mon[192821]: pgmap v1686: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.317081) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836317164, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1052, "num_deletes": 252, "total_data_size": 1439364, "memory_usage": 1459064, "flush_reason": "Manual Compaction"}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836332937, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1413508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33719, "largest_seqno": 34770, "table_properties": {"data_size": 1408336, "index_size": 2632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11329, "raw_average_key_size": 19, "raw_value_size": 1397875, "raw_average_value_size": 2465, "num_data_blocks": 117, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727749, "oldest_key_time": 1764727749, "file_creation_time": 1764727836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 15933 microseconds, and 8455 cpu microseconds.
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.333013) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1413508 bytes OK
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.333035) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.335600) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.335620) EVENT_LOG_v1 {"time_micros": 1764727836335613, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.335639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1434400, prev total WAL file size 1434400, number of live WAL files 2.
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.336584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1380KB)], [77(7445KB)]
Dec 03 02:10:36 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836336624, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9038211, "oldest_snapshot_seqno": -1}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5268 keys, 7279009 bytes, temperature: kUnknown
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836393025, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7279009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7245617, "index_size": 19138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 134279, "raw_average_key_size": 25, "raw_value_size": 7152039, "raw_average_value_size": 1357, "num_data_blocks": 783, "num_entries": 5268, "num_filter_entries": 5268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.393336) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7279009 bytes
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.395832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.0 rd, 128.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.3 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(11.5) write-amplify(5.1) OK, records in: 5787, records dropped: 519 output_compression: NoCompression
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.395860) EVENT_LOG_v1 {"time_micros": 1764727836395847, "job": 44, "event": "compaction_finished", "compaction_time_micros": 56492, "compaction_time_cpu_micros": 34162, "output_level": 6, "num_output_files": 1, "total_output_size": 7279009, "num_input_records": 5787, "num_output_records": 5268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836396461, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836399328, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.336398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:10:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:37 compute-0 nova_compute[351485]: 2025-12-03 02:10:37.319 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:37 compute-0 ceph-mon[192821]: pgmap v1687: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:38 compute-0 nova_compute[351485]: 2025-12-03 02:10:38.336 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:10:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:39 compute-0 ceph-mon[192821]: pgmap v1688: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:39 compute-0 podman[437483]: 2025-12-03 02:10:39.878126407 +0000 UTC m=+0.114512020 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Dec 03 02:10:39 compute-0 podman[437485]: 2025-12-03 02:10:39.884495996 +0000 UTC m=+0.111748282 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=)
Dec 03 02:10:39 compute-0 podman[437484]: 2025-12-03 02:10:39.897901754 +0000 UTC m=+0.129921524 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:10:39 compute-0 podman[437486]: 2025-12-03 02:10:39.901133396 +0000 UTC m=+0.122430164 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:10:39 compute-0 podman[437482]: 2025-12-03 02:10:39.938913031 +0000 UTC m=+0.182934710 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:10:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:41 compute-0 ceph-mon[192821]: pgmap v1689: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:42 compute-0 nova_compute[351485]: 2025-12-03 02:10:42.322 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:43 compute-0 nova_compute[351485]: 2025-12-03 02:10:43.340 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:43 compute-0 ceph-mon[192821]: pgmap v1690: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:45 compute-0 ceph-mon[192821]: pgmap v1691: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:10:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1777396058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:10:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:10:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1777396058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:10:47 compute-0 nova_compute[351485]: 2025-12-03 02:10:47.324 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:47 compute-0 ceph-mon[192821]: pgmap v1692: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1777396058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:10:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1777396058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:10:48 compute-0 sudo[437756]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrhxatbxzrhjataptxehcttphxcjsvvi ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764727847.3590202-62102-178561271776238/AnsiballZ_command.py'
Dec 03 02:10:48 compute-0 sudo[437756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:10:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:48 compute-0 python3[437758]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 03 02:10:48 compute-0 nova_compute[351485]: 2025-12-03 02:10:48.343 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:48 compute-0 sudo[437756]: pam_unix(sudo:session): session closed for user root
Dec 03 02:10:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:49 compute-0 ceph-mon[192821]: pgmap v1693: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:51 compute-0 ceph-mon[192821]: pgmap v1694: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:52 compute-0 nova_compute[351485]: 2025-12-03 02:10:52.327 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:53 compute-0 nova_compute[351485]: 2025-12-03 02:10:53.348 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:53 compute-0 ceph-mon[192821]: pgmap v1695: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:55 compute-0 ceph-mon[192821]: pgmap v1696: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:57 compute-0 nova_compute[351485]: 2025-12-03 02:10:57.330 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:58 compute-0 ceph-mon[192821]: pgmap v1697: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:10:58 compute-0 nova_compute[351485]: 2025-12-03 02:10:58.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:10:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:10:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:10:59.640 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:10:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:10:59.642 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:10:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:10:59.643 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:10:59 compute-0 podman[158098]: time="2025-12-03T02:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:10:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:10:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8656 "" "Go-http-client/1.1"
Dec 03 02:10:59 compute-0 podman[437796]: 2025-12-03 02:10:59.879603704 +0000 UTC m=+0.096853440 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 03 02:10:59 compute-0 podman[437797]: 2025-12-03 02:10:59.887380514 +0000 UTC m=+0.097138549 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 02:10:59 compute-0 podman[437798]: 2025-12-03 02:10:59.892795886 +0000 UTC m=+0.098623751 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:11:00 compute-0 ceph-mon[192821]: pgmap v1698: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:11:02 compute-0 ceph-mon[192821]: pgmap v1699: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:02 compute-0 nova_compute[351485]: 2025-12-03 02:11:02.335 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:02 compute-0 sshd-session[437853]: Received disconnect from 154.113.10.113 port 41542:11: Bye Bye [preauth]
Dec 03 02:11:02 compute-0 sshd-session[437853]: Disconnected from authenticating user root 154.113.10.113 port 41542 [preauth]
Dec 03 02:11:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:03 compute-0 nova_compute[351485]: 2025-12-03 02:11:03.354 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:04 compute-0 ceph-mon[192821]: pgmap v1700: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:05 compute-0 podman[437855]: 2025-12-03 02:11:05.90923552 +0000 UTC m=+0.158289185 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:11:06 compute-0 ceph-mon[192821]: pgmap v1701: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:07 compute-0 nova_compute[351485]: 2025-12-03 02:11:07.340 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:08 compute-0 ceph-mon[192821]: pgmap v1702: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:08 compute-0 nova_compute[351485]: 2025-12-03 02:11:08.358 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:09 compute-0 nova_compute[351485]: 2025-12-03 02:11:09.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:10 compute-0 ceph-mon[192821]: pgmap v1703: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:10 compute-0 podman[437877]: 2025-12-03 02:11:10.872679868 +0000 UTC m=+0.105583309 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:11:10 compute-0 podman[437876]: 2025-12-03 02:11:10.885294183 +0000 UTC m=+0.126466167 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Dec 03 02:11:10 compute-0 podman[437875]: 2025-12-03 02:11:10.904199836 +0000 UTC m=+0.150772202 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 03 02:11:10 compute-0 podman[437883]: 2025-12-03 02:11:10.90433016 +0000 UTC m=+0.134981597 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:11:10 compute-0 podman[437878]: 2025-12-03 02:11:10.914400554 +0000 UTC m=+0.152533392 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 02:11:11 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 02:11:11 compute-0 nova_compute[351485]: 2025-12-03 02:11:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.344 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:12 compute-0 ceph-mon[192821]: pgmap v1704: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.627 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.628 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.629 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.630 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.631 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:11:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:11:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960510896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.114 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:11:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.247 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.249 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.249 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.259 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.260 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.260 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.361 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:13 compute-0 ceph-mon[192821]: pgmap v1705: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1960510896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.851 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.854 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3604MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.855 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.856 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.976 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.978 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.072 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:11:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 03 02:11:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:11:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44039771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.546 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.557 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.569 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.572 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.573 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:11:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/44039771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:11:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.574 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.575 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:11:15 compute-0 ceph-mon[192821]: pgmap v1706: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.864 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.865 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.865 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:11:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.347 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.726 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.743 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.744 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.745 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:17 compute-0 ceph-mon[192821]: pgmap v1707: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:18 compute-0 nova_compute[351485]: 2025-12-03 02:11:18.363 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:18 compute-0 nova_compute[351485]: 2025-12-03 02:11:18.740 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.509 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.510 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
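Every registration line above points at the same ThreadPoolExecutor, and the manager reported only one worker thread at 02:11:19.510. A hedged sketch of the dispatch pattern these lines imply (names like `p.run` are illustrative, not ceilometer's actual implementation):

    from concurrent.futures import ThreadPoolExecutor

    # Sketch: each registered pollster is submitted to one shared executor;
    # with max_workers=1, as logged above, the pollsters effectively run serially.
    def run_polling_cycle(pollsters, workers=1):
        with ThreadPoolExecutor(max_workers=workers) as executor:
            futures = [executor.submit(p.run) for p in pollsters]
            for future in futures:
                future.result()  # surface any pollster exception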
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.519 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.525 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
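The two discovery payloads above are plain dicts. A small illustrative consumer (not ceilometer code) might index them by UUID and gate on vm_state before sampling:

    # `discovered` is assumed to hold the two instance dicts logged above.
    def running_instances(discovered):
        return {
            inst["id"]: inst["flavor"]["name"]
            for inst in discovered
            if inst["OS-EXT-STS:vm_state"] == "running"
        }

    # For the payloads above this returns both UUIDs mapped to "m1.small".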
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.525 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:11:19.526283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.568 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 nova_compute[351485]: 2025-12-03 02:11:19.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.606 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
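As a worked check on the two memory.usage samples above: both guests report roughly 49 MiB resident against the 512 MiB of RAM their m1.small flavor grants (per the discovery payloads), i.e. under 10% utilisation:

    flavor_ram_mib = 512  # m1.small, per the discovery payloads above
    samples = {
        "b43e79bd-550f-42f8-9aa7-980b6bca3f70": 48.953125,
        "9182286b-5a08-4961-b4bb-c0e2f05746f7": 48.85546875,
    }
    for uuid, usage_mib in samples.items():
        print(uuid, "%.1f%% of flavor RAM" % (100.0 * usage_mib / flavor_ram_mib))
    # -> 9.6% and 9.5% respectively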
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:11:19.607941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.613 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.620 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.620 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.622 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.622 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:11:19.621812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.625 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:11:19.624608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.628 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:11:19.627386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.631 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.632 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.632 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:11:19.631888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:11:19.634777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.670 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.671 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.672 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.708 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.709 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.709 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
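The three capacity samples per instance line up with the flavor from the discovery payloads: the two 1073741824-byte devices match disk=1 GiB and ephemeral=1 GiB, while the third, much smaller device per guest is presumably a config-drive style disk (an assumption; the log does not name the devices):

    GiB = 1024 ** 3
    assert 1073741824 == 1 * GiB  # root and ephemeral devices, per the flavor
    # The remaining devices (583680 and 485376 bytes, ~570 KiB and ~474 KiB)
    # are assumed here to be config drives.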
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
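The skip line above reflects a short-circuit taken when discovery returns nothing for a pollster. An illustrative version of that branch, where `discover`, `LOG`, and `pollster.poll` are assumed names rather than ceilometer's exact code:

    def maybe_poll(pollster):
        # Illustrative short-circuit matching the "Skip pollster" line above.
        resources = discover(pollster)  # hypothetical discovery call
        if not resources:
            LOG.debug("Skip pollster %s, no new resources found this cycle",
                      pollster.name)
            return
        pollster.poll(resources)  # hypothetical sampling call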
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.712 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:11:19.712211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.810 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.811 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.811 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceph-mon[192821]: pgmap v1708: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.919 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.919 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.920 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.921 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.922 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.923 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:11:19.922373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.923 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.924 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.926 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.926 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.927 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.927 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.928 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
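A hedged reading of the read-latency samples above: disk.device.read.latency is a cumulative time counter, and the unit is taken here to be nanoseconds (an assumption about units, not stated in the log), so the largest sample is on the order of two seconds of total read time since the device came up:

    # 1930310646 ns ≈ 1.93 s cumulative device read time (units assumed to be ns).
    print(1930310646 / 1e9)  # -> 1.930310646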
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:11:19.925343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:11:19.930302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.931 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.931 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.932 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.932 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.933 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.934 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.936 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
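The power.state volume of 1 for both guests matches libvirt's domain-state enum, where 1 is VIR_DOMAIN_RUNNING, which agrees with the vm_state "running" in the discovery payloads. A tiny illustrative mapping:

    # Subset of libvirt's domain-state enum (values per libvirt's public API).
    LIBVIRT_POWER_STATE = {0: "nostate", 1: "running", 3: "paused", 5: "shutoff"}
    print(LIBVIRT_POWER_STATE[1])  # -> running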
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:11:19.935444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:11:19.938382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.939 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.940 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.940 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.940 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.941 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.944 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.944 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:11:19.944023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.945 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.945 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.946 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.946 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.946 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:11:19.949161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.950 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.950 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.951 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.951 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:11:19.953252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:11:19.956180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 45360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 48770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:11:19.957857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:11:19.959424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:11:19.960816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:11:19.962332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.963 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.963 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:11:19.965329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:11:19.967129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:11:19.968489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:11:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:21 compute-0 nova_compute[351485]: 2025-12-03 02:11:21.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:21 compute-0 ceph-mon[192821]: pgmap v1709: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:22 compute-0 nova_compute[351485]: 2025-12-03 02:11:22.352 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 03 02:11:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:23 compute-0 nova_compute[351485]: 2025-12-03 02:11:23.367 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:23 compute-0 ceph-mon[192821]: pgmap v1710: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 03 02:11:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec 03 02:11:25 compute-0 ceph-mon[192821]: pgmap v1711: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec 03 02:11:26 compute-0 nova_compute[351485]: 2025-12-03 02:11:26.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:11:26 compute-0 nova_compute[351485]: 2025-12-03 02:11:26.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:11:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:27 compute-0 nova_compute[351485]: 2025-12-03 02:11:27.354 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:27 compute-0 ceph-mon[192821]: pgmap v1712: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:28 compute-0 nova_compute[351485]: 2025-12-03 02:11:28.370 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:11:28
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log']
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:11:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:11:29 compute-0 podman[158098]: time="2025-12-03T02:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:11:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:11:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec 03 02:11:29 compute-0 ceph-mon[192821]: pgmap v1713: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:30 compute-0 podman[438024]: 2025-12-03 02:11:30.87120193 +0000 UTC m=+0.109397736 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 02:11:30 compute-0 podman[438026]: 2025-12-03 02:11:30.890970037 +0000 UTC m=+0.116218618 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:11:30 compute-0 podman[438025]: 2025-12-03 02:11:30.903040487 +0000 UTC m=+0.133146805 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:11:31 compute-0 ceph-mon[192821]: pgmap v1714: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:32 compute-0 nova_compute[351485]: 2025-12-03 02:11:32.358 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:33 compute-0 nova_compute[351485]: 2025-12-03 02:11:33.372 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:33 compute-0 sudo[438082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:33 compute-0 sudo[438082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:33 compute-0 sudo[438082]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:33 compute-0 sudo[438107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:11:33 compute-0 sudo[438107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:33 compute-0 sudo[438107]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:34 compute-0 ceph-mon[192821]: pgmap v1715: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:11:34 compute-0 sudo[438132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:34 compute-0 sudo[438132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:34 compute-0 sudo[438132]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:34 compute-0 sudo[438157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:11:34 compute-0 sudo[438157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:34 compute-0 sudo[438157]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec 03 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:11:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9b943978-44b9-4ba0-8980-c4371b17c598 does not exist
Dec 03 02:11:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b631a657-cfbe-4aa8-ad09-f082b04e2eb2 does not exist
Dec 03 02:11:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 07e94cd6-1b31-49aa-8da6-c7d3a611bbf8 does not exist
Dec 03 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:11:35 compute-0 sudo[438212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:35 compute-0 sudo[438212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:35 compute-0 sudo[438212]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:35 compute-0 sudo[438237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:11:35 compute-0 sudo[438237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:35 compute-0 sudo[438237]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:35 compute-0 sudo[438262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:35 compute-0 sudo[438262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:35 compute-0 sudo[438262]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:35 compute-0 sudo[438287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:11:35 compute-0 sudo[438287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.025730828 +0000 UTC m=+0.074881933 container create 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:11:36 compute-0 ceph-mon[192821]: pgmap v1716: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec 03 02:11:36 compute-0 systemd[1]: Started libpod-conmon-3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8.scope.
Dec 03 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:35.994395914 +0000 UTC m=+0.043547059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:11:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.140630778 +0000 UTC m=+0.189781903 container init 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.154510719 +0000 UTC m=+0.203661824 container start 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.159279194 +0000 UTC m=+0.208430369 container attach 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:11:36 compute-0 pensive_goldberg[438363]: 167 167
Dec 03 02:11:36 compute-0 systemd[1]: libpod-3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8.scope: Deactivated successfully.
Dec 03 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.165323974 +0000 UTC m=+0.214475099 container died 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-732b8a1de358f1d8e973b2aa70729e3504abd19604a4526395a898b18b75cfc4-merged.mount: Deactivated successfully.
Dec 03 02:11:36 compute-0 podman[438360]: 2025-12-03 02:11:36.200902798 +0000 UTC m=+0.106707071 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec 03 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.227166158 +0000 UTC m=+0.276317253 container remove 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:11:36 compute-0 systemd[1]: libpod-conmon-3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8.scope: Deactivated successfully.
Dec 03 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.484130614 +0000 UTC m=+0.082924069 container create 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.450871026 +0000 UTC m=+0.049664551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:11:36 compute-0 systemd[1]: Started libpod-conmon-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope.
Dec 03 02:11:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.660696423 +0000 UTC m=+0.259489918 container init 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.694714462 +0000 UTC m=+0.293507917 container start 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.701944546 +0000 UTC m=+0.300738001 container attach 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:11:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 03 02:11:37 compute-0 nova_compute[351485]: 2025-12-03 02:11:37.361 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:38 compute-0 heuristic_blackwell[438420]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:11:38 compute-0 heuristic_blackwell[438420]: --> relative data size: 1.0
Dec 03 02:11:38 compute-0 heuristic_blackwell[438420]: --> All data devices are unavailable
Dec 03 02:11:38 compute-0 ceph-mon[192821]: pgmap v1717: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 03 02:11:38 compute-0 systemd[1]: libpod-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope: Deactivated successfully.
Dec 03 02:11:38 compute-0 systemd[1]: libpod-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope: Consumed 1.300s CPU time.
Dec 03 02:11:38 compute-0 podman[438404]: 2025-12-03 02:11:38.063313975 +0000 UTC m=+1.662107410 container died 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b-merged.mount: Deactivated successfully.
Dec 03 02:11:38 compute-0 podman[438404]: 2025-12-03 02:11:38.142037755 +0000 UTC m=+1.740831180 container remove 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:11:38 compute-0 systemd[1]: libpod-conmon-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope: Deactivated successfully.
Dec 03 02:11:38 compute-0 sudo[438287]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:38 compute-0 sudo[438460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:38 compute-0 sudo[438460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:38 compute-0 sudo[438460]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:38 compute-0 nova_compute[351485]: 2025-12-03 02:11:38.376 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:38 compute-0 sudo[438485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:11:38 compute-0 sudo[438485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:38 compute-0 sudo[438485]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:38 compute-0 sudo[438510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:38 compute-0 sudo[438510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:38 compute-0 sudo[438510]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:11:38 compute-0 sudo[438535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:11:38 compute-0 sudo[438535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.351045417 +0000 UTC m=+0.095330800 container create 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.315807563 +0000 UTC m=+0.060092996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:11:39 compute-0 systemd[1]: Started libpod-conmon-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope.
Dec 03 02:11:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.538291857 +0000 UTC m=+0.282577300 container init 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.55542984 +0000 UTC m=+0.299715233 container start 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:11:39 compute-0 nice_torvalds[438615]: 167 167
Dec 03 02:11:39 compute-0 systemd[1]: libpod-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope: Deactivated successfully.
Dec 03 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.56640791 +0000 UTC m=+0.310693343 container attach 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:11:39 compute-0 conmon[438615]: conmon 4f90c5f705b1fa571820 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope/container/memory.events
Dec 03 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.569937499 +0000 UTC m=+0.314222862 container died 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-77eff13956e429a97fe68c5fa2af15f3d48b3abffb0d54a99623d180b5ff93c1-merged.mount: Deactivated successfully.
Dec 03 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.638253396 +0000 UTC m=+0.382538779 container remove 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:11:39 compute-0 systemd[1]: libpod-conmon-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope: Deactivated successfully.
Dec 03 02:11:39 compute-0 podman[438639]: 2025-12-03 02:11:39.932488783 +0000 UTC m=+0.081554421 container create da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:39.907051305 +0000 UTC m=+0.056116953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:11:40 compute-0 systemd[1]: Started libpod-conmon-da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6.scope.
Dec 03 02:11:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:40 compute-0 ceph-mon[192821]: pgmap v1718: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:40.108897787 +0000 UTC m=+0.257963425 container init da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 03 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:40.131003501 +0000 UTC m=+0.280069119 container start da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:40.137923046 +0000 UTC m=+0.286988664 container attach da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:11:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:41 compute-0 pedantic_brown[438654]: {
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:     "0": [
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:         {
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "devices": [
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "/dev/loop3"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             ],
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_name": "ceph_lv0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_size": "21470642176",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "name": "ceph_lv0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "tags": {
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cluster_name": "ceph",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.crush_device_class": "",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.encrypted": "0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osd_id": "0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.type": "block",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.vdo": "0"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             },
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "type": "block",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "vg_name": "ceph_vg0"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:         }
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:     ],
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:     "1": [
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:         {
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "devices": [
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "/dev/loop4"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             ],
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_name": "ceph_lv1",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_size": "21470642176",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "name": "ceph_lv1",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "tags": {
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cluster_name": "ceph",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.crush_device_class": "",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.encrypted": "0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osd_id": "1",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.type": "block",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.vdo": "0"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             },
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "type": "block",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "vg_name": "ceph_vg1"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:         }
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:     ],
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:     "2": [
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:         {
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "devices": [
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "/dev/loop5"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             ],
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_name": "ceph_lv2",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_size": "21470642176",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "name": "ceph_lv2",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "tags": {
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.cluster_name": "ceph",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.crush_device_class": "",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.encrypted": "0",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osd_id": "2",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.type": "block",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:                 "ceph.vdo": "0"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             },
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "type": "block",
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:             "vg_name": "ceph_vg2"
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:         }
Dec 03 02:11:41 compute-0 pedantic_brown[438654]:     ]
Dec 03 02:11:41 compute-0 pedantic_brown[438654]: }
Dec 03 02:11:41 compute-0 systemd[1]: libpod-da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6.scope: Deactivated successfully.
Dec 03 02:11:41 compute-0 podman[438639]: 2025-12-03 02:11:41.062351303 +0000 UTC m=+1.211416941 container died da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e-merged.mount: Deactivated successfully.
Dec 03 02:11:41 compute-0 podman[438639]: 2025-12-03 02:11:41.183450238 +0000 UTC m=+1.332515846 container remove da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:11:41 compute-0 systemd[1]: libpod-conmon-da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6.scope: Deactivated successfully.
Dec 03 02:11:41 compute-0 sudo[438535]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:41 compute-0 podman[438670]: 2025-12-03 02:11:41.252008101 +0000 UTC m=+0.133291919 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 03 02:11:41 compute-0 podman[438672]: 2025-12-03 02:11:41.270684918 +0000 UTC m=+0.122022132 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:11:41 compute-0 podman[438688]: 2025-12-03 02:11:41.285914528 +0000 UTC m=+0.131565491 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec 03 02:11:41 compute-0 sudo[438750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:41 compute-0 podman[438686]: 2025-12-03 02:11:41.303439782 +0000 UTC m=+0.129275297 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 02:11:41 compute-0 sudo[438750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:41 compute-0 sudo[438750]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:41 compute-0 podman[438664]: 2025-12-03 02:11:41.311854689 +0000 UTC m=+0.190932685 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 02:11:41 compute-0 sudo[438803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:11:41 compute-0 sudo[438803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:41 compute-0 sudo[438803]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:41 compute-0 sudo[438828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:41 compute-0 sudo[438828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:41 compute-0 sudo[438828]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:41 compute-0 sudo[438853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:11:41 compute-0 sudo[438853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:42 compute-0 ceph-mon[192821]: pgmap v1719: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.15628355 +0000 UTC m=+0.083028851 container create 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.124217547 +0000 UTC m=+0.050962858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:11:42 compute-0 systemd[1]: Started libpod-conmon-924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79.scope.
Dec 03 02:11:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.294206139 +0000 UTC m=+0.220951450 container init 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.314120511 +0000 UTC m=+0.240865822 container start 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.321789037 +0000 UTC m=+0.248534348 container attach 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:11:42 compute-0 boring_rosalind[438932]: 167 167
Dec 03 02:11:42 compute-0 systemd[1]: libpod-924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79.scope: Deactivated successfully.
Dec 03 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.326351186 +0000 UTC m=+0.253096467 container died 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:11:42 compute-0 nova_compute[351485]: 2025-12-03 02:11:42.364 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f70ab416a48a81060452df1ee9963892bc1d7650f341568f3011b3b11ca637-merged.mount: Deactivated successfully.
Dec 03 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.403657166 +0000 UTC m=+0.330402447 container remove 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:11:42 compute-0 systemd[1]: libpod-conmon-924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79.scope: Deactivated successfully.
Dec 03 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.697927714 +0000 UTC m=+0.095698710 container create fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.658989666 +0000 UTC m=+0.056760732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:11:42 compute-0 systemd[1]: Started libpod-conmon-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope.
Dec 03 02:11:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:11:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.876280593 +0000 UTC m=+0.274051629 container init fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.90809769 +0000 UTC m=+0.305868686 container start fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.915003855 +0000 UTC m=+0.312774861 container attach fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 02:11:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:43 compute-0 nova_compute[351485]: 2025-12-03 02:11:43.379 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:44 compute-0 ceph-mon[192821]: pgmap v1720: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]: {
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "osd_id": 2,
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "type": "bluestore"
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:     },
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "osd_id": 1,
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "type": "bluestore"
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:     },
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "osd_id": 0,
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:         "type": "bluestore"
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]:     }
Dec 03 02:11:44 compute-0 romantic_montalcini[438971]: }
Dec 03 02:11:44 compute-0 systemd[1]: libpod-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope: Deactivated successfully.
Dec 03 02:11:44 compute-0 systemd[1]: libpod-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope: Consumed 1.287s CPU time.
Dec 03 02:11:44 compute-0 podman[439004]: 2025-12-03 02:11:44.307892883 +0000 UTC m=+0.059167550 container died fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 02:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a-merged.mount: Deactivated successfully.
Dec 03 02:11:44 compute-0 podman[439004]: 2025-12-03 02:11:44.425415347 +0000 UTC m=+0.176690014 container remove fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:11:44 compute-0 systemd[1]: libpod-conmon-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope: Deactivated successfully.
Dec 03 02:11:44 compute-0 sudo[438853]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:11:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:11:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:11:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:11:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b3295f73-b368-4d42-9115-547e964cf3bb does not exist
Dec 03 02:11:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bb130fd2-025f-435e-aee3-743db6a496ce does not exist
Dec 03 02:11:44 compute-0 sudo[439018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:11:44 compute-0 sudo[439018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:44 compute-0 sudo[439018]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:44 compute-0 sudo[439043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:11:44 compute-0 sudo[439043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:11:44 compute-0 sudo[439043]: pam_unix(sudo:session): session closed for user root
Dec 03 02:11:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:11:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:11:45 compute-0 ceph-mon[192821]: pgmap v1721: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:11:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703153391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:11:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:11:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703153391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:11:47 compute-0 nova_compute[351485]: 2025-12-03 02:11:47.369 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:47 compute-0 ceph-mon[192821]: pgmap v1722: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2703153391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:11:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2703153391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:11:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:48 compute-0 sshd-session[435876]: Received disconnect from 38.102.83.18 port 60528:11: disconnected by user
Dec 03 02:11:48 compute-0 sshd-session[435876]: Disconnected from user zuul 38.102.83.18 port 60528
Dec 03 02:11:48 compute-0 sshd-session[435873]: pam_unix(sshd:session): session closed for user zuul
Dec 03 02:11:48 compute-0 nova_compute[351485]: 2025-12-03 02:11:48.383 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:48 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Dec 03 02:11:48 compute-0 systemd[1]: session-62.scope: Consumed 5.493s CPU time.
Dec 03 02:11:48 compute-0 systemd-logind[800]: Session 62 logged out. Waiting for processes to exit.
Dec 03 02:11:48 compute-0 systemd-logind[800]: Removed session 62.
Dec 03 02:11:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:49 compute-0 sshd-session[439068]: Invalid user openbravo from 146.190.144.138 port 57250
Dec 03 02:11:49 compute-0 sshd-session[439068]: Received disconnect from 146.190.144.138 port 57250:11: Bye Bye [preauth]
Dec 03 02:11:49 compute-0 sshd-session[439068]: Disconnected from invalid user openbravo 146.190.144.138 port 57250 [preauth]
Dec 03 02:11:49 compute-0 ceph-mon[192821]: pgmap v1723: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:51 compute-0 ceph-mon[192821]: pgmap v1724: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:52 compute-0 nova_compute[351485]: 2025-12-03 02:11:52.375 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:53 compute-0 nova_compute[351485]: 2025-12-03 02:11:53.387 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:53 compute-0 ceph-mon[192821]: pgmap v1725: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:55 compute-0 ceph-mon[192821]: pgmap v1726: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:57 compute-0 nova_compute[351485]: 2025-12-03 02:11:57.379 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:57 compute-0 ceph-mon[192821]: pgmap v1727: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:11:58 compute-0 nova_compute[351485]: 2025-12-03 02:11:58.392 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:11:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:11:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:11:59.641 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:11:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:11:59.642 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:11:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:11:59.643 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:11:59 compute-0 podman[158098]: time="2025-12-03T02:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:11:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:11:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec 03 02:12:00 compute-0 ceph-mon[192821]: pgmap v1728: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:01 compute-0 sshd-session[439071]: Received disconnect from 45.78.219.140 port 59068:11: Bye Bye [preauth]
Dec 03 02:12:01 compute-0 sshd-session[439071]: Disconnected from authenticating user root 45.78.219.140 port 59068 [preauth]
Dec 03 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:12:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:12:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:12:01 compute-0 podman[439075]: 2025-12-03 02:12:01.88263634 +0000 UTC m=+0.120921242 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:12:01 compute-0 podman[439073]: 2025-12-03 02:12:01.884576015 +0000 UTC m=+0.130918395 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:12:01 compute-0 podman[439074]: 2025-12-03 02:12:01.927216651 +0000 UTC m=+0.167490319 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 03 02:12:02 compute-0 ceph-mon[192821]: pgmap v1729: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:02 compute-0 nova_compute[351485]: 2025-12-03 02:12:02.384 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:03 compute-0 nova_compute[351485]: 2025-12-03 02:12:03.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:04 compute-0 ceph-mon[192821]: pgmap v1730: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:06 compute-0 ceph-mon[192821]: pgmap v1731: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:06 compute-0 podman[439131]: 2025-12-03 02:12:06.905444321 +0000 UTC m=+0.151758425 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 02:12:07 compute-0 nova_compute[351485]: 2025-12-03 02:12:07.388 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:08 compute-0 ceph-mon[192821]: pgmap v1732: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:08 compute-0 nova_compute[351485]: 2025-12-03 02:12:08.398 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:10 compute-0 ceph-mon[192821]: pgmap v1733: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:11 compute-0 nova_compute[351485]: 2025-12-03 02:12:11.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:11 compute-0 podman[439153]: 2025-12-03 02:12:11.869619862 +0000 UTC m=+0.097625263 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:12:11 compute-0 podman[439152]: 2025-12-03 02:12:11.883296169 +0000 UTC m=+0.117542796 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=)
Dec 03 02:12:11 compute-0 podman[439160]: 2025-12-03 02:12:11.88968433 +0000 UTC m=+0.103284683 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:12:11 compute-0 podman[439154]: 2025-12-03 02:12:11.922822658 +0000 UTC m=+0.140867127 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=base rhel9)
Dec 03 02:12:11 compute-0 podman[439151]: 2025-12-03 02:12:11.935139276 +0000 UTC m=+0.175399524 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:12:12 compute-0 ceph-mon[192821]: pgmap v1734: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:12 compute-0 nova_compute[351485]: 2025-12-03 02:12:12.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:12 compute-0 nova_compute[351485]: 2025-12-03 02:12:12.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.401 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.642 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.642 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.643 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:12:14 compute-0 ceph-mon[192821]: pgmap v1735: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:12:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/178662724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.174 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.276 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.277 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.277 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.282 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.283 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.283 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.792 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.795 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.796 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.797 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:12:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.022 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.023 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.024 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.025 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:12:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/178662724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.214 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:12:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:12:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352976424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.729 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.745 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.762 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.766 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.767 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.768 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.769 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.799 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:12:16 compute-0 ceph-mon[192821]: pgmap v1736: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2352976424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:12:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.394 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.793 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.794 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.826 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.826 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.827 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:12:18 compute-0 ceph-mon[192821]: pgmap v1737: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.405 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.879 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.879 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.880 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.881 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:12:20 compute-0 ceph-mon[192821]: pgmap v1738: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.911 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.963 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.964 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.966 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.967 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:22 compute-0 ceph-mon[192821]: pgmap v1739: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:22 compute-0 nova_compute[351485]: 2025-12-03 02:12:22.397 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:23 compute-0 nova_compute[351485]: 2025-12-03 02:12:23.409 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:24 compute-0 ceph-mon[192821]: pgmap v1740: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:24 compute-0 nova_compute[351485]: 2025-12-03 02:12:24.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:24 compute-0 nova_compute[351485]: 2025-12-03 02:12:24.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:12:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:26 compute-0 ceph-mon[192821]: pgmap v1741: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:27 compute-0 nova_compute[351485]: 2025-12-03 02:12:27.400 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:28 compute-0 ceph-mon[192821]: pgmap v1742: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:28 compute-0 nova_compute[351485]: 2025-12-03 02:12:28.412 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:12:28
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta']
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:12:28 compute-0 nova_compute[351485]: 2025-12-03 02:12:28.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:28 compute-0 nova_compute[351485]: 2025-12-03 02:12:28.602 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:12:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:12:29 compute-0 podman[158098]: time="2025-12-03T02:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:12:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:12:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
Dec 03 02:12:30 compute-0 ceph-mon[192821]: pgmap v1743: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:30 compute-0 sshd-session[439296]: Received disconnect from 154.113.10.113 port 40032:11: Bye Bye [preauth]
Dec 03 02:12:30 compute-0 sshd-session[439296]: Disconnected from authenticating user root 154.113.10.113 port 40032 [preauth]
Dec 03 02:12:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:12:32 compute-0 ceph-mon[192821]: pgmap v1744: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:32 compute-0 nova_compute[351485]: 2025-12-03 02:12:32.404 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:32 compute-0 podman[439299]: 2025-12-03 02:12:32.878098748 +0000 UTC m=+0.114808729 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 02:12:32 compute-0 podman[439300]: 2025-12-03 02:12:32.901786088 +0000 UTC m=+0.133962211 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:12:32 compute-0 podman[439298]: 2025-12-03 02:12:32.94781617 +0000 UTC m=+0.193039092 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:12:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:33 compute-0 nova_compute[351485]: 2025-12-03 02:12:33.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:33 compute-0 nova_compute[351485]: 2025-12-03 02:12:33.414 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:34 compute-0 ceph-mon[192821]: pgmap v1745: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:36 compute-0 ceph-mon[192821]: pgmap v1746: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:37 compute-0 nova_compute[351485]: 2025-12-03 02:12:37.407 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:37 compute-0 nova_compute[351485]: 2025-12-03 02:12:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:12:37 compute-0 podman[439356]: 2025-12-03 02:12:37.869061777 +0000 UTC m=+0.118666289 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 02:12:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:38 compute-0 ceph-mon[192821]: pgmap v1747: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:38 compute-0 nova_compute[351485]: 2025-12-03 02:12:38.417 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:12:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:40 compute-0 ceph-mon[192821]: pgmap v1748: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:42 compute-0 ceph-mon[192821]: pgmap v1749: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:42 compute-0 nova_compute[351485]: 2025-12-03 02:12:42.412 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:42 compute-0 podman[439380]: 2025-12-03 02:12:42.881057691 +0000 UTC m=+0.105962159 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible)
Dec 03 02:12:42 compute-0 podman[439377]: 2025-12-03 02:12:42.898113784 +0000 UTC m=+0.137930164 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 03 02:12:42 compute-0 podman[439378]: 2025-12-03 02:12:42.907976313 +0000 UTC m=+0.145031395 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:12:42 compute-0 podman[439376]: 2025-12-03 02:12:42.908042165 +0000 UTC m=+0.150212831 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:12:42 compute-0 podman[439379]: 2025-12-03 02:12:42.909263479 +0000 UTC m=+0.136297047 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Dec 03 02:12:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:43 compute-0 nova_compute[351485]: 2025-12-03 02:12:43.420 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:44 compute-0 ceph-mon[192821]: pgmap v1750: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:44 compute-0 sudo[439483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:44 compute-0 sudo[439483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:44 compute-0 sudo[439483]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:45 compute-0 sudo[439508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:12:45 compute-0 sudo[439508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:45 compute-0 sudo[439508]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:45 compute-0 sudo[439533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:45 compute-0 sudo[439533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:45 compute-0 sudo[439533]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:45 compute-0 sudo[439558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 02:12:45 compute-0 sudo[439558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:45 compute-0 sudo[439558]: pam_unix(sudo:session): session closed for user root
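The sudo triplet above (/bin/true, /bin/which python3, then the hashed cephadm copy) is the cephadm mgr module's standard per-host probe sequence over SSH: confirm passwordless sudo works, locate python3, then run the script it previously dropped under /var/lib/ceph/<fsid>/. check-host validates host prerequisites (container engine, time sync, and similar). A sketch of invoking the same check directly, reusing the path and timeout from the log line:

    # run_check_host.py - sketch: call the cephadm copy the orchestrator uses.
    # The path, fsid hash and --timeout value are copied verbatim from the log.
    import subprocess

    CEPHADM = ("/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    subprocess.run(["sudo", "python3", CEPHADM, "--timeout", "895", "check-host"],
                   check=True)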
Dec 03 02:12:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:12:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:12:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:45 compute-0 sudo[439602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:45 compute-0 sudo[439602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:45 compute-0 sudo[439602]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:46 compute-0 sudo[439627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:12:46 compute-0 sudo[439627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:46 compute-0 sudo[439627]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:46 compute-0 sudo[439652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:46 compute-0 sudo[439652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:46 compute-0 sudo[439652]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:46 compute-0 ceph-mon[192821]: pgmap v1751: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:46 compute-0 sudo[439677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:12:46 compute-0 sudo[439677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989159114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989159114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
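The two audit entries above are client.openstack (connecting from 192.168.122.10, the control plane side) polling capacity: a cluster-wide df followed by the quota on the volumes pool, both dispatched as JSON mon commands. The same queries can be replayed from any host holding that keyring; a sketch, assuming a reachable ceph.conf and the client.openstack keyring:

    # pool_capacity.py - sketch: replay the two mon commands from the audit
    # log via the ceph CLI's JSON output.
    import json, subprocess

    def ceph_json(*args):
        out = subprocess.run(
            ["ceph", "--name", "client.openstack", *args, "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = ceph_json("df")                                      # {"prefix":"df"}
    quota = ceph_json("osd", "pool", "get-quota", "volumes")  # per-pool quota
    print(df["stats"]["total_avail_bytes"], quota)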
Dec 03 02:12:47 compute-0 sudo[439677]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 405f590a-c17f-44eb-ad6b-d4408186e97c does not exist
Dec 03 02:12:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 266da04f-cb46-4d6b-9554-53294b60ffc5 does not exist
Dec 03 02:12:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3563e199-36a6-4368-8b3e-1aaea8c4397d does not exist
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:12:47 compute-0 sudo[439732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:47 compute-0 sudo[439732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:47 compute-0 sudo[439732]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/989159114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/989159114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:12:47 compute-0 nova_compute[351485]: 2025-12-03 02:12:47.416 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:47 compute-0 sudo[439757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:12:47 compute-0 sudo[439757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:47 compute-0 sudo[439757]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:47 compute-0 sudo[439782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:47 compute-0 sudo[439782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:47 compute-0 sudo[439782]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:47 compute-0 sudo[439807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:12:47 compute-0 sudo[439807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
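The command above is the OSD creation step itself: cephadm wraps ceph-volume lvm batch inside the pinned ceph image, handing it three pre-created logical volumes with --no-auto (no automatic device grouping), --yes (non-interactive) and --no-systemd (cephadm manages the units itself), while CEPH_VOLUME_OSDSPEC_AFFINITY tags the resulting OSDs with the default_drive_group spec. A sketch of the same invocation as a subprocess call (fsid, image digest, LV paths and timeout copied from the log line):

    # osd_batch.py - sketch of the ceph-volume batch call cephadm issues above.
    # All literals (fsid, image digest, LV paths, timeout) come from the log.
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    CEPHADM = (f"/var/lib/ceph/{FSID}/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # The real call also passes '--config-json -' and pipes a minimal ceph.conf
    # plus the client.bootstrap-osd keyring on stdin; omitted in this sketch.
    subprocess.run(
        ["sudo", "python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "lvm", "batch", "--no-auto", *LVS, "--yes", "--no-systemd"],
        check=True)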
Dec 03 02:12:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:48 compute-0 ceph-mon[192821]: pgmap v1752: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:48 compute-0 nova_compute[351485]: 2025-12-03 02:12:48.424 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.447026211 +0000 UTC m=+0.107267836 container create 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.404994712 +0000 UTC m=+0.065236387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:12:48 compute-0 systemd[1]: Started libpod-conmon-473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5.scope.
Dec 03 02:12:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.641247877 +0000 UTC m=+0.301489532 container init 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.65835126 +0000 UTC m=+0.318592885 container start 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.66505069 +0000 UTC m=+0.325292305 container attach 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:12:48 compute-0 jovial_meninsky[439886]: 167 167
Dec 03 02:12:48 compute-0 systemd[1]: libpod-473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5.scope: Deactivated successfully.
Dec 03 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.674203329 +0000 UTC m=+0.334444954 container died 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 02:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fedc665a06ddf56ee54055670fa062c4b5e5b50f25747657dd99109c6adcc2d-merged.mount: Deactivated successfully.
Dec 03 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.765276916 +0000 UTC m=+0.425518541 container remove 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:12:48 compute-0 systemd[1]: libpod-conmon-473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5.scope: Deactivated successfully.
Dec 03 02:12:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.064469141 +0000 UTC m=+0.083025150 container create cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.035618325 +0000 UTC m=+0.054174334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:12:49 compute-0 systemd[1]: Started libpod-conmon-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope.
Dec 03 02:12:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
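The kernel prints one "supports timestamps until 2038" note per xfs path bind-mounted into the container: these filesystems use the legacy xfs inode format whose timestamps are 32-bit signed seconds (the bigtime feature is absent), so 0x7fffffff is the last representable second. The cutoff is easy to verify:

    # y2038.py - confirm the limit the kernel messages above refer to.
    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00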
Dec 03 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.285277449 +0000 UTC m=+0.303833518 container init cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.311075509 +0000 UTC m=+0.329631518 container start cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.317001757 +0000 UTC m=+0.335557816 container attach cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:12:49 compute-0 ceph-mon[192821]: pgmap v1753: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:50 compute-0 peaceful_kare[439923]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:12:50 compute-0 peaceful_kare[439923]: --> relative data size: 1.0
Dec 03 02:12:50 compute-0 peaceful_kare[439923]: --> All data devices are unavailable
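ceph-volume's verdict above ("passed data devices: 0 physical, 3 LVM", "All data devices are unavailable") is the expected no-op on a host whose LVs already carry OSDs: batch filters out in-use devices, and with all three LVs consumed there is nothing left to create. The lvm list output a few seconds further down confirms each LV is tagged with an existing osd_id. batch also has a side-effect-free dry run; a sketch (run inside the ceph container, or wherever ceph-volume is installed):

    # batch_report.py - sketch: ask ceph-volume what batch *would* do.
    # '--report' prints the plan and exits without creating anything.
    import subprocess

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", "--report",
         "--format", "json",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True)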
Dec 03 02:12:50 compute-0 systemd[1]: libpod-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope: Deactivated successfully.
Dec 03 02:12:50 compute-0 podman[439908]: 2025-12-03 02:12:50.639474632 +0000 UTC m=+1.658030641 container died cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 02:12:50 compute-0 systemd[1]: libpod-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope: Consumed 1.235s CPU time.
Dec 03 02:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75-merged.mount: Deactivated successfully.
Dec 03 02:12:50 compute-0 podman[439908]: 2025-12-03 02:12:50.737314181 +0000 UTC m=+1.755870160 container remove cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:12:50 compute-0 systemd[1]: libpod-conmon-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope: Deactivated successfully.
Dec 03 02:12:50 compute-0 sudo[439807]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:50 compute-0 sudo[439965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:50 compute-0 sudo[439965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:50 compute-0 sudo[439965]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:51 compute-0 sudo[439990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:12:51 compute-0 sudo[439990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:51 compute-0 sudo[439990]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:51 compute-0 sudo[440015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:51 compute-0 sudo[440015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:51 compute-0 sudo[440015]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:51 compute-0 sudo[440040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:12:51 compute-0 sudo[440040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:51 compute-0 ceph-mon[192821]: pgmap v1754: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:51 compute-0 podman[440102]: 2025-12-03 02:12:51.980116234 +0000 UTC m=+0.075282421 container create 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:51.952865563 +0000 UTC m=+0.048031730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:12:52 compute-0 systemd[1]: Started libpod-conmon-6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1.scope.
Dec 03 02:12:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.135165421 +0000 UTC m=+0.230331668 container init 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.151636007 +0000 UTC m=+0.246802194 container start 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.158390178 +0000 UTC m=+0.253556375 container attach 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:12:52 compute-0 festive_cerf[440117]: 167 167
Dec 03 02:12:52 compute-0 systemd[1]: libpod-6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1.scope: Deactivated successfully.
Dec 03 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.167952249 +0000 UTC m=+0.263118436 container died 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 02:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5078db38c57c1206de462c038b03721af1d7ed8c837f1e7feb4f380123ff8ea-merged.mount: Deactivated successfully.
Dec 03 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.251011258 +0000 UTC m=+0.346177445 container remove 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 02:12:52 compute-0 systemd[1]: libpod-conmon-6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1.scope: Deactivated successfully.
Dec 03 02:12:52 compute-0 nova_compute[351485]: 2025-12-03 02:12:52.420 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.499850129 +0000 UTC m=+0.081396174 container create d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.463376457 +0000 UTC m=+0.044922502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:12:52 compute-0 systemd[1]: Started libpod-conmon-d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c.scope.
Dec 03 02:12:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.700495926 +0000 UTC m=+0.282042011 container init d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.720028008 +0000 UTC m=+0.301574043 container start d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.726255064 +0000 UTC m=+0.307801109 container attach d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 03 02:12:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:12:53 compute-0 nova_compute[351485]: 2025-12-03 02:12:53.426 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]: {
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:     "0": [
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:         {
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "devices": [
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "/dev/loop3"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             ],
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_name": "ceph_lv0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_size": "21470642176",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "name": "ceph_lv0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "tags": {
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cluster_name": "ceph",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.crush_device_class": "",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.encrypted": "0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osd_id": "0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.type": "block",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.vdo": "0"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             },
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "type": "block",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "vg_name": "ceph_vg0"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:         }
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:     ],
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:     "1": [
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:         {
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "devices": [
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "/dev/loop4"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             ],
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_name": "ceph_lv1",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_size": "21470642176",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "name": "ceph_lv1",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "tags": {
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cluster_name": "ceph",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.crush_device_class": "",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.encrypted": "0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osd_id": "1",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.type": "block",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.vdo": "0"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             },
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "type": "block",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "vg_name": "ceph_vg1"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:         }
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:     ],
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:     "2": [
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:         {
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "devices": [
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "/dev/loop5"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             ],
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_name": "ceph_lv2",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_size": "21470642176",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "name": "ceph_lv2",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "tags": {
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.cluster_name": "ceph",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.crush_device_class": "",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.encrypted": "0",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osd_id": "2",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.type": "block",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:                 "ceph.vdo": "0"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             },
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "type": "block",
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:             "vg_name": "ceph_vg2"
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:         }
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]:     ]
Dec 03 02:12:53 compute-0 dazzling_archimedes[440155]: }
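The JSON block above is the output of ceph-volume lvm list --format json: a map of osd_id to the logical volume(s) backing it, with the ceph.* LV tags (cluster fsid, osd_fsid, osdspec_affinity, encryption flag) given both as a raw lv_tags string and as a parsed tags object. A small sketch of consuming it, e.g. to summarize OSD id, LV path, backing device and OSD fsid:

    # parse_lvm_list.py - sketch: condense 'ceph-volume lvm list --format json'
    # (the structure printed above) into one line per logical volume.
    import json, subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in sorted(json.loads(out).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:  # one entry per LV role; here each OSD has one 'block' LV
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])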
Dec 03 02:12:53 compute-0 systemd[1]: libpod-d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c.scope: Deactivated successfully.
Dec 03 02:12:53 compute-0 podman[440140]: 2025-12-03 02:12:53.584190098 +0000 UTC m=+1.165736133 container died d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35-merged.mount: Deactivated successfully.
Dec 03 02:12:53 compute-0 podman[440140]: 2025-12-03 02:12:53.683724094 +0000 UTC m=+1.265270109 container remove d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:12:53 compute-0 systemd[1]: libpod-conmon-d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c.scope: Deactivated successfully.
Dec 03 02:12:53 compute-0 sudo[440040]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:53 compute-0 sudo[440175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:53 compute-0 sudo[440175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:53 compute-0 sudo[440175]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:53 compute-0 ceph-mon[192821]: pgmap v1755: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:54 compute-0 sudo[440200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:12:54 compute-0 sudo[440200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:54 compute-0 sudo[440200]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:54 compute-0 sudo[440225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:54 compute-0 sudo[440225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:54 compute-0 sudo[440225]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:54 compute-0 sudo[440250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:12:54 compute-0 sudo[440250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
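After the LVM inventory the orchestrator runs the complementary raw listing, which reports bluestore OSDs prepared directly on raw block devices rather than through LVM; on this host it should come back empty, since all three OSDs are LVM-backed. A sketch of the equivalent direct call:

    # raw_list.py - sketch: the non-LVM counterpart of the listing above.
    import subprocess

    subprocess.run(["ceph-volume", "raw", "list", "--format", "json"], check=True)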
Dec 03 02:12:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:54 compute-0 podman[440311]: 2025-12-03 02:12:54.9135972 +0000 UTC m=+0.089979347 container create 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:12:54 compute-0 podman[440311]: 2025-12-03 02:12:54.88037579 +0000 UTC m=+0.056757987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:12:54 compute-0 systemd[1]: Started libpod-conmon-515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64.scope.
Dec 03 02:12:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.057382248 +0000 UTC m=+0.233764425 container init 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.074248845 +0000 UTC m=+0.250631002 container start 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.080870223 +0000 UTC m=+0.257252430 container attach 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 02:12:55 compute-0 sharp_lalande[440327]: 167 167
Dec 03 02:12:55 compute-0 systemd[1]: libpod-515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64.scope: Deactivated successfully.
Dec 03 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.087267424 +0000 UTC m=+0.263649571 container died 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a523adb9da6a49bc5aca3bd642737554cea8bff897747f58cc18f51a54fb75-merged.mount: Deactivated successfully.
Dec 03 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.179612877 +0000 UTC m=+0.355995014 container remove 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:12:55 compute-0 systemd[1]: libpod-conmon-515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64.scope: Deactivated successfully.
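That was the complete journal signature of one short-lived probe container: image pull check, create, init, start, attach, a single line of output, died, remove, with the libpod and libpod-conmon scopes deactivating around it. The one output line, "167 167" from sharp_lalande, matches the uid and gid of the ceph user in RHEL-family ceph images, so it is plausibly cephadm's probe for which uid/gid should own /var/lib/ceph on the host. A sketch of that sort of probe (an assumption, not lifted from cephadm):

    import subprocess

    def ceph_uid_gid(image: str) -> tuple[int, int]:
        # Ask the image which uid/gid owns its ceph state directory.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat", image,
             "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        return int(out[0]), int(out[1])  # e.g. (167, 167)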
Dec 03 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.464881998 +0000 UTC m=+0.098839958 container create c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.430176046 +0000 UTC m=+0.064134056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:12:55 compute-0 systemd[1]: Started libpod-conmon-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope.
Dec 03 02:12:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
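The four xfs warnings are harmless but worth decoding: the overlay's backing XFS filesystem was created without the bigtime feature, so its inode timestamps are 32-bit seconds and stop at 0x7fffffff, the classic y2038 horizon:

    import datetime

    # 0x7fffffff seconds after the Unix epoch, the limit the kernel is citing:
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00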
Dec 03 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.630855334 +0000 UTC m=+0.264813294 container init c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.663431045 +0000 UTC m=+0.297388975 container start c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.670150095 +0000 UTC m=+0.304108055 container attach c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:12:55 compute-0 ceph-mon[192821]: pgmap v1756: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:56 compute-0 determined_booth[440367]: {
Dec 03 02:12:56 compute-0 determined_booth[440367]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "osd_id": 2,
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "type": "bluestore"
Dec 03 02:12:56 compute-0 determined_booth[440367]:     },
Dec 03 02:12:56 compute-0 determined_booth[440367]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "osd_id": 1,
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "type": "bluestore"
Dec 03 02:12:56 compute-0 determined_booth[440367]:     },
Dec 03 02:12:56 compute-0 determined_booth[440367]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "osd_id": 0,
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:12:56 compute-0 determined_booth[440367]:         "type": "bluestore"
Dec 03 02:12:56 compute-0 determined_booth[440367]:     }
Dec 03 02:12:56 compute-0 determined_booth[440367]: }
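The JSON that determined_booth just printed is the stdout of "ceph-volume raw list": a map keyed by OSD uuid with one entry per BlueStore OSD on this host, each carrying the cluster fsid, the backing LVM device, and the OSD id. Reducing it to an osd_id -> device map is a one-liner:

    import json

    def osd_devices(raw_list_output: str) -> dict:
        inventory = json.loads(raw_list_output)
        return {e["osd_id"]: e["device"] for e in inventory.values()}

    # For the output above:
    # {2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #  1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  0: '/dev/mapper/ceph_vg0-ceph_lv0'}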
Dec 03 02:12:56 compute-0 systemd[1]: libpod-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope: Deactivated successfully.
Dec 03 02:12:56 compute-0 systemd[1]: libpod-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope: Consumed 1.287s CPU time.
Dec 03 02:12:57 compute-0 podman[440400]: 2025-12-03 02:12:57.052414404 +0000 UTC m=+0.067825870 container died c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71-merged.mount: Deactivated successfully.
Dec 03 02:12:57 compute-0 podman[440400]: 2025-12-03 02:12:57.155005066 +0000 UTC m=+0.170416482 container remove c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 02:12:57 compute-0 systemd[1]: libpod-conmon-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope: Deactivated successfully.
Dec 03 02:12:57 compute-0 sudo[440250]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:12:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:12:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
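With the scan done, the mgr persists the refreshed inventory in the monitors' config-key store; the two handle_command lines above are the mon accepting "config-key set" for this host's keys. The cached copy can be read back from any admin host, for example:

    import subprocess

    def cached_inventory(host: str) -> str:
        key = f"mgr/cephadm/host.{host}.devices.0"
        return subprocess.run(
            ["ceph", "config-key", "get", key],
            check=True, capture_output=True, text=True,
        ).stdout

    print(cached_inventory("compute-0"))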
Dec 03 02:12:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8a0aa3cf-3e19-418a-98a6-2b2d96aee487 does not exist
Dec 03 02:12:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a99c2746-8dc0-4764-81e9-af86ab7387d9 does not exist
Dec 03 02:12:57 compute-0 sudo[440413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:12:57 compute-0 sudo[440413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:57 compute-0 sudo[440413]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:57 compute-0 nova_compute[351485]: 2025-12-03 02:12:57.424 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:57 compute-0 sudo[440438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:12:57 compute-0 sudo[440438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:12:57 compute-0 sudo[440438]: pam_unix(sudo:session): session closed for user root
Dec 03 02:12:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
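_set_new_cache_sizes is the mon's periodic memory autotuning; the kv_alloc, inc_alloc and full_alloc figures appear to be its split of the configured memory target between the RocksDB cache and the incremental/full osdmap caches. The target itself is an ordinary option and can be checked with:

    import subprocess

    # Read the mon memory target these cache sizes are derived from:
    print(subprocess.run(
        ["ceph", "config", "get", "mon", "mon_memory_target"],
        check=True, capture_output=True, text=True,
    ).stdout)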
Dec 03 02:12:58 compute-0 ceph-mon[192821]: pgmap v1757: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:12:58 compute-0 nova_compute[351485]: 2025-12-03 02:12:58.430 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:12:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.643 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.644 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.645 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.657 351492 DEBUG nova.compute.manager [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.658 351492 DEBUG nova.compute.manager [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing instance network info cache due to event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.659 351492 DEBUG oslo_concurrency.lockutils [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.660 351492 DEBUG oslo_concurrency.lockutils [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.661 351492 DEBUG nova.network.neutron [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:12:59 compute-0 podman[158098]: time="2025-12-03T02:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:12:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:12:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
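These podman[158098] entries come from a long-running process serving the libpod REST API over a unix socket; the paired GETs (container list, then stats) look like a monitoring agent polling it. The same query can be reproduced by hand; the socket path below is an assumption (the root API service conventionally listens on /run/podman/podman.sock):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket."""

        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read().decode())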
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.883 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.884 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.885 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.886 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.887 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.890 351492 INFO nova.compute.manager [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Terminating instance
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.892 351492 DEBUG nova.compute.manager [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
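The four lockutils lines leading up to "Terminating instance" are oslo.concurrency's standard serialization idiom: do_terminate_instance holds a lock named after the instance UUID so concurrent lifecycle operations on the same instance queue behind each other, and _clear_events briefly takes a separate "<uuid>-events" lock. The pattern, with illustrative names rather than Nova's actual code:

    from oslo_concurrency import lockutils

    # Locks share a common prefix, as in the Lock "b43e79bd-..." lines above:
    synchronized = lockutils.synchronized_with_prefix("nova-")

    @synchronized("b43e79bd-550f-42f8-9aa7-980b6bca3f70")
    def do_terminate_instance():
        # Only one thread can run teardown for this instance at a time.
        ...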
Dec 03 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.909 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.910 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.911 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:13:00 compute-0 kernel: tap6b217cd3-16 (unregistering): left promiscuous mode
Dec 03 02:13:00 compute-0 NetworkManager[48912]: <info>  [1764727980.0743] device (tap6b217cd3-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:13:00 compute-0 ovn_controller[89134]: 2025-12-03T02:13:00Z|00058|binding|INFO|Releasing lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 from this chassis (sb_readonly=0)
Dec 03 02:13:00 compute-0 ovn_controller[89134]: 2025-12-03T02:13:00Z|00059|binding|INFO|Setting lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 down in Southbound
Dec 03 02:13:00 compute-0 ovn_controller[89134]: 2025-12-03T02:13:00Z|00060|binding|INFO|Removing iface tap6b217cd3-16 ovn-installed in OVS
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.101 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:35:ef 192.168.0.85'], port_security=['fa:16:3e:da:35:ef 192.168.0.85'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:cidrs': '192.168.0.85/24', 'neutron:device_id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=6b217cd3-164a-4fb4-8eb6-f1eb3c806963) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.103 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.105 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d
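The OVN side of the teardown is now visible end to end: the tap device left promiscuous mode and went unmanaged in NetworkManager, ovn-controller released the logical port and set it down in the Southbound DB, and the metadata agent's IDL watcher saw the Port_Binding row lose its chassis and reacted. A stripped-down version of such a watcher in ovsdbapp (simplified, not Neutron's actual class):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        def __init__(self):
            # Watch updates to Port_Binding rows (cf. the Matched UPDATE line).
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # Fires when a bound port loses its chassis, as happened above.
            if getattr(old, "chassis", None) and not row.chassis:
                print(f"lport {row.logical_port} unbound from our chassis")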
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.114 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.133 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[482bdd4d-44d4-476f-9737-bb00e8a97622]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 03 02:13:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 23.080s CPU time.
Dec 03 02:13:00 compute-0 systemd-machined[138558]: Machine qemu-4-instance-00000004 terminated.
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.189 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[3d86a6e3-285a-4dea-8271-25b3a138b833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.192 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[43c0462b-bbb1-4c67-ae3a-c037acda73b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.222 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[cf829d92-56ff-4379-acf3-218a815b75b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.252 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a0bcd0-1c16-4af7-b454-a90581a0cc97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 30697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 440475, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:00 compute-0 ceph-mon[192821]: pgmap v1758: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.280 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[77a54cf0-c696-4e66-a271-f0f1a5b36f7c]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 440476, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 440476, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.282 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.285 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.293 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.294 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.294 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.295 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.296 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
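The agent then rewires its metadata tap with three idempotent OVSDB commands: delete the port from br-ex if it exists, add it to br-int if missing, and (re)set the Interface's external_ids:iface-id. "Transaction caused no change" just means the desired state already held. The same sequence through ovsdbapp's public API (the connection string is a placeholder):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port("tap7ba11691-20", bridge="br-ex", if_exists=True))
        txn.add(ovs.add_port("br-int", "tap7ba11691-20", may_exist=True))
        txn.add(ovs.db_set(
            "Interface", "tap7ba11691-20",
            ("external_ids",
             {"iface-id": "8c8945aa-32be-4ced-a7fe-2b9502f30008"})))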
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.363 351492 INFO nova.virt.libvirt.driver [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance destroyed successfully.
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.364 351492 DEBUG nova.objects.instance [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.380 351492 DEBUG nova.virt.libvirt.vif [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',id=4,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:02:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-54gvmjwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:02:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 03 02:13:00 compute-0 nova_compute[351485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=b43e79bd-550f-42f8-9aa7-980b6bca3f70,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.380 351492 DEBUG nova.network.os_vif_util [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.381 351492 DEBUG nova.network.os_vif_util [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.382 351492 DEBUG os_vif [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.386 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.386 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b217cd3-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.393 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.399 351492 INFO os_vif [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16')
Dec 03 02:13:00 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:13:00.380 351492 DEBUG nova.virt.libvirt.vif [None req-00c20947-1b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
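[Annotation] The rsyslogd line above means the nova.virt.libvirt.vif DEBUG record exceeded rsyslog's configured 8096-byte limit and was truncated (the referenced rsyslog error page, e/2445, documents this). If the full VIF dump is wanted in syslog, raising the limit avoids the truncation; a minimal sketch, assuming rsyslog 8.x RainerScript syntax:

    # /etc/rsyslog.conf -- must appear before any input() definitions
    global(maxMessageSize="16384")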
Dec 03 02:13:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.100 351492 DEBUG nova.network.neutron [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated VIF entry in instance network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.101 351492 DEBUG nova.network.neutron [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.120 351492 DEBUG oslo_concurrency.lockutils [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
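[Annotation] The four openstack_network_exporter errors above are expected on this node: a compute host runs neither ovn-northd nor a standalone ovsdb-server control socket, and the dpif-netdev/pmd-* appctl calls only apply to the userspace (DPDK) datapath, while ports here bind with datapath_type "system" (see the VIF details earlier in the log). A quick check of the active datapath, as a sketch:

    import subprocess

    # A kernel-datapath node reports "system@ovs-system", so the pmd-*
    # (netdev-only) appctl calls have no datapath to target.
    out = subprocess.run(['ovs-appctl', 'dpif/show'],
                         capture_output=True, text=True).stdout
    print(out.splitlines()[0] if out else 'no datapath')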
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.776 351492 INFO nova.virt.libvirt.driver [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deleting instance files /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70_del
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.777 351492 INFO nova.virt.libvirt.driver [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deletion of /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70_del complete
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-unplugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] No waiting events found dispatching network-vif-unplugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-unplugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.787 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.787 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.788 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] No waiting events found dispatching network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.788 351492 WARNING nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received unexpected event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for instance with vm_state active and task_state deleting.
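[Annotation] The Acquiring/acquired/released triplets in the event dispatch above come from oslo.concurrency's lock helper (the log cites lockutils.py's inner wrapper directly). A minimal sketch of the pattern, with a hypothetical function standing in for nova's _pop_event:

    from oslo_concurrency import lockutils

    # Produces the same three DEBUG lines: Acquiring -> acquired -> released.
    @lockutils.synchronized('b43e79bd-550f-42f8-9aa7-980b6bca3f70-events')
    def _pop_event():
        pass  # critical section

    _pop_event()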
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.866 351492 INFO nova.compute.manager [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 1.97 seconds to destroy the instance on the hypervisor.
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.868 351492 DEBUG oslo.service.loopingcall [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.869 351492 DEBUG nova.compute.manager [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.870 351492 DEBUG nova.network.neutron [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:13:02 compute-0 ceph-mon[192821]: pgmap v1759: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:13:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 116 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 10 op/s
Dec 03 02:13:02 compute-0 nova_compute[351485]: 2025-12-03 02:13:02.970 351492 DEBUG nova.network.neutron [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:13:02 compute-0 nova_compute[351485]: 2025-12-03 02:13:02.991 351492 INFO nova.compute.manager [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 1.12 seconds to deallocate network for instance.
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.049 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.049 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.160 351492 DEBUG oslo_concurrency.processutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:13:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.433 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:13:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117242781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.642 351492 DEBUG oslo_concurrency.processutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
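[Annotation] The resource tracker's storage probe above shells out to the ceph CLI via oslo.concurrency. A sketch of the same call, parsing the usual "ceph df --format=json" layout (the key names below are assumed from standard Ceph output, not shown in this log):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] / (1 << 30), 'GiB free')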
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.657 351492 DEBUG nova.compute.provider_tree [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.687 351492 DEBUG nova.scheduler.client.report [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
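[Annotation] For reference, placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises:

    # Worked directly from the inventory dict in the log line above.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2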
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.721 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.768 351492 INFO nova.scheduler.client.report [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance b43e79bd-550f-42f8-9aa7-980b6bca3f70
Dec 03 02:13:03 compute-0 podman[440530]: 2025-12-03 02:13:03.857175633 +0000 UTC m=+0.108422108 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
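[Annotation] The container health_status records here and below are emitted by podman's periodic healthcheck runs against the mounted /openstack/healthcheck scripts. The same check can be triggered by hand; a sketch, assuming the same privileges podman runs with (container name from the log line above):

    import subprocess

    # Exit code 0 == healthy.
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_metadata_agent']).returncode
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)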
Dec 03 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.874 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:03 compute-0 podman[440531]: 2025-12-03 02:13:03.883655092 +0000 UTC m=+0.137494531 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 02:13:03 compute-0 podman[440532]: 2025-12-03 02:13:03.901273301 +0000 UTC m=+0.141183566 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:13:03 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:03.914 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:13:04 compute-0 ceph-mon[192821]: pgmap v1760: 321 pgs: 321 active+clean; 116 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 10 op/s
Dec 03 02:13:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3117242781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 101 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 511 B/s wr, 11 op/s
Dec 03 02:13:05 compute-0 nova_compute[351485]: 2025-12-03 02:13:05.392 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:06 compute-0 ceph-mon[192821]: pgmap v1761: 321 pgs: 321 active+clean; 101 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 511 B/s wr, 11 op/s
Dec 03 02:13:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:08 compute-0 ceph-mon[192821]: pgmap v1762: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:08 compute-0 nova_compute[351485]: 2025-12-03 02:13:08.435 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:08 compute-0 podman[440589]: 2025-12-03 02:13:08.884256788 +0000 UTC m=+0.128347162 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:13:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:10 compute-0 ceph-mon[192821]: pgmap v1763: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:10 compute-0 nova_compute[351485]: 2025-12-03 02:13:10.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:12 compute-0 ceph-mon[192821]: pgmap v1764: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:12 compute-0 nova_compute[351485]: 2025-12-03 02:13:12.603 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
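[Annotation] The "Running periodic task ComputeManager._*" lines here and below are driven by oslo.service's periodic-task machinery. A minimal sketch of the mechanism (hypothetical manager class; the spacing value is illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            pass  # nova checks rescue timeouts in its real implementation

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)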
Dec 03 02:13:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec 03 02:13:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.441 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.613 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:13:13 compute-0 podman[440611]: 2025-12-03 02:13:13.869674652 +0000 UTC m=+0.108688406 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64, version=9.6)
Dec 03 02:13:13 compute-0 podman[440612]: 2025-12-03 02:13:13.877786572 +0000 UTC m=+0.117157106 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:13:13 compute-0 podman[440613]: 2025-12-03 02:13:13.886759686 +0000 UTC m=+0.116542309 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec 03 02:13:13 compute-0 podman[440614]: 2025-12-03 02:13:13.888351141 +0000 UTC m=+0.121851329 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec 03 02:13:13 compute-0 podman[440610]: 2025-12-03 02:13:13.964163076 +0000 UTC m=+0.210427355 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:13:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:13:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657833017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.144 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.259 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.259 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.260 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
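[Annotation] The three "skipping disk" lines are expected for RBD-backed guests: a network disk in the domain XML carries a <source protocol='rbd' .../> element and no local file path, so the disk-info scan has nothing on the local filesystem to measure. An illustrative check (the pool/volume name below is hypothetical):

    import xml.etree.ElementTree as ET

    disk = ET.fromstring(
        "<disk type='network' device='disk'>"
        "<source protocol='rbd' name='vms/instance-00000001_disk'/>"
        "</disk>")
    # No 'file' attribute on a network disk -> nova logs "skipping disk".
    print(disk.find('source').get('file'))  # None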
Dec 03 02:13:14 compute-0 ceph-mon[192821]: pgmap v1765: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec 03 02:13:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2657833017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.913 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.915 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=59.9552001953125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.916 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.917 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.000 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.001 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.002 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.036 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.357 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727980.3559287, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.358 351492 INFO nova.compute.manager [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Stopped (Lifecycle Event)
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.390 351492 DEBUG nova.compute.manager [None req-32dde6c3-7b78-419d-997c-a680123d8031 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.399 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:13:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268019785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.604 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.618 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.643 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.646 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.646 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:16 compute-0 ceph-mon[192821]: pgmap v1766: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 03 02:13:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/268019785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 03 02:13:17 compute-0 ceph-mon[192821]: pgmap v1767: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 03 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.649 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.650 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.689 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.690 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:18 compute-0 nova_compute[351485]: 2025-12-03 02:13:18.446 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:18 compute-0 nova_compute[351485]: 2025-12-03 02:13:18.612 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.305 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.307 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.308 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.308 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.308 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.311 351492 INFO nova.compute.manager [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Terminating instance
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.313 351492 DEBUG nova.compute.manager [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
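The req-c046b59f... context running do_terminate_instance is the compute-side handling of a server delete. The journal does not record the client invocation that triggered it; one plausible trigger, sketched with the standard OpenStack CLI via subprocess (UUID taken from the log):

    # Hypothetical client-side trigger for the termination sequence above.
    import subprocess

    subprocess.run(['openstack', 'server', 'delete',
                    '9182286b-5a08-4961-b4bb-c0e2f05746f7'], check=True)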
Dec 03 02:13:19 compute-0 kernel: tapd2a50b9b-c2 (unregistering): left promiscuous mode
Dec 03 02:13:19 compute-0 NetworkManager[48912]: <info>  [1764727999.5024] device (tapd2a50b9b-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:13:19 compute-0 ovn_controller[89134]: 2025-12-03T02:13:19Z|00061|binding|INFO|Releasing lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 from this chassis (sb_readonly=0)
Dec 03 02:13:19 compute-0 ovn_controller[89134]: 2025-12-03T02:13:19Z|00062|binding|INFO|Setting lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 down in Southbound
Dec 03 02:13:19 compute-0 ovn_controller[89134]: 2025-12-03T02:13:19Z|00063|binding|INFO|Removing iface tapd2a50b9b-c2 ovn-installed in OVS
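ovn-controller is releasing the logical port from this chassis, flipping it down in the Southbound database, and removing the ovn-installed marker from the OVS interface. The Southbound row it is editing can be inspected with the standard ovn-sbctl CLI; a sketch (Port_Binding records can be addressed by logical port name, here the Neutron port UUID):

    # Sketch: dump the Southbound Port_Binding row being released above.
    import subprocess

    subprocess.run(['ovn-sbctl', 'list', 'Port_Binding',
                    'd2a50b9b-c23e-4e96-a247-ba01de01a3f1'], check=True)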
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.510 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.511 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.509 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.513 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
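The long run of "Registering pollster ..." lines above shows each pollster being handed to a shared ThreadPoolExecutor, sized at a single worker per the earlier "Processing pollsters for [pollsters] with [1] threads" line. A stdlib-only sketch of that dispatch pattern (pollster names are taken from the log; the poll function is illustrative):

    # Sketch of the register-then-execute pattern in the lines above.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return f'polled {name}'  # stands in for a pollster's get_samples()

    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, name)
                   for name in ('memory.usage', 'network.incoming.bytes')]
        for future in futures:
            print(future.result())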
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.519 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:a6:32 192.168.0.5'], port_security=['fa:16:3e:8f:a6:32 192.168.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.5/24', 'neutron:device_id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d2a50b9b-c23e-4e96-a247-ba01de01a3f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.522 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.524 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ba11691-2711-476c-9191-cb6dfd0efa7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.527 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc217da-257a-4e81-99c6-caa9bae30e4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.528 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d namespace which is not needed anymore
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.540 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 03 02:13:19 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 37.900s CPU time.
Dec 03 02:13:19 compute-0 systemd-machined[138558]: Machine qemu-1-instance-00000001 terminated.
Dec 03 02:13:19 compute-0 kernel: tapd2a50b9b-c2: entered promiscuous mode
Dec 03 02:13:19 compute-0 kernel: tapd2a50b9b-c2 (unregistering): left promiscuous mode
Dec 03 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : haproxy version is 2.8.14-c23fe91
Dec 03 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : path to executable is /usr/sbin/haproxy
Dec 03 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [WARNING]  (414882) : Exiting Master process...
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.766 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [ALERT]    (414882) : Current worker (414906) exited with code 143 (Terminated)
Dec 03 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [WARNING]  (414882) : All workers exited. Exiting... (0)
Dec 03 02:13:19 compute-0 systemd[1]: libpod-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28.scope: Deactivated successfully.
Dec 03 02:13:19 compute-0 podman[440784]: 2025-12-03 02:13:19.779454416 +0000 UTC m=+0.091175661 container died 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.786 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'shutdown', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'stopped', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.787 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.788 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.790 351492 INFO nova.virt.libvirt.driver [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance destroyed successfully.
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.790 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:13:19.788762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.789 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.792 351492 DEBUG nova.objects.instance [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.795 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of memory.usage: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.795 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
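Every pollster that touches this instance now fails the same way: the domain reached SHUTOFF between discovery (which still reported vm_state 'shutdown', status 'stopped' above) and sampling. A sketch of the underlying state check with the libvirt Python binding (URI assumed; the lookup only succeeds while the domain is still defined):

    # Sketch: test for the SHUTOFF state the pollster errors report.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('9182286b-5a08-4961-b4bb-c0e2f05746f7')
    state, _reason = dom.state()
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        print('domain is SHUTOFF; skip sampling')
    conn.close()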
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.797 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:13:19.798963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.799 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.packets: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.bytes.delta: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:13:19.803897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:13:19.805633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.packets.drop: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:13:19.807848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.packets.error: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.810 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.810 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.811 351492 DEBUG nova.virt.libvirt.vif [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T01:54:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T01:54:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-2j005007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T01:54:47Z,user_data=None,user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=9182286b-5a08-4961-b4bb-c0e2f05746f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.packets.error: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:13:19.810120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.811 351492 DEBUG nova.network.os_vif_util [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.813 351492 DEBUG nova.network.os_vif_util [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.813 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.capacity: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.813 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:13:19.812261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.814 351492 DEBUG os_vif [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:13:19.814811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.read.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.817 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.817 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.817 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2a50b9b-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
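The unplug path ends in an ovsdbapp transaction whose single DelPortCommand removes the tap from br-int. The equivalent one-off operation with the standard OVS CLI, sketched via subprocess (ovsdbapp performs the same change transactionally over the OVSDB protocol rather than shelling out; port and bridge names are from the log):

    # Sketch: the CLI equivalent of the DelPortCommand logged above.
    import subprocess

    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port',
                    'br-int', 'tapd2a50b9b-c2'], check=True)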
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:13:19.817050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:13:19.819361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.823 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.read.latency: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.825 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:13:19.825065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.826 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.826 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.read.requests: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.826 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:13:19.827350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of power.state: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:13:19.829719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.830 351492 INFO os_vif [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2')
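The os_vif INFO line is nova handing the logged VIFOpenVSwitch to the 'ovs' plugin for unplugging. Driving the library directly looks roughly like the sketch below; the field values are copied from the log, but a complete VIF also carries the Network and port-profile objects shown there, so treat this as an outline rather than a working teardown:

    import os_vif
    from os_vif.objects import instance_info as obj_instance
    from os_vif.objects import vif as obj_vif

    os_vif.initialize()  # load the os-vif plugins ('ovs' among them)

    vif = obj_vif.VIFOpenVSwitch(
        id='d2a50b9b-c23e-4e96-a247-ba01de01a3f1',
        address='fa:16:3e:8f:a6:32',
        bridge_name='br-int',
        vif_name='tapd2a50b9b-c2')   # network/port_profile omitted in this sketch
    inst = obj_instance.InstanceInfo(
        uuid='9182286b-5a08-4961-b4bb-c0e2f05746f7',
        name='instance-00000001')

    # Dispatches to the 'ovs' plugin, which removes the port from br-int;
    # success is what nova logs as "Successfully unplugged vif ...".
    os_vif.unplug(vif, inst)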
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.831 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.usage: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:13:19.832723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.834 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.write.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:13:19.835501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.write.requests: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:13:19.837907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:13:19.840170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.841 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.packets: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.843 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:13:19.842985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28-userdata-shm.mount: Deactivated successfully.
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.850 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of cpu: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.851 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-be2b42ad51a1eafabc174b54703a8a7fc40735ce50000101ab3bd4077ab4d5c6-merged.mount: Deactivated successfully.
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.852 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.855 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:13:19.852683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:13:19.854798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.857 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.862 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:13:19.862669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.864 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.allocation: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 podman[440784]: 2025-12-03 02:13:19.871397847 +0000 UTC m=+0.183119092 container cleanup 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.872 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:13:19.872027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.874 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.packets.drop: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.875 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.877 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:13:19.877318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.880 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.881 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:13:19.881170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.883 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.bytes.delta: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
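A pollster can also be skipped before any sampling, as with network.outgoing.bytes.rate here: when the discovery step returns no (new) resources for the cycle, the manager short-circuits. Sketch of that gate (names illustrative, not ceilometer's code):

    def run_pollster(pollster_name, discovered_resources):
        # "Skip pollster <name>, no new resources found this cycle"
        if not discovered_resources:
            print(f"DEBUG Skip pollster {pollster_name}, "
                  "no new resources found this cycle")
            return []
        return [(pollster_name, resource) for resource in discovered_resources]

    assert run_pollster('network.outgoing.bytes.rate', []) == []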
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 systemd[1]: libpod-conmon-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28.scope: Deactivated successfully.
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.889 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.889 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.889 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.890 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.890 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.890 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.891 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.891 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.891 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:13:19 compute-0 ceph-mon[192821]: pgmap v1768: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:19 compute-0 podman[440830]: 2025-12-03 02:13:19.976712087 +0000 UTC m=+0.068250912 container remove 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
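The podman "container cleanup" and "container remove" events, with the privsep reply on the next line capturing the wrapper's "Stopping container ... Deleting container ..." output, amount to a stop-then-delete of the ovnmeta haproxy sidecar. The plain CLI equivalent, using the container ID from the log:

    import subprocess

    CONTAINER = '08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28'

    subprocess.run(['podman', 'stop', CONTAINER], check=True)  # "Stopping container"
    subprocess.run(['podman', 'rm', CONTAINER], check=True)    # "Deleting container"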
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.986 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[43285fa9-cb31-4b8b-a768-ca48373d86e3]: (4, ('Wed Dec  3 02:13:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d (08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28)\n08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28\nWed Dec  3 02:13:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d (08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28)\n08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.988 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[01673a78-677a-4f0d-9c15-e104bb3222c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.989 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 kernel: tap7ba11691-20: left promiscuous mode
Dec 03 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.996 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.998 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[9809a70b-3fc2-4abf-9803-a5f5c345baf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.005 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.013 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4b78f996-0e4e-4d28-8dbd-b158ad79ce75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.014 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2998aec4-8bd4-4dbd-b4a3-0e09e2b614c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.036 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[137531a9-9696-4d4e-b4a6-19962e234d46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573032, 'reachable_time': 42038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 440847, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
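The oversized privsep reply above is a pyroute2 RTM_NEWLINK dump of the loopback device inside the ovnmeta namespace, taken just before the namespace is torn down. Reading the same attributes directly with pyroute2 (the library neutron's privileged ip_lib wraps) would look like this sketch, assuming the namespace still exists:

    from pyroute2 import NetNS

    NS = 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d'

    with NetNS(NS) as ip:
        for link in ip.get_links():
            # e.g. 'lo' 'up' '00:00:00:00:00:00', matching the dump above
            print(link.get_attr('IFLA_IFNAME'), link['state'],
                  link.get_attr('IFLA_ADDRESS'))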
Dec 03 02:13:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ba11691\x2d2711\x2d476c\x2d9191\x2dcb6dfd0efa7d.mount: Deactivated successfully.
Dec 03 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.052 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
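The remove_netns logged by neutron's privileged ip_lib boils down to a pyroute2 namespace removal, after which systemd reports the corresponding run-netns mount deactivated (the line just before the privsep reply). A minimal equivalent, guarded so it is a no-op once the namespace is already gone:

    from pyroute2 import netns

    NS = 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d'

    if NS in netns.listnetns():
        netns.remove(NS)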
Dec 03 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.054 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[a259cd59-ef0d-43eb-9062-464f4c9e8c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.129 351492 DEBUG nova.compute.manager [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-unplugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.129 351492 DEBUG oslo_concurrency.lockutils [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.130 351492 DEBUG oslo_concurrency.lockutils [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.130 351492 DEBUG oslo_concurrency.lockutils [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.131 351492 DEBUG nova.compute.manager [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] No waiting events found dispatching network-vif-unplugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.132 351492 DEBUG nova.compute.manager [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-unplugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.084 351492 INFO nova.virt.libvirt.driver [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deleting instance files /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7_del
Dec 03 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.085 351492 INFO nova.virt.libvirt.driver [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deletion of /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7_del complete
Dec 03 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.161 351492 INFO nova.compute.manager [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 1.85 seconds to destroy the instance on the hypervisor.
Dec 03 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.161 351492 DEBUG oslo.service.loopingcall [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.162 351492 DEBUG nova.compute.manager [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.162 351492 DEBUG nova.network.neutron [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:13:21 compute-0 ceph-mon[192821]: pgmap v1769: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.251 351492 DEBUG nova.compute.manager [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.252 351492 DEBUG oslo_concurrency.lockutils [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.252 351492 DEBUG oslo_concurrency.lockutils [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.252 351492 DEBUG oslo_concurrency.lockutils [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.253 351492 DEBUG nova.compute.manager [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] No waiting events found dispatching network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.253 351492 WARNING nova.compute.manager [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received unexpected event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for instance with vm_state active and task_state deleting.
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.644 351492 DEBUG nova.network.neutron [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.661 351492 INFO nova.compute.manager [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 1.50 seconds to deallocate network for instance.
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.709 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.710 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.735 351492 DEBUG nova.compute.manager [req-4f75bc96-6b4a-4f96-9867-39e13ebf7be6 req-a06dd9cd-5261-48df-9d0b-4eb977fdd646 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-deleted-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.788 351492 DEBUG oslo_concurrency.processutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:13:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 51 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Dec 03 02:13:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:13:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529706778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.318 351492 DEBUG oslo_concurrency.processutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.323 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.330 351492 DEBUG nova.compute.provider_tree [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.361 351492 WARNING nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.361 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.362 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.367 351492 DEBUG nova.scheduler.client.report [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.401 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.449 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.455 351492 INFO nova.scheduler.client.report [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 9182286b-5a08-4961-b4bb-c0e2f05746f7
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.560 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.562 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.586 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.615 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:23 compute-0 ceph-mon[192821]: pgmap v1770: 321 pgs: 321 active+clean; 51 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Dec 03 02:13:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2529706778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:13:24 compute-0 nova_compute[351485]: 2025-12-03 02:13:24.823 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 31 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Dec 03 02:13:26 compute-0 ceph-mon[192821]: pgmap v1771: 321 pgs: 321 active+clean; 31 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Dec 03 02:13:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:28 compute-0 ceph-mon[192821]: pgmap v1772: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:13:28
Dec 03 02:13:28 compute-0 nova_compute[351485]: 2025-12-03 02:13:28.452 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log']
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:13:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:13:29 compute-0 nova_compute[351485]: 2025-12-03 02:13:29.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:13:29 compute-0 nova_compute[351485]: 2025-12-03 02:13:29.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:13:29 compute-0 podman[158098]: time="2025-12-03T02:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:13:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:13:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8177 "" "Go-http-client/1.1"
Dec 03 02:13:29 compute-0 nova_compute[351485]: 2025-12-03 02:13:29.827 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:30 compute-0 ceph-mon[192821]: pgmap v1773: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:13:32 compute-0 ceph-mon[192821]: pgmap v1774: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 03 02:13:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec 03 02:13:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:33 compute-0 nova_compute[351485]: 2025-12-03 02:13:33.455 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:34 compute-0 ceph-mon[192821]: pgmap v1775: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec 03 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.785 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727999.7830377, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.785 351492 INFO nova.compute.manager [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Stopped (Lifecycle Event)
Dec 03 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.802 351492 DEBUG nova.compute.manager [None req-399f83d3-a8c6-4f88-9dde-2680da6c20e6 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.830 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:34 compute-0 podman[440872]: 2025-12-03 02:13:34.86242773 +0000 UTC m=+0.113640596 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:13:34 compute-0 podman[440874]: 2025-12-03 02:13:34.876130027 +0000 UTC m=+0.116014953 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:13:34 compute-0 podman[440873]: 2025-12-03 02:13:34.888706403 +0000 UTC m=+0.133080906 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:13:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 03 02:13:36 compute-0 ceph-mon[192821]: pgmap v1776: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 03 02:13:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 4 op/s
Dec 03 02:13:38 compute-0 ceph-mon[192821]: pgmap v1777: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 4 op/s
Dec 03 02:13:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:38 compute-0 nova_compute[351485]: 2025-12-03 02:13:38.459 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:13:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:39 compute-0 nova_compute[351485]: 2025-12-03 02:13:39.832 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:39 compute-0 podman[440930]: 2025-12-03 02:13:39.879140678 +0000 UTC m=+0.131294096 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 02:13:40 compute-0 ceph-mon[192821]: pgmap v1778: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:42 compute-0 ceph-mon[192821]: pgmap v1779: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:42 compute-0 sshd-session[440950]: Invalid user cc from 146.190.144.138 port 48304
Dec 03 02:13:42 compute-0 sshd-session[440950]: Received disconnect from 146.190.144.138 port 48304:11: Bye Bye [preauth]
Dec 03 02:13:42 compute-0 sshd-session[440950]: Disconnected from invalid user cc 146.190.144.138 port 48304 [preauth]
Dec 03 02:13:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:43 compute-0 nova_compute[351485]: 2025-12-03 02:13:43.463 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:44 compute-0 ceph-mon[192821]: pgmap v1780: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:44 compute-0 nova_compute[351485]: 2025-12-03 02:13:44.837 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:44 compute-0 podman[440955]: 2025-12-03 02:13:44.844454622 +0000 UTC m=+0.111013152 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, distribution-scope=public, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, container_name=kepler, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:13:44 compute-0 podman[440961]: 2025-12-03 02:13:44.850631637 +0000 UTC m=+0.106631438 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 03 02:13:44 compute-0 podman[440954]: 2025-12-03 02:13:44.85533097 +0000 UTC m=+0.133667113 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:13:44 compute-0 podman[440953]: 2025-12-03 02:13:44.875193592 +0000 UTC m=+0.156065017 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 02:13:44 compute-0 podman[440952]: 2025-12-03 02:13:44.903456321 +0000 UTC m=+0.193094384 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:13:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:46 compute-0 ceph-mon[192821]: pgmap v1781: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:13:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077358783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:13:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:13:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077358783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:13:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2077358783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:13:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2077358783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:13:48 compute-0 ceph-mon[192821]: pgmap v1782: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:48 compute-0 nova_compute[351485]: 2025-12-03 02:13:48.466 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:49 compute-0 nova_compute[351485]: 2025-12-03 02:13:49.840 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:50 compute-0 ceph-mon[192821]: pgmap v1783: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:52 compute-0 ceph-mon[192821]: pgmap v1784: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:52 compute-0 ovn_controller[89134]: 2025-12-03T02:13:52Z|00064|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 03 02:13:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:53 compute-0 nova_compute[351485]: 2025-12-03 02:13:53.469 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:54 compute-0 ceph-mon[192821]: pgmap v1785: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:54 compute-0 nova_compute[351485]: 2025-12-03 02:13:54.844 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:56 compute-0 ceph-mon[192821]: pgmap v1786: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:57 compute-0 sudo[441057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:13:57 compute-0 sudo[441057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:57 compute-0 sudo[441057]: pam_unix(sudo:session): session closed for user root
Dec 03 02:13:57 compute-0 sudo[441082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:13:57 compute-0 sudo[441082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:57 compute-0 sudo[441082]: pam_unix(sudo:session): session closed for user root
Dec 03 02:13:57 compute-0 sudo[441107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:13:57 compute-0 sudo[441107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:57 compute-0 sudo[441107]: pam_unix(sudo:session): session closed for user root
Dec 03 02:13:58 compute-0 sudo[441132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:13:58 compute-0 sudo[441132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:13:58 compute-0 ceph-mon[192821]: pgmap v1787: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:13:58 compute-0 nova_compute[351485]: 2025-12-03 02:13:58.472 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:13:58 compute-0 sudo[441132]: pam_unix(sudo:session): session closed for user root
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 82665bd8-1d24-4239-ac89-4e1cf701ada3 does not exist
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 87c00d6d-dacd-4bee-a59d-2aac79096476 does not exist
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f5eca8ff-705a-4915-bb84-772bbe55622b does not exist
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:13:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:13:58 compute-0 sudo[441186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:13:58 compute-0 sudo[441186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:58 compute-0 sudo[441186]: pam_unix(sudo:session): session closed for user root
Dec 03 02:13:59 compute-0 sudo[441211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:13:59 compute-0 sudo[441211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:59 compute-0 sudo[441211]: pam_unix(sudo:session): session closed for user root
Dec 03 02:13:59 compute-0 sudo[441236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:13:59 compute-0 sudo[441236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:13:59 compute-0 sudo[441236]: pam_unix(sudo:session): session closed for user root
Dec 03 02:13:59 compute-0 sudo[441261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:13:59 compute-0 sudo[441261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:13:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:59.645 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:13:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:59.646 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:13:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:59.646 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:13:59 compute-0 podman[158098]: time="2025-12-03T02:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:13:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:13:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8187 "" "Go-http-client/1.1"
Dec 03 02:13:59 compute-0 nova_compute[351485]: 2025-12-03 02:13:59.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.062956566 +0000 UTC m=+0.119557414 container create 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.002467475 +0000 UTC m=+0.059068363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:14:00 compute-0 systemd[1]: Started libpod-conmon-9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e.scope.
Dec 03 02:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.244364989 +0000 UTC m=+0.300965897 container init 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.264835858 +0000 UTC m=+0.321436696 container start 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.271085695 +0000 UTC m=+0.327686543 container attach 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:14:00 compute-0 thirsty_williamson[441343]: 167 167
Dec 03 02:14:00 compute-0 systemd[1]: libpod-9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e.scope: Deactivated successfully.
Dec 03 02:14:00 compute-0 ceph-mon[192821]: pgmap v1788: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.278808323 +0000 UTC m=+0.335409171 container died 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 02:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-60cb65c6f35898da597774f7f4d409c7e47881663410b208bbfb5a2474299c98-merged.mount: Deactivated successfully.
Dec 03 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.365783344 +0000 UTC m=+0.422384182 container remove 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:14:00 compute-0 systemd[1]: libpod-conmon-9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e.scope: Deactivated successfully.
Dec 03 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.676944228 +0000 UTC m=+0.099077535 container create beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.641914237 +0000 UTC m=+0.064047614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:14:00 compute-0 systemd[1]: Started libpod-conmon-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope.
Dec 03 02:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.85338642 +0000 UTC m=+0.275519807 container init beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.891389385 +0000 UTC m=+0.313522702 container start beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.898584169 +0000 UTC m=+0.320717546 container attach beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:14:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:01 compute-0 sshd-session[441337]: Invalid user cumulus from 154.113.10.113 port 34164
Dec 03 02:14:01 compute-0 sshd-session[441337]: Received disconnect from 154.113.10.113 port 34164:11: Bye Bye [preauth]
Dec 03 02:14:01 compute-0 sshd-session[441337]: Disconnected from invalid user cumulus 154.113.10.113 port 34164 [preauth]
Dec 03 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:14:02 compute-0 focused_noether[441382]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:14:02 compute-0 focused_noether[441382]: --> relative data size: 1.0
Dec 03 02:14:02 compute-0 focused_noether[441382]: --> All data devices are unavailable
Dec 03 02:14:02 compute-0 systemd[1]: libpod-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope: Deactivated successfully.
Dec 03 02:14:02 compute-0 systemd[1]: libpod-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope: Consumed 1.169s CPU time.
Dec 03 02:14:02 compute-0 podman[441411]: 2025-12-03 02:14:02.206840832 +0000 UTC m=+0.049478161 container died beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 02:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa-merged.mount: Deactivated successfully.
Dec 03 02:14:02 compute-0 ceph-mon[192821]: pgmap v1789: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:02 compute-0 podman[441411]: 2025-12-03 02:14:02.315424724 +0000 UTC m=+0.158062053 container remove beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:14:02 compute-0 systemd[1]: libpod-conmon-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope: Deactivated successfully.
Dec 03 02:14:02 compute-0 sudo[441261]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:02 compute-0 sudo[441424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:14:02 compute-0 sudo[441424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:02 compute-0 sudo[441424]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:02 compute-0 sudo[441449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:14:02 compute-0 sudo[441449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:02 compute-0 sudo[441449]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:02 compute-0 sudo[441474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:14:02 compute-0 sudo[441474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:02 compute-0 sudo[441474]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:02 compute-0 sudo[441499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:14:02 compute-0 sudo[441499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:03 compute-0 nova_compute[351485]: 2025-12-03 02:14:03.476 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.48234623 +0000 UTC m=+0.101640146 container create 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.435242638 +0000 UTC m=+0.054536604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:14:03 compute-0 systemd[1]: Started libpod-conmon-5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9.scope.
Dec 03 02:14:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.614622533 +0000 UTC m=+0.233916449 container init 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.625612554 +0000 UTC m=+0.244906430 container start 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.630832722 +0000 UTC m=+0.250126608 container attach 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:14:03 compute-0 friendly_blackburn[441577]: 167 167
Dec 03 02:14:03 compute-0 systemd[1]: libpod-5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9.scope: Deactivated successfully.
Dec 03 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.632767246 +0000 UTC m=+0.252061122 container died 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d545dcbb67281ede438f0e7fbb66a8401c822d77d7fe08ccb547a39b7c3d929-merged.mount: Deactivated successfully.
Dec 03 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.678443419 +0000 UTC m=+0.297737295 container remove 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:14:03 compute-0 systemd[1]: libpod-conmon-5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9.scope: Deactivated successfully.
Dec 03 02:14:03 compute-0 podman[441600]: 2025-12-03 02:14:03.93402085 +0000 UTC m=+0.099875847 container create f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:14:03 compute-0 podman[441600]: 2025-12-03 02:14:03.8969347 +0000 UTC m=+0.062789757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:14:04 compute-0 systemd[1]: Started libpod-conmon-f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7.scope.
Dec 03 02:14:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:04 compute-0 podman[441600]: 2025-12-03 02:14:04.098419701 +0000 UTC m=+0.264274758 container init f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:14:04 compute-0 podman[441600]: 2025-12-03 02:14:04.122966745 +0000 UTC m=+0.288821732 container start f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 02:14:04 compute-0 podman[441600]: 2025-12-03 02:14:04.129928592 +0000 UTC m=+0.295783629 container attach f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:14:04 compute-0 ceph-mon[192821]: pgmap v1790: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:04 compute-0 nova_compute[351485]: 2025-12-03 02:14:04.853 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]: {
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:     "0": [
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:         {
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "devices": [
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "/dev/loop3"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             ],
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_name": "ceph_lv0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_size": "21470642176",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "name": "ceph_lv0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "tags": {
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cluster_name": "ceph",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.crush_device_class": "",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.encrypted": "0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osd_id": "0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.type": "block",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.vdo": "0"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             },
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "type": "block",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "vg_name": "ceph_vg0"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:         }
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:     ],
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:     "1": [
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:         {
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "devices": [
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "/dev/loop4"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             ],
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_name": "ceph_lv1",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_size": "21470642176",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "name": "ceph_lv1",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "tags": {
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cluster_name": "ceph",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.crush_device_class": "",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.encrypted": "0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osd_id": "1",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.type": "block",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.vdo": "0"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             },
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "type": "block",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "vg_name": "ceph_vg1"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:         }
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:     ],
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:     "2": [
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:         {
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "devices": [
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "/dev/loop5"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             ],
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_name": "ceph_lv2",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_size": "21470642176",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "name": "ceph_lv2",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "tags": {
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.cluster_name": "ceph",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.crush_device_class": "",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.encrypted": "0",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osd_id": "2",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.type": "block",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:                 "ceph.vdo": "0"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             },
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "type": "block",
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:             "vg_name": "ceph_vg2"
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:         }
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]:     ]
Dec 03 02:14:04 compute-0 beautiful_bassi[441615]: }
Dec 03 02:14:05 compute-0 systemd[1]: libpod-f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7.scope: Deactivated successfully.
Dec 03 02:14:05 compute-0 podman[441600]: 2025-12-03 02:14:05.007875432 +0000 UTC m=+1.173730429 container died f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa-merged.mount: Deactivated successfully.
Dec 03 02:14:05 compute-0 podman[441600]: 2025-12-03 02:14:05.117503994 +0000 UTC m=+1.283358961 container remove f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 03 02:14:05 compute-0 systemd[1]: libpod-conmon-f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7.scope: Deactivated successfully.
Dec 03 02:14:05 compute-0 podman[441627]: 2025-12-03 02:14:05.1459958 +0000 UTC m=+0.089846083 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:14:05 compute-0 sudo[441499]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:05 compute-0 podman[441636]: 2025-12-03 02:14:05.179590231 +0000 UTC m=+0.118680029 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:14:05 compute-0 podman[441633]: 2025-12-03 02:14:05.180503926 +0000 UTC m=+0.120592422 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 02:14:05 compute-0 sudo[441694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:14:05 compute-0 sudo[441694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:05 compute-0 sudo[441694]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:05 compute-0 sudo[441719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:14:05 compute-0 sudo[441719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:05 compute-0 sudo[441719]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:05 compute-0 sudo[441744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:14:05 compute-0 sudo[441744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:05 compute-0 sudo[441744]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:05 compute-0 sudo[441769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:14:05 compute-0 sudo[441769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
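The sudo trail above is cephadm's usual connectivity probe sequence (/bin/true, which python3) before it runs a device scan through its bundled copy under /var/lib/ceph/<fsid>/. A minimal sketch reproducing the same ceph-volume call, with the script path, image digest and fsid copied verbatim from the log:

    import json
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    CEPHADM = ("/var/lib/ceph/%s/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d" % FSID)
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Same invocation as the sudo COMMAND above: cephadm launches a throwaway
    # ceph container and runs 'ceph-volume raw list --format json' inside it.
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=4))
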
Dec 03 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.253859174 +0000 UTC m=+0.104440906 container create 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.209207701 +0000 UTC m=+0.059789463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:14:06 compute-0 systemd[1]: Started libpod-conmon-565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958.scope.
Dec 03 02:14:06 compute-0 ceph-mon[192821]: pgmap v1791: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.378190662 +0000 UTC m=+0.228772394 container init 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.397839538 +0000 UTC m=+0.248421240 container start 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.402908831 +0000 UTC m=+0.253490533 container attach 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec 03 02:14:06 compute-0 flamboyant_elion[441846]: 167 167
Dec 03 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.413218073 +0000 UTC m=+0.263799785 container died 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:14:06 compute-0 systemd[1]: libpod-565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958.scope: Deactivated successfully.
Dec 03 02:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc95364e5da55729124ba7e1bffa5451044b07321af1d3b7c2816e3236bc3a48-merged.mount: Deactivated successfully.
Dec 03 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.478076838 +0000 UTC m=+0.328658530 container remove 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:14:06 compute-0 systemd[1]: libpod-conmon-565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958.scope: Deactivated successfully.
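The create/pull/init/start/attach/died/remove run above is one short-lived cephadm helper container; the "167 167" line is its stdout, which appears to be cephadm probing the ceph uid/gid inside the image (167:167 in upstream Ceph containers). A sketch for tailing the same lifecycle stream; the event filters are illustrative and combine as OR:

    import subprocess

    # Stream podman lifecycle events as they happen, in the same order
    # the journal shows them above.
    proc = subprocess.Popen(
        ["podman", "events",
         "--format", "{{.Time}} {{.Type}} {{.Status}} {{.Name}}",
         "--filter", "event=create", "--filter", "event=start",
         "--filter", "event=died", "--filter", "event=remove"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        print(line, end="")
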
Dec 03 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.791352751 +0000 UTC m=+0.113351308 container create 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.740943405 +0000 UTC m=+0.062942042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:14:06 compute-0 systemd[1]: Started libpod-conmon-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope.
Dec 03 02:14:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
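The four xfs warnings above mean the backing filesystem was created without the bigtime feature, so its inode timestamps saturate in 2038. A sketch to find which mounted xfs filesystems are affected, assuming an xfsprogs recent enough for xfs_info to report the bigtime flag:

    import re
    import subprocess

    with open("/proc/mounts") as f:
        mounts = [line.split() for line in f]

    for dev, mnt, fstype, *rest in mounts:
        if fstype != "xfs":
            continue
        info = subprocess.run(["xfs_info", mnt],
                              capture_output=True, text=True).stdout
        m = re.search(r"bigtime=(\d)", info)
        if m and m.group(1) == "0":
            print("%s (%s): bigtime=0, timestamps capped at 2038" % (mnt, dev))
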
Dec 03 02:14:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.957993536 +0000 UTC m=+0.279992163 container init 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.990976539 +0000 UTC m=+0.312975116 container start 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.998496382 +0000 UTC m=+0.320495029 container attach 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:14:08 compute-0 mystifying_pike[441885]: {
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "osd_id": 2,
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "type": "bluestore"
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:     },
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "osd_id": 1,
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "type": "bluestore"
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:     },
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "osd_id": 0,
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:         "type": "bluestore"
Dec 03 02:14:08 compute-0 mystifying_pike[441885]:     }
Dec 03 02:14:08 compute-0 mystifying_pike[441885]: }
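The JSON block above is the ceph-volume raw list inventory for this host's three LVM-backed bluestore OSDs. A minimal sketch reducing it to an osd_id -> device map; pipe the JSON into the script on stdin:

    import json
    import sys

    raw = json.load(sys.stdin)
    for osd in sorted(raw.values(), key=lambda o: o["osd_id"]):
        assert osd["type"] == "bluestore"
        print("osd.%(osd_id)d -> %(device)s (cluster %(ceph_fsid)s)" % osd)
    # First line for the payload above:
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0 (cluster 3765feb2-36f8-5b86-b74c-64e9221f9c4c)
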
Dec 03 02:14:08 compute-0 systemd[1]: libpod-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope: Deactivated successfully.
Dec 03 02:14:08 compute-0 podman[441869]: 2025-12-03 02:14:08.223741178 +0000 UTC m=+1.545739765 container died 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:14:08 compute-0 systemd[1]: libpod-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope: Consumed 1.236s CPU time.
Dec 03 02:14:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76-merged.mount: Deactivated successfully.
Dec 03 02:14:08 compute-0 podman[441869]: 2025-12-03 02:14:08.317943253 +0000 UTC m=+1.639941840 container remove 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:14:08 compute-0 systemd[1]: libpod-conmon-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope: Deactivated successfully.
Dec 03 02:14:08 compute-0 ceph-mon[192821]: pgmap v1792: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:08 compute-0 sudo[441769]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:14:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:14:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:14:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
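The two config-key set commands above are the cephadm mgr module persisting the freshly gathered device inventory per host. A sketch to read that cache back with the standard mon CLI, key names copied from the audit lines (run as a client with admin keyring access):

    import subprocess

    KEYS = ("mgr/cephadm/host.compute-0.devices.0", "mgr/cephadm/host.compute-0")
    for key in KEYS:
        # 'ceph config-key get' prints the stored blob; just report its size
        # here since the device cache is large JSON.
        val = subprocess.run(["ceph", "config-key", "get", key],
                             capture_output=True, text=True, check=True).stdout
        print("%s: %d bytes cached" % (key, len(val)))
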
Dec 03 02:14:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2522e6e4-962e-4894-b6b8-bd453a8efaaf does not exist
Dec 03 02:14:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 83d52cef-ee93-4211-a11d-f348f9c9d6aa does not exist
Dec 03 02:14:08 compute-0 nova_compute[351485]: 2025-12-03 02:14:08.479 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:08 compute-0 sudo[441928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:14:08 compute-0 sudo[441928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:08 compute-0 sudo[441928]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:08 compute-0 sudo[441953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:14:08 compute-0 sudo[441953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:14:08 compute-0 sudo[441953]: pam_unix(sudo:session): session closed for user root
Dec 03 02:14:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:14:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:14:09 compute-0 nova_compute[351485]: 2025-12-03 02:14:09.857 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:10 compute-0 ceph-mon[192821]: pgmap v1793: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:10 compute-0 podman[441979]: 2025-12-03 02:14:10.910148714 +0000 UTC m=+0.154336237 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:14:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:11 compute-0 ceph-mon[192821]: pgmap v1794: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:12 compute-0 nova_compute[351485]: 2025-12-03 02:14:12.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.481 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.642 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.642 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:14:14 compute-0 ceph-mon[192821]: pgmap v1795: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:14:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809762466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.198 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
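update_available_resource shells out to this same ceph df call twice per audit pass (once for the hypervisor view, once for the final view), which is why two client.openstack df dispatches show up on the mon. A minimal sketch of the call with the identity flags copied from the log; the per-pool field names ('stored', 'max_avail') follow current Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)
    # Cluster-wide figures should match the pgmap lines above
    # (16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail).
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print("cluster: %.1f GiB free of %.1f GiB" % (avail / 2**30, total / 2**30))
    for pool in df["pools"]:
        s = pool["stats"]
        print("%-16s stored=%s max_avail=%s" % (pool["name"], s.get("stored"), s["max_avail"]))
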
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.802 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.805 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4126MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.806 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.807 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.862 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.896 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.897 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.917 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:14:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3809762466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:14:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:14:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937025677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.440 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.456 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.830 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
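Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the report above yields 32 vCPUs, 7167 MB of RAM and 52.2 GB of disk. A worked example with the numbers copied from the log line:

    # Effective capacity implied by the inventory above, using placement's
    # rule: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print("%-9s capacity=%.1f" % (rc, cap))
    # VCPU      capacity=32.0
    # MEMORY_MB capacity=7167.0
    # DISK_GB   capacity=52.2
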
Dec 03 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.867 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.867 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:14:15 compute-0 podman[442044]: 2025-12-03 02:14:15.883491195 +0000 UTC m=+0.114535901 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:14:15 compute-0 podman[442043]: 2025-12-03 02:14:15.885159782 +0000 UTC m=+0.129880475 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git)
Dec 03 02:14:15 compute-0 podman[442054]: 2025-12-03 02:14:15.888503517 +0000 UTC m=+0.106071902 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 03 02:14:15 compute-0 podman[442045]: 2025-12-03 02:14:15.924223198 +0000 UTC m=+0.142830682 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., version=9.4, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, release=1214.1726694543)
Dec 03 02:14:15 compute-0 podman[442042]: 2025-12-03 02:14:15.946997872 +0000 UTC m=+0.192928830 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
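The node_exporter command line above whitelists systemd units with --collector.systemd.unit-include; node_exporter anchors such include patterns, so only full unit names match. A quick check of the regex from the config, with illustrative unit names:

    import re

    # Pattern copied from the node_exporter config_data above; the ^...$
    # anchoring mirrors how node_exporter compiles include filters.
    pattern = re.compile(r"^(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service$")
    for unit in ["edpm_nova_compute.service", "openvswitch.service",
                 "ovsdb-server.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"]:
        print(unit, "->", bool(pattern.match(unit)))
    # Everything matches except sshd.service.
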
Dec 03 02:14:16 compute-0 ceph-mon[192821]: pgmap v1796: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3937025677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:14:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:17 compute-0 nova_compute[351485]: 2025-12-03 02:14:17.868 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:17 compute-0 nova_compute[351485]: 2025-12-03 02:14:17.869 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:14:17 compute-0 nova_compute[351485]: 2025-12-03 02:14:17.870 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:14:18 compute-0 ceph-mon[192821]: pgmap v1797: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.423 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.424 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.425 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.484 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:19 compute-0 nova_compute[351485]: 2025-12-03 02:14:19.127 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:19 compute-0 nova_compute[351485]: 2025-12-03 02:14:19.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:19 compute-0 nova_compute[351485]: 2025-12-03 02:14:19.868 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:20 compute-0 ceph-mon[192821]: pgmap v1798: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:22 compute-0 ceph-mon[192821]: pgmap v1799: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:22 compute-0 sshd-session[442146]: Connection closed by authenticating user sshd 80.94.95.116 port 59090 [preauth]
Dec 03 02:14:22 compute-0 nova_compute[351485]: 2025-12-03 02:14:22.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:23 compute-0 nova_compute[351485]: 2025-12-03 02:14:23.487 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:23 compute-0 nova_compute[351485]: 2025-12-03 02:14:23.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:24 compute-0 ceph-mon[192821]: pgmap v1800: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:24 compute-0 nova_compute[351485]: 2025-12-03 02:14:24.872 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:26 compute-0 ceph-mon[192821]: pgmap v1801: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.144164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067144235, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2047, "num_deletes": 251, "total_data_size": 3465843, "memory_usage": 3521016, "flush_reason": "Manual Compaction"}
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067171886, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3400030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34771, "largest_seqno": 36817, "table_properties": {"data_size": 3390593, "index_size": 5995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18586, "raw_average_key_size": 20, "raw_value_size": 3372034, "raw_average_value_size": 3641, "num_data_blocks": 266, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727837, "oldest_key_time": 1764727837, "file_creation_time": 1764728067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 28313 microseconds, and 14375 cpu microseconds.
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.172478) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3400030 bytes OK
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.173248) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.176862) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.176898) EVENT_LOG_v1 {"time_micros": 1764728067176886, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.176927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3457287, prev total WAL file size 3457287, number of live WAL files 2.
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.183399) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3320KB)], [80(7108KB)]
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067183447, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10679039, "oldest_snapshot_seqno": -1}
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5680 keys, 8921779 bytes, temperature: kUnknown
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067256405, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8921779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8884043, "index_size": 22458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 143438, "raw_average_key_size": 25, "raw_value_size": 8781518, "raw_average_value_size": 1546, "num_data_blocks": 922, "num_entries": 5680, "num_filter_entries": 5680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.257402) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8921779 bytes
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.260721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.9 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 6194, records dropped: 514 output_compression: NoCompression
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.260754) EVENT_LOG_v1 {"time_micros": 1764728067260738, "job": 46, "event": "compaction_finished", "compaction_time_micros": 73185, "compaction_time_cpu_micros": 44350, "output_level": 6, "num_output_files": 1, "total_output_size": 8921779, "num_input_records": 6194, "num_output_records": 5680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
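The throughput and amplification figures in the JOB 46 summary can be reproduced from the byte counts in the surrounding events: input_data_size 10679039 (of which the freshly flushed file #82 is 3400030) and output 8921779 over 73185 microseconds. A sketch mirroring the ratios RocksDB reports:

l0_in = 3_400_030      # file #82, the flush that triggered the compaction
total_in = 10_679_039  # input_data_size (L0 file #82 + L6 file #80)
out = 8_921_779        # file #83
t_us = 73_185          # compaction_time_micros

print(f"write-amplify      {out / l0_in:.1f}")                      # 2.6
print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")         # 5.8
print(f"rd {total_in / t_us:.1f} MB/s, wr {out / t_us:.1f} MB/s")   # 145.9, 121.9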
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067262850, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067266409, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.183237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:14:28 compute-0 ceph-mon[192821]: pgmap v1802: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:14:28
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'vms', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
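The balancer pass above prepared 0/10 changes, meaning the upmap optimizer found nothing to move with max misplaced at 0.05 across the eleven pools listed. The same state can be read back from the CLI; a small sketch, assuming an admin keyring is available on this host:

import subprocess

# 'ceph balancer status' reports the active mode (upmap here) and the last
# optimization; 0/10 prepared changes means the plan was a no-op.
print(subprocess.check_output(["ceph", "balancer", "status"], text=True))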
Dec 03 02:14:28 compute-0 nova_compute[351485]: 2025-12-03 02:14:28.489 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
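The rbd_support handlers above reload trash-purge and mirror-snapshot schedules for each RBD pool (vms, volumes, backups, images); the empty start_after= just means each scan starts from the beginning. Reading the same schedule state back, assuming the rbd CLI and a suitable keyring are present:

import subprocess

# Lists trash purge schedules for one of the pools scanned above; expected
# to be empty, since the handler loaded no schedules.
print(subprocess.check_output(
    ["rbd", "trash", "purge", "schedule", "ls", "--pool", "vms"], text=True))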
Dec 03 02:14:29 compute-0 podman[158098]: time="2025-12-03T02:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:14:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:14:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8179 "" "Go-http-client/1.1"
Dec 03 02:14:29 compute-0 nova_compute[351485]: 2025-12-03 02:14:29.876 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:30 compute-0 ceph-mon[192821]: pgmap v1803: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:30 compute-0 nova_compute[351485]: 2025-12-03 02:14:30.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:14:30 compute-0 nova_compute[351485]: 2025-12-03 02:14:30.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:14:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
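These openstack_network_exporter errors repeat on every scrape: appctl finds no control sockets for ovsdb-server or ovn-northd (ovn-northd does not run on a compute node), and the dpif-netdev calls fail because this host has no userspace datapath. A quick check for the sockets it is looking for, assuming the default run directories:

import glob

# Daemons started via ovs-ctl advertise <name>.<pid>.ctl unix sockets in
# their run directories; empty lists here explain the errors above.
print(glob.glob("/run/openvswitch/*.ctl"))
print(glob.glob("/run/ovn/*.ctl"))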
Dec 03 02:14:32 compute-0 ceph-mon[192821]: pgmap v1804: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:33 compute-0 nova_compute[351485]: 2025-12-03 02:14:33.491 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:34 compute-0 ceph-mon[192821]: pgmap v1805: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:34 compute-0 nova_compute[351485]: 2025-12-03 02:14:34.880 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:35 compute-0 podman[442148]: 2025-12-03 02:14:35.84889292 +0000 UTC m=+0.102213293 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:14:35 compute-0 podman[442149]: 2025-12-03 02:14:35.869273116 +0000 UTC m=+0.129798343 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 02:14:35 compute-0 podman[442150]: 2025-12-03 02:14:35.870946824 +0000 UTC m=+0.109324424 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:14:36 compute-0 ceph-mon[192821]: pgmap v1806: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:38 compute-0 ceph-mon[192821]: pgmap v1807: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:38 compute-0 nova_compute[351485]: 2025-12-03 02:14:38.494 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
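Every pg_autoscaler line above follows one formula: the raw pg target is capacity_ratio * bias * (mon_target_pg_per_osd * OSD count), then quantized to a power of two no lower than the pool's minimum. With 3 OSDs and mon_target_pg_per_osd at its default of 100 (an assumption; the value is not printed in this log), the logged targets reproduce exactly:

# (capacity_ratio, bias) pairs copied from the autoscaler lines above.
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),  # -> 0.00216 -> 1
    "vms":                (6.359070782053786e-08, 1.0),  # -> 1.91e-05 -> 32
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),  # -> 0.00061 -> 16
}
for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * 100 * 3)  # matches each logged 'pg target'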
Dec 03 02:14:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:39 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:39.458 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:14:39 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:39.460 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:14:39 compute-0 nova_compute[351485]: 2025-12-03 02:14:39.462 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:39 compute-0 nova_compute[351485]: 2025-12-03 02:14:39.884 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:40 compute-0 ceph-mon[192821]: pgmap v1808: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:41 compute-0 podman[442206]: 2025-12-03 02:14:41.876251842 +0000 UTC m=+0.126003516 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:14:42 compute-0 ceph-mon[192821]: pgmap v1809: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:43 compute-0 nova_compute[351485]: 2025-12-03 02:14:43.496 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:44 compute-0 ceph-mon[192821]: pgmap v1810: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:44 compute-0 nova_compute[351485]: 2025-12-03 02:14:44.888 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:45 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:45.463 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:14:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 03 02:14:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 03 02:14:46 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 03 02:14:46 compute-0 ceph-mon[192821]: pgmap v1811: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:14:46 compute-0 podman[442226]: 2025-12-03 02:14:46.872417769 +0000 UTC m=+0.114436969 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 02:14:46 compute-0 podman[442232]: 2025-12-03 02:14:46.87705241 +0000 UTC m=+0.106371181 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Dec 03 02:14:46 compute-0 podman[442225]: 2025-12-03 02:14:46.896960173 +0000 UTC m=+0.146951708 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 03 02:14:46 compute-0 podman[442227]: 2025-12-03 02:14:46.902962883 +0000 UTC m=+0.139549869 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:14:46 compute-0 podman[442228]: 2025-12-03 02:14:46.914474679 +0000 UTC m=+0.147637098 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec 03 02:14:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 24 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 819 KiB/s wr, 5 op/s
Dec 03 02:14:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:14:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929566334' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:14:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:14:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929566334' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:14:47 compute-0 ceph-mon[192821]: osdmap e134: 3 total, 3 up, 3 in
Dec 03 02:14:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/929566334' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:14:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/929566334' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:14:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 03 02:14:48 compute-0 ceph-mon[192821]: pgmap v1813: 321 pgs: 321 active+clean; 24 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 819 KiB/s wr, 5 op/s
Dec 03 02:14:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 03 02:14:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 03 02:14:48 compute-0 nova_compute[351485]: 2025-12-03 02:14:48.499 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:48 compute-0 sshd-session[442330]: Received disconnect from 217.170.199.90 port 56054:11:  [preauth]
Dec 03 02:14:48 compute-0 sshd-session[442330]: Disconnected from authenticating user root 217.170.199.90 port 56054 [preauth]
Dec 03 02:14:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 24 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1024 KiB/s wr, 7 op/s
Dec 03 02:14:49 compute-0 ceph-mon[192821]: osdmap e135: 3 total, 3 up, 3 in
Dec 03 02:14:49 compute-0 nova_compute[351485]: 2025-12-03 02:14:49.892 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:50 compute-0 ceph-mon[192821]: pgmap v1815: 321 pgs: 321 active+clean; 24 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1024 KiB/s wr, 7 op/s
Dec 03 02:14:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.6 MiB/s wr, 40 op/s
Dec 03 02:14:52 compute-0 ceph-mon[192821]: pgmap v1816: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.6 MiB/s wr, 40 op/s
Dec 03 02:14:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 03 02:14:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:53 compute-0 nova_compute[351485]: 2025-12-03 02:14:53.503 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:54 compute-0 ceph-mon[192821]: pgmap v1817: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 03 02:14:54 compute-0 nova_compute[351485]: 2025-12-03 02:14:54.896 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.8 MiB/s wr, 37 op/s
Dec 03 02:14:56 compute-0 ceph-mon[192821]: pgmap v1818: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.8 MiB/s wr, 37 op/s
Dec 03 02:14:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.3 MiB/s wr, 32 op/s
Dec 03 02:14:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:14:58 compute-0 ceph-mon[192821]: pgmap v1819: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.3 MiB/s wr, 32 op/s
Dec 03 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:14:58 compute-0 nova_compute[351485]: 2025-12-03 02:14:58.505 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:14:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 MiB/s wr, 30 op/s
Dec 03 02:14:59 compute-0 ceph-mon[192821]: pgmap v1820: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 MiB/s wr, 30 op/s
Dec 03 02:14:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:14:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:14:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:59.648 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:14:59 compute-0 podman[158098]: time="2025-12-03T02:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:14:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:14:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8181 "" "Go-http-client/1.1"
Dec 03 02:14:59 compute-0 nova_compute[351485]: 2025-12-03 02:14:59.901 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 26 op/s
Dec 03 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:15:02 compute-0 ceph-mon[192821]: pgmap v1821: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 26 op/s
Dec 03 02:15:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 379 KiB/s wr, 4 op/s
Dec 03 02:15:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:03 compute-0 nova_compute[351485]: 2025-12-03 02:15:03.508 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:04 compute-0 ceph-mon[192821]: pgmap v1822: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 379 KiB/s wr, 4 op/s
Dec 03 02:15:04 compute-0 nova_compute[351485]: 2025-12-03 02:15:04.905 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:06 compute-0 ceph-mon[192821]: pgmap v1823: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:06 compute-0 podman[442335]: 2025-12-03 02:15:06.869732586 +0000 UTC m=+0.102534282 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:15:06 compute-0 podman[442333]: 2025-12-03 02:15:06.878421132 +0000 UTC m=+0.125252755 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:15:06 compute-0 podman[442334]: 2025-12-03 02:15:06.883758443 +0000 UTC m=+0.123317170 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:15:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:08 compute-0 ceph-mon[192821]: pgmap v1824: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:08 compute-0 nova_compute[351485]: 2025-12-03 02:15:08.512 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:08 compute-0 sudo[442388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:08 compute-0 sudo[442388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:08 compute-0 sudo[442388]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:08 compute-0 sudo[442413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:15:09 compute-0 sudo[442413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:09 compute-0 sudo[442413]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:09 compute-0 sudo[442438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:09 compute-0 sudo[442438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:09 compute-0 sudo[442438]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:09 compute-0 sudo[442463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:15:09 compute-0 sudo[442463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:09 compute-0 ovn_controller[89134]: 2025-12-03T02:15:09Z|00065|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
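ovn-controller's memory_trim module logs this after roughly 30 s of inactivity and then returns unused heap pages to the kernel, which on glibc builds is a malloc_trim(0) call. A sketch of the same libc call, assuming glibc (libc.so.6) is present:

    import ctypes

    # malloc_trim(0) asks glibc to release unused heap back to the kernel;
    # it returns 1 if any memory was released, 0 otherwise.
    libc = ctypes.CDLL("libc.so.6")
    print("trimmed" if libc.malloc_trim(0) == 1 else "nothing to trim")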
Dec 03 02:15:09 compute-0 nova_compute[351485]: 2025-12-03 02:15:09.909 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:09 compute-0 sudo[442463]: pam_unix(sudo:session): session closed for user root
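The sudo triplet above (a /bin/true probe, `which python3`, then the staged cephadm copy running gather-facts) is the orchestrator's standard remote-execution pattern on a managed host. A sketch of the same sequence, assuming passwordless sudo for ceph-admin and the cephadm path from the log:

    import subprocess

    CEPHADM = ("/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    def sudo(*argv):
        # Every orchestrator step in the log runs as root via sudo.
        return subprocess.run(["sudo", *argv], check=True,
                              capture_output=True, text=True).stdout

    sudo("/bin/true")                               # reachability/privilege probe
    python3 = sudo("/bin/which", "python3").strip() # locate the interpreter
    facts = sudo(python3, CEPHADM, "--timeout", "895", "gather-facts")
    print(len(facts), "bytes of host facts (JSON)")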
Dec 03 02:15:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:15:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:15:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:15:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:15:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:15:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:15:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev eb3b30b3-5a11-4eb4-90a2-ced56086bd49 does not exist
Dec 03 02:15:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d720c6ec-df24-43c4-92dc-cb06b3489e78 does not exist
Dec 03 02:15:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 86e65e3f-e748-41b4-8c40-e90827f8de9f does not exist
Dec 03 02:15:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:15:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:15:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:15:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:15:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:15:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:15:10 compute-0 ceph-mon[192821]: pgmap v1825: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:15:10 compute-0 sudo[442518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:10 compute-0 sudo[442518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:10 compute-0 sudo[442518]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:10 compute-0 sudo[442543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:15:10 compute-0 sudo[442543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:10 compute-0 sudo[442543]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:10 compute-0 sudo[442568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:10 compute-0 sudo[442568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:10 compute-0 sudo[442568]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:10 compute-0 sudo[442593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:15:10 compute-0 sudo[442593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.156438959 +0000 UTC m=+0.086707064 container create 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.122419957 +0000 UTC m=+0.052688102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:15:11 compute-0 systemd[1]: Started libpod-conmon-31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1.scope.
Dec 03 02:15:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.314132341 +0000 UTC m=+0.244400496 container init 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.331488382 +0000 UTC m=+0.261756487 container start 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.339277732 +0000 UTC m=+0.269545887 container attach 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:15:11 compute-0 jolly_darwin[442673]: 167 167
Dec 03 02:15:11 compute-0 systemd[1]: libpod-31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1.scope: Deactivated successfully.
Dec 03 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.346826776 +0000 UTC m=+0.277094871 container died 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb462b7c75f5a5a2b708adc3ba0e85e536539b042f7275e3e49268750b35df7-merged.mount: Deactivated successfully.
Dec 03 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.429489895 +0000 UTC m=+0.359757970 container remove 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:15:11 compute-0 systemd[1]: libpod-conmon-31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1.scope: Deactivated successfully.
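The throwaway jolly_darwin container above exists only to print "167 167", the uid/gid of the ceph user baked into the image (167 on RHEL-family builds), which cephadm uses to chown OSD data paths on the host. A plausibly equivalent probe, assuming stat exists in the image and /var/lib/ceph is owned by the ceph user there:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Print the owner uid/gid of /var/lib/ceph inside the image ("167 167").
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    uid, gid = map(int, out.split())
    print(uid, gid)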
Dec 03 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.707310275 +0000 UTC m=+0.095895474 container create 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.671601685 +0000 UTC m=+0.060186934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:15:11 compute-0 systemd[1]: Started libpod-conmon-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope.
Dec 03 02:15:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
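The recurring xfs notices mean these overlay mounts carry 32-bit inode timestamps, which saturate at 0x7fffffff seconds after the Unix epoch. A quick check of where that limit lands:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds since 1970-01-01T00:00:00Z
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00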
Dec 03 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.902520928 +0000 UTC m=+0.291106177 container init 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.924763717 +0000 UTC m=+0.313348916 container start 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.932697112 +0000 UTC m=+0.321282361 container attach 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:15:12 compute-0 ceph-mon[192821]: pgmap v1826: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:12 compute-0 nova_compute[351485]: 2025-12-03 02:15:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:12 compute-0 podman[442722]: 2025-12-03 02:15:12.872679537 +0000 UTC m=+0.113521083 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec 03 02:15:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:13 compute-0 optimistic_hermann[442713]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:15:13 compute-0 optimistic_hermann[442713]: --> relative data size: 1.0
Dec 03 02:15:13 compute-0 optimistic_hermann[442713]: --> All data devices are unavailable
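ceph-volume's verdict here is explained by the lvm list output further down: all three LVs passed to `lvm batch` already carry ceph.osd_id tags for osd.0 through osd.2, so the batch run has nothing left to create (see the parsing sketch after that JSON dump).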
Dec 03 02:15:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:13 compute-0 systemd[1]: libpod-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope: Deactivated successfully.
Dec 03 02:15:13 compute-0 systemd[1]: libpod-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope: Consumed 1.323s CPU time.
Dec 03 02:15:13 compute-0 podman[442697]: 2025-12-03 02:15:13.300784739 +0000 UTC m=+1.689369978 container died 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411-merged.mount: Deactivated successfully.
Dec 03 02:15:13 compute-0 podman[442697]: 2025-12-03 02:15:13.422970775 +0000 UTC m=+1.811555984 container remove 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:15:13 compute-0 systemd[1]: libpod-conmon-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope: Deactivated successfully.
Dec 03 02:15:13 compute-0 sudo[442593]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:13 compute-0 nova_compute[351485]: 2025-12-03 02:15:13.514 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:13 compute-0 sudo[442775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:13 compute-0 sudo[442775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:13 compute-0 sudo[442775]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:13 compute-0 sudo[442800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:15:13 compute-0 sudo[442800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:13 compute-0 sudo[442800]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:13 compute-0 sudo[442825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:13 compute-0 sudo[442825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:13 compute-0 sudo[442825]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:14 compute-0 sudo[442850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:15:14 compute-0 sudo[442850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:14 compute-0 ceph-mon[192821]: pgmap v1827: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
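The lockutils lines follow a fixed 'acquired … :: waited Xs' / '"released" … :: held Ys' pattern, so contention on locks such as compute_resources can be totaled straight from the journal. A sketch that reads journal text from stdin:

    import re
    import sys

    # Matches oslo_concurrency.lockutils lines like the two above:
    #   Lock "compute_resources" acquired by "..." :: waited 0.001s
    #   Lock "compute_resources" "released" by "..." :: held 0.001s
    PAT = re.compile(
        r'Lock "(?P<name>[^"]+)" .*?:: (?P<kind>waited|held) (?P<sec>[\d.]+)s')

    totals = {}
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            key = (m["name"], m["kind"])
            totals[key] = totals.get(key, 0.0) + float(m["sec"])

    for (name, kind), sec in sorted(totals.items()):
        print(f"{name}: total {kind} {sec:.3f}s")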
Dec 03 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.609 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.609 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.653981504 +0000 UTC m=+0.096404448 container create 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.610318309 +0000 UTC m=+0.052741313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:15:14 compute-0 systemd[1]: Started libpod-conmon-51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9.scope.
Dec 03 02:15:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.786443482 +0000 UTC m=+0.228866406 container init 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.797910897 +0000 UTC m=+0.240333801 container start 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.802382563 +0000 UTC m=+0.244805487 container attach 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:15:14 compute-0 flamboyant_antonelli[442928]: 167 167
Dec 03 02:15:14 compute-0 systemd[1]: libpod-51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9.scope: Deactivated successfully.
Dec 03 02:15:14 compute-0 podman[442952]: 2025-12-03 02:15:14.8761291 +0000 UTC m=+0.041809514 container died 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.913 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a0e97045ddbd6e16e9fb1fe7e16e6366351a1e04be6f530c683b6bfd1390418-merged.mount: Deactivated successfully.
Dec 03 02:15:14 compute-0 podman[442952]: 2025-12-03 02:15:14.949220258 +0000 UTC m=+0.114900642 container remove 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:15:14 compute-0 systemd[1]: libpod-conmon-51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9.scope: Deactivated successfully.
Dec 03 02:15:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:15:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1373845775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.154 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
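The audit dispatch and the 0.544 s return above are the round trip of nova's storage audit; the same call can be replayed to inspect what the driver parses, assuming the client.openstack keyring and /etc/ceph/ceph.conf from the log are readable (field names as emitted by `ceph df --format=json`):

    import json
    import subprocess

    # The exact command nova_compute logged, with JSON output.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout

    stats = json.loads(out)["stats"]   # cluster-wide totals block
    print(stats["total_bytes"], stats["total_avail_bytes"])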
Dec 03 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.213787873 +0000 UTC m=+0.087110185 container create 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.190263148 +0000 UTC m=+0.063585510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:15:15 compute-0 systemd[1]: Started libpod-conmon-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope.
Dec 03 02:15:15 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.432589364 +0000 UTC m=+0.305911736 container init 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.45685134 +0000 UTC m=+0.330173642 container start 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.461858812 +0000 UTC m=+0.335181224 container attach 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.694 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.697 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4106MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.698 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.699 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.794 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.795 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.814 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.854 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.854 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
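Placement turns the inventory above into schedulable capacity as (total - reserved) * allocation_ratio per resource class; plugging in the logged values:

    # Values copied from the ProviderTree inventory update above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2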
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.875 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.903 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.922 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:16 compute-0 ceph-mon[192821]: pgmap v1828: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1373845775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]: {
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:     "0": [
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:         {
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "devices": [
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "/dev/loop3"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             ],
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_name": "ceph_lv0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_size": "21470642176",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "name": "ceph_lv0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "tags": {
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cluster_name": "ceph",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.crush_device_class": "",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.encrypted": "0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osd_id": "0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.type": "block",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.vdo": "0"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             },
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "type": "block",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "vg_name": "ceph_vg0"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:         }
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:     ],
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:     "1": [
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:         {
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "devices": [
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "/dev/loop4"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             ],
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_name": "ceph_lv1",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_size": "21470642176",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "name": "ceph_lv1",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "tags": {
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cluster_name": "ceph",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.crush_device_class": "",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.encrypted": "0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osd_id": "1",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.type": "block",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.vdo": "0"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             },
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "type": "block",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "vg_name": "ceph_vg1"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:         }
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:     ],
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:     "2": [
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:         {
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "devices": [
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "/dev/loop5"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             ],
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_name": "ceph_lv2",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_size": "21470642176",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "name": "ceph_lv2",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "tags": {
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.cluster_name": "ceph",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.crush_device_class": "",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.encrypted": "0",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osd_id": "2",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.type": "block",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:                 "ceph.vdo": "0"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             },
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "type": "block",
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:             "vg_name": "ceph_vg2"
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:         }
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]:     ]
Dec 03 02:15:16 compute-0 infallible_dewdney[442988]: }
Dec 03 02:15:16 compute-0 systemd[1]: libpod-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope: Deactivated successfully.
Dec 03 02:15:16 compute-0 conmon[442988]: conmon 17d95e41e58a53e97481 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope/container/memory.events
Dec 03 02:15:16 compute-0 podman[442971]: 2025-12-03 02:15:16.315174475 +0000 UTC m=+1.188496817 container died 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617-merged.mount: Deactivated successfully.
Dec 03 02:15:16 compute-0 podman[442971]: 2025-12-03 02:15:16.420238867 +0000 UTC m=+1.293561179 container remove 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:15:16 compute-0 systemd[1]: libpod-conmon-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope: Deactivated successfully.
Dec 03 02:15:16 compute-0 sudo[442850]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:15:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460498832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.513 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.527 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.546 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.549 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.550 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:16 compute-0 sudo[443033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:16 compute-0 sudo[443033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:16 compute-0 sudo[443033]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:16 compute-0 sudo[443058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:15:16 compute-0 sudo[443058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:16 compute-0 sudo[443058]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:16 compute-0 sudo[443083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:16 compute-0 sudo[443083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:16 compute-0 sudo[443083]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.910 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:17 compute-0 sudo[443108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:15:17 compute-0 sudo[443108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2460498832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:17 compute-0 podman[443135]: 2025-12-03 02:15:17.165442761 +0000 UTC m=+0.108837311 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 02:15:17 compute-0 podman[443136]: 2025-12-03 02:15:17.170231906 +0000 UTC m=+0.111330821 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 02:15:17 compute-0 podman[443134]: 2025-12-03 02:15:17.17142585 +0000 UTC m=+0.116997161 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:15:17 compute-0 podman[443133]: 2025-12-03 02:15:17.174553778 +0000 UTC m=+0.132626393 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 02:15:17 compute-0 podman[443132]: 2025-12-03 02:15:17.220982012 +0000 UTC m=+0.184769739 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.551 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.551 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.552 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.564622395 +0000 UTC m=+0.104367114 container create e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.569 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.569 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.51887813 +0000 UTC m=+0.058622849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:15:17 compute-0 systemd[1]: Started libpod-conmon-e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6.scope.
Dec 03 02:15:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.697446003 +0000 UTC m=+0.237190732 container init e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.715500913 +0000 UTC m=+0.255245582 container start e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.722607874 +0000 UTC m=+0.262352593 container attach e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:15:17 compute-0 confident_payne[443287]: 167 167
Dec 03 02:15:17 compute-0 systemd[1]: libpod-e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6.scope: Deactivated successfully.
Dec 03 02:15:17 compute-0 podman[443292]: 2025-12-03 02:15:17.812049625 +0000 UTC m=+0.058408543 container died e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 02:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f62d4038ddeaf9d7b68ed037258ce9350a3d2ea60aa226c4517b9c1dbfbbb7c-merged.mount: Deactivated successfully.
Dec 03 02:15:17 compute-0 podman[443292]: 2025-12-03 02:15:17.884832164 +0000 UTC m=+0.131191082 container remove e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:15:17 compute-0 systemd[1]: libpod-conmon-e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6.scope: Deactivated successfully.
Dec 03 02:15:18 compute-0 ceph-mon[192821]: pgmap v1829: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.197803329 +0000 UTC m=+0.097580412 container create 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.164854517 +0000 UTC m=+0.064631640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:15:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:18 compute-0 systemd[1]: Started libpod-conmon-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope.
Dec 03 02:15:18 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.394265858 +0000 UTC m=+0.294042981 container init 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.410713333 +0000 UTC m=+0.310490416 container start 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.416411884 +0000 UTC m=+0.316188987 container attach 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:15:18 compute-0 nova_compute[351485]: 2025-12-03 02:15:18.517 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:19 compute-0 nova_compute[351485]: 2025-12-03 02:15:19.150 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.510 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.511 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
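[editor's note] The polling cycle above shows every compute pollster being skipped ("no resources found this cycle") because instance discovery returned nothing, and then each pollster finishing with no samples. A minimal sketch (a hypothetical helper, not part of ceilometer) for tallying skipped versus finished pollsters from a journal dump like this one:

    import re
    import sys
    from collections import Counter

    SKIP = re.compile(r"Skip pollster (\S+),")
    DONE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    counts = Counter()
    for line in sys.stdin:
        if m := SKIP.search(line):
            counts["skipped  " + m.group(1)] += 1
        elif m := DONE.search(line):
            counts["finished " + m.group(1)] += 1

    for key, n in sorted(counts.items()):
        print(f"{n:4d}  {key}")

Feeding it `journalctl | python3 tally.py` (script name hypothetical) makes it easy to confirm that every skipped pollster also reports a matching "Finished processing" line in the same cycle.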
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]: {
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "osd_id": 2,
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "type": "bluestore"
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:     },
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "osd_id": 1,
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "type": "bluestore"
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:     },
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "osd_id": 0,
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:         "type": "bluestore"
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]:     }
Dec 03 02:15:19 compute-0 quizzical_cartwright[443331]: }
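[editor's note] The JSON block emitted by the short-lived ceph container (the randomly named quizzical_cartwright) looks like `ceph-volume raw list` output: a map keyed by OSD UUID, each entry carrying the cluster fsid, backing LV device, osd_id, and objectstore type. A sketch, assuming that structure, for extracting the device-to-OSD mapping; the filename is hypothetical:

    import json

    # Output captured from the container above (ceph-volume raw list style).
    with open("osd_inventory.json") as f:  # hypothetical capture file
        raw = json.load(f)

    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  "
              f"fsid={info['ceph_fsid']}  type={info['type']}")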
Dec 03 02:15:19 compute-0 systemd[1]: libpod-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope: Deactivated successfully.
Dec 03 02:15:19 compute-0 systemd[1]: libpod-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope: Consumed 1.235s CPU time.
Dec 03 02:15:19 compute-0 podman[443315]: 2025-12-03 02:15:19.653633999 +0000 UTC m=+1.553411082 container died 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 02:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16-merged.mount: Deactivated successfully.
Dec 03 02:15:19 compute-0 podman[443315]: 2025-12-03 02:15:19.744287284 +0000 UTC m=+1.644064327 container remove 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:15:19 compute-0 systemd[1]: libpod-conmon-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope: Deactivated successfully.
Dec 03 02:15:19 compute-0 sudo[443108]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:15:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:15:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:15:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:15:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fe2fbd03-9e6e-46e5-90a2-faceb23996fa does not exist
Dec 03 02:15:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6f2c6e30-cc47-4de5-8392-a6c78903b1f8 does not exist
Dec 03 02:15:19 compute-0 nova_compute[351485]: 2025-12-03 02:15:19.917 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:19 compute-0 sudo[443376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:15:19 compute-0 sudo[443376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:20 compute-0 sudo[443376]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:20 compute-0 sudo[443401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:15:20 compute-0 sudo[443401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:15:20 compute-0 sudo[443401]: pam_unix(sudo:session): session closed for user root
Dec 03 02:15:20 compute-0 ceph-mon[192821]: pgmap v1830: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:15:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:15:20 compute-0 nova_compute[351485]: 2025-12-03 02:15:20.590 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:20 compute-0 nova_compute[351485]: 2025-12-03 02:15:20.885 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:22 compute-0 ceph-mon[192821]: pgmap v1831: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:22 compute-0 nova_compute[351485]: 2025-12-03 02:15:22.795 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.123 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.500 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.519 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:24 compute-0 ceph-mon[192821]: pgmap v1832: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:24 compute-0 nova_compute[351485]: 2025-12-03 02:15:24.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:24 compute-0 nova_compute[351485]: 2025-12-03 02:15:24.922 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:26 compute-0 ceph-mon[192821]: pgmap v1833: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:26 compute-0 nova_compute[351485]: 2025-12-03 02:15:26.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:28 compute-0 ceph-mon[192821]: pgmap v1834: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:15:28
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'images', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root']
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
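[editor's note] "prepared 0/10 changes" means the upmap balancer evaluated the listed pools and prepared none of its (up to 10) optimization steps, suggesting the cluster is already balanced within the 5% max-misplaced threshold logged above. A hedged sketch of querying the same state; the `-f json` flag is an assumption that this mgr command honours the usual format option:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    # e.g. active=True mode=upmap, matching the log lines above
    print(status.get("active"), status.get("mode"))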
Dec 03 02:15:28 compute-0 nova_compute[351485]: 2025-12-03 02:15:28.523 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:15:29 compute-0 nova_compute[351485]: 2025-12-03 02:15:29.224 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:29 compute-0 nova_compute[351485]: 2025-12-03 02:15:29.275 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:29 compute-0 podman[158098]: time="2025-12-03T02:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:15:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:15:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8168 "" "Go-http-client/1.1"
Dec 03 02:15:29 compute-0 nova_compute[351485]: 2025-12-03 02:15:29.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:30 compute-0 ceph-mon[192821]: pgmap v1835: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
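[editor's note] The exporter errors above mean it cannot find the ovsdb-server and ovn-northd control sockets; on most deployments these live under /var/run/openvswitch and /var/run/ovn respectively (the usual defaults, assumed here rather than taken from this host). A quick check:

    import glob

    # Default control-socket locations; adjust if the daemons run elsewhere.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none found")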
Dec 03 02:15:32 compute-0 ceph-mon[192821]: pgmap v1836: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:32 compute-0 nova_compute[351485]: 2025-12-03 02:15:32.413 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:32 compute-0 nova_compute[351485]: 2025-12-03 02:15:32.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:15:32 compute-0 nova_compute[351485]: 2025-12-03 02:15:32.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:15:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:33 compute-0 nova_compute[351485]: 2025-12-03 02:15:33.526 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:34 compute-0 ceph-mon[192821]: pgmap v1837: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.335 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.513 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.514 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.548 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:15:34 compute-0 sshd-session[443427]: Invalid user sinusbot from 154.113.10.113 port 47102
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.714 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.715 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
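[editor's note] The Acquiring/acquired pair around "compute_resources" is oslo.concurrency's synchronized decorator logging from its inner wrapper (hence the `inner ... lockutils.py:404/409` suffixes). A minimal sketch of the same pattern, assuming nothing about nova internals:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # Critical section: only one claim mutates the resource tracker
        # at a time. The decorator emits the Acquiring/acquired/released
        # DEBUG lines seen in the surrounding log.
        pass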
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.732 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.733 351492 INFO nova.compute.claims [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:15:34 compute-0 sshd-session[443427]: Received disconnect from 154.113.10.113 port 47102:11: Bye Bye [preauth]
Dec 03 02:15:34 compute-0 sshd-session[443427]: Disconnected from invalid user sinusbot 154.113.10.113 port 47102 [preauth]
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.862 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.929 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:15:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748561572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.376 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
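[editor's note] Nova shells out to `ceph df --format=json` to size the RBD backend before claiming disk; the `--id openstack` cephx user and conf path come straight from the command line logged above. The same call can be reproduced standalone:

    import json
    import subprocess

    df = json.loads(subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])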
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.390 351492 DEBUG nova.compute.provider_tree [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.415 351492 DEBUG nova.scheduler.client.report [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
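[editor's note] The inventory dict above shows how placement derives usable capacity: usable = (total - reserved) * allocation_ratio. For this host that works out to 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")
    # VCPU: 32   MEMORY_MB: 7167   DISK_GB: 52.2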
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.474 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.476 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.540 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.541 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.566 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.590 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.722 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.725 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.726 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Creating image(s)
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.777 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.834 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.894 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.907 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.908 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.916 351492 DEBUG nova.policy [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '08c7d81f1f9e4989b1eb8b8cf96bbf11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9efdda7cf984595a9c5a855bae62b0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:15:36 compute-0 ceph-mon[192821]: pgmap v1838: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:36 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3748561572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:36 compute-0 nova_compute[351485]: 2025-12-03 02:15:36.497 351492 DEBUG nova.virt.libvirt.imagebackend [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/ef773cba-72f0-486f-b5e5-792ff26bb688/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/ef773cba-72f0-486f-b5e5-792ff26bb688/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 03 02:15:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:37 compute-0 nova_compute[351485]: 2025-12-03 02:15:37.204 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Successfully created port: b7fa8023-e50c-4bea-be79-8fbe005f0b8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:15:37 compute-0 podman[443507]: 2025-12-03 02:15:37.877017771 +0000 UTC m=+0.115006605 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:15:37 compute-0 podman[443505]: 2025-12-03 02:15:37.877941177 +0000 UTC m=+0.135821814 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 03 02:15:37 compute-0 podman[443506]: 2025-12-03 02:15:37.901896295 +0000 UTC m=+0.146067944 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.024 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.125 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.127 351492 DEBUG nova.virt.images [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] ef773cba-72f0-486f-b5e5-792ff26bb688 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.130 351492 DEBUG nova.privsep.utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.131 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:38 compute-0 ceph-mon[192821]: pgmap v1839: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.388 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.396 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.439 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.440 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
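
[editor's note] The "Acquiring lock" / "Lock ... acquired" / "released" triplets throughout this log come from oslo.concurrency's lockutils helpers, which nova uses to serialize work per instance UUID and per resource tracker. A minimal sketch of the same primitives, assuming oslo.concurrency is installed; the lock names are copied from the log and the bodies are placeholders:

    from oslo_concurrency import lockutils

    # Decorator form: one build at a time per instance UUID, as in
    # _locked_do_build_and_run_instance above.
    @lockutils.synchronized("07ce21e6-3627-467a-9b7e-d9045308576c")
    def do_build_and_run_instance():
        pass  # build runs with the named in-process lock held

    # Context-manager form, as used around the resource tracker claim.
    with lockutils.lock("compute_resources"):
        pass  # claim resources while "compute_resources" is held

    do_build_and_run_instance()
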
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.476 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.496 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.498 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
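
[editor's note] The sequence from the qemu-img info call on the .part file through the lock release above is nova's fetch_to_raw path: inspect the downloaded image, convert qcow2 to raw for the backend, validate the result, release the per-image cache lock. A runnable sketch of the same commands under the same prlimit bounds, assuming qemu-img and oslo.concurrency are available; the cached path is copied from the log:

    import json
    import subprocess

    # Bound address space (1 GiB) and CPU seconds exactly as logged.
    PRLIMIT = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
               "--as=1073741824", "--cpu=30", "--"]

    def qemu_img_info(path):
        # --force-share lets the query run even if another process holds
        # the image open; JSON output keeps parsing trivial.
        out = subprocess.check_output(
            PRLIMIT + ["env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
                       path, "--force-share", "--output=json"])
        return json.loads(out)

    base = ("/var/lib/nova/instances/_base/"
            "d68b22249947adf9ae6139a52d3c87b68df8a601")
    if qemu_img_info(base + ".part")["format"] == "qcow2":
        # -t none selects cache=none (direct I/O), matching the log.
        subprocess.check_call(["qemu-img", "convert", "-t", "none",
                               "-O", "raw", "-f", "qcow2",
                               base + ".part", base + ".converted"])
        assert qemu_img_info(base + ".converted")["format"] == "raw"
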
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.538 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.550 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 4f50e501-f565-4e1f-aa02-df921702eff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.583 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.622 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.623 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.634 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.635 351492 INFO nova.compute.claims [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
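
[editor's note] The pg_autoscaler lines above all apply one formula: a pool's raw PG target is its usage ratio times its bias times the cluster PG budget, then quantized to a power of two with a per-pool floor. The logged values are consistent with a budget of 300 PGs (for example mon_target_pg_per_osd=100 across 3 OSDs, which is an assumption here): for 'images', 0.0009191400908380543 x 1.0 x 300 gives exactly the logged 0.2757420272514163. A sketch reproducing the raw targets under that assumed budget:

    # Reproduce the pg_autoscaler raw targets logged above.
    # Assumption: PG budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    PG_BUDGET = 300

    pools = {
        # name: (usage_ratio, bias), values taken verbatim from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0009191400908380543, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        target = ratio * bias * PG_BUDGET
        print(f"{name}: raw pg target {target}")
        # ceph then rounds to a power of two subject to per-pool
        # minimums, hence "quantized to 1", "to 16", "to 32" above.
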
Dec 03 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.810 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.029 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 4f50e501-f565-4e1f-aa02-df921702eff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.154 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] resizing rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
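
[editor's note] Between the "rbd image ... does not exist" probe, the rbd import, and the resize above, nova materializes the instance disk in the 'vms' pool and grows it to the flavor root size (1073741824 bytes = 1 GiB). A CLI-level sketch of the same three steps, assuming the same client keyring; names and paths are copied from the log:

    import subprocess

    POOL, CLIENT, CONF = "vms", "openstack", "/etc/ceph/ceph.conf"
    BASE = ("/var/lib/nova/instances/_base/"
            "d68b22249947adf9ae6139a52d3c87b68df8a601")
    IMAGE = "4f50e501-f565-4e1f-aa02-df921702eff9_disk"

    def rbd(*args):
        subprocess.check_call(["rbd", *args, "--id", CLIENT, "--conf", CONF])

    # 1. 'rbd info' exits non-zero when the image is absent; that probe
    #    is what produces the "does not exist" DEBUG lines above.
    absent = subprocess.call(
        ["rbd", "info", f"{POOL}/{IMAGE}", "--id", CLIENT, "--conf", CONF],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) != 0
    if absent:
        # 2. Import the flattened base file, format 2 for layering.
        rbd("import", "--pool", POOL, BASE, IMAGE, "--image-format=2")
        # 3. Grow to the flavor root disk; rbd sizes are MiB by default.
        rbd("resize", f"{POOL}/{IMAGE}", "--size", "1024")
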
Dec 03 02:15:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:15:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810822016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:39 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/810822016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.401 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
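
[editor's note] 'ceph df --format=json' is how the resource tracker sizes Ceph-backed disk inventory before a claim; the mon audit lines above show the same command arriving as a mon_command from client.openstack. A sketch issuing and parsing the same query, assuming the ceph CLI and that keyring; the exact JSON key names can shift between Ceph releases:

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(raw)

    # 'stats' carries cluster totals; 'pools' carries per-pool usage.
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"cluster: {avail}/{total} bytes free")
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])
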
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.421 351492 DEBUG nova.objects.instance [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'migration_context' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.430 351492 DEBUG nova.compute.provider_tree [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.455 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.456 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Ensure instance console log exists: /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.456 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.457 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.457 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.459 351492 DEBUG nova.scheduler.client.report [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
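
[editor's note] The inventory dict above fixes what placement will schedule onto this host: for each resource class the schedulable capacity is (total - reserved) x allocation_ratio. Worked out for the logged values:

    # Effective capacity from the logged provider inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
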
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.492 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.493 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.545 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.546 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.567 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.591 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.691 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.694 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.695 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Creating image(s)
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.749 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.812 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.880 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.890 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.938 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.987 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.987 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.988 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.989 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.033 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.042 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 07ce21e6-3627-467a-9b7e-d9045308576c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.156 351492 DEBUG nova.policy [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a7f624afcf845f786397f8aa1bb2a63', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a1cf3657daa4d798d912ceaae049aa0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
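
[editor's note] The failed policy check above is expected for a non-admin request: nova probes network:attach_external_network against the request credentials ('reader'/'member' roles) and simply proceeds without external-network attach rights when the check fails. A minimal sketch of the same oslo.policy call; the registered default shown is an illustrative admin-only rule, not nova's shipped policy:

    from oslo_config import cfg
    from oslo_policy import policy

    cfg.CONF([], project="demo")          # parse an empty config set
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "5a1cf3657daa4d798d912ceaae049aa0"}
    # Returns False: neither role satisfies role:admin, mirroring the
    # "Policy check ... failed" DEBUG line above.
    print(enforcer.authorize("network:attach_external_network", {}, creds))
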
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.352 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.353 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.355 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:40 compute-0 ceph-mon[192821]: pgmap v1840: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.498 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 07ce21e6-3627-467a-9b7e-d9045308576c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.548 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.551 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
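
[editor's note] The SbGlobalUpdateEvent matches above show the metadata agent reacting to SB_Global nb_cfg bumps and deliberately spreading its chassis-table write-back over a randomized delay (10 seconds, then 5) so a fleet of agents does not stampede the southbound DB. A sketch of the same ovsdbapp event pattern, assuming ovsdbapp is installed; the delay bound and write-back helper are illustrative:

    import random
    import threading

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """React to SB_Global updates with a randomized write delay."""

        def __init__(self):
            # Same match spec as the log: UPDATE events on SB_Global.
            super().__init__((self.ROW_UPDATE,), "SB_Global", None)

        def run(self, event, row, old):
            delay = random.randint(0, 10)  # illustrative upper bound
            print(f"Delaying updating chassis table for {delay} seconds")
            threading.Timer(delay, self.update_chassis, args=(row,)).start()

        def update_chassis(self, row):
            # Hypothetical write-back; the real agent updates its
            # chassis registration in the southbound DB here.
            pass
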
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.689 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] resizing rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.896 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Successfully updated port: b7fa8023-e50c-4bea-be79-8fbe005f0b8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.921 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.921 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.922 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:15:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec 03 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.120 351492 DEBUG nova.objects.instance [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lazy-loading 'migration_context' on Instance uuid 07ce21e6-3627-467a-9b7e-d9045308576c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.139 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.139 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Ensure instance console log exists: /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.140 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.141 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.141 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.637 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:15:42 compute-0 ceph-mon[192821]: pgmap v1841: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec 03 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.916 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Successfully created port: 5009f27c-5ce3-46eb-b7aa-e82645a3097e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.971 351492 DEBUG nova.compute.manager [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.972 351492 DEBUG nova.compute.manager [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.973 351492 DEBUG oslo_concurrency.lockutils [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 115 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 33 op/s
Dec 03 02:15:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.407 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.441 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.442 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance network_info: |[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
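
[editor's note] The network_info blob logged between the pipes above is ordinary JSON, and everything nova needs for guest wiring (MAC, fixed IP, MTU, OVS bridge, tap device) is addressable in it. A short sketch pulling those fields out, with the structure trimmed down from the logged entry:

    import json

    # Trimmed copy of the logged network_info entry (one port).
    network_info = json.loads("""[{
      "id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a",
      "address": "fa:16:3e:12:b3:fa",
      "network": {"bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.3"}]}],
                  "meta": {"mtu": 1442}},
      "devname": "tapb7fa8023-e5"
    }]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], ips,
              "mtu", vif["network"]["meta"]["mtu"])
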
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.443 351492 DEBUG oslo_concurrency.lockutils [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.444 351492 DEBUG nova.network.neutron [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.450 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start _get_guest_xml network_info=[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.464 351492 WARNING nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.479 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.481 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.488 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.489 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
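
[editor's note] The four host.py lines above are nova's two-step probe for a usable CPU controller: the cgroups v1 check comes back empty, the cgroups v2 check succeeds. On a v2 (unified hierarchy) host the same answer can be read straight from cgroup.controllers; a sketch, assuming the default mount point:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On cgroups v2 the root cgroup.controllers file lists every
        # controller the kernel exposes, e.g. "cpuset cpu io memory ...".
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # not a unified (v2) hierarchy
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
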
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.490 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.491 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.492 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.492 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.493 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.494 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.494 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.495 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.496 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.496 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.497 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.498 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
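
[editor's note] The topology walk above reduces to enumeration: with no flavor or image constraints (preferred 0:0:0) the limits default to 65536 per dimension, and for a 1-vCPU guest the only factorization is sockets=1, cores=1, threads=1. A sketch of the enumeration idea (nova additionally orders candidates by preference, omitted here):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # No dimension can exceed the vCPU count itself, which keeps the
        # search space tiny even with the 65536 default limits.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
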
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.503 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.533 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.578 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.579 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.604 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.702 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.703 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.714 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.715 351492 INFO nova.compute.claims [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:15:43 compute-0 podman[443894]: 2025-12-03 02:15:43.886474544 +0000 UTC m=+0.136401230 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 03 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.897 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835045495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.065 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.120 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.131 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.408 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Successfully updated port: 5009f27c-5ce3-46eb-b7aa-e82645a3097e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:15:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:15:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278663102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.435 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.436 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquired lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.436 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:15:44 compute-0 ceph-mon[192821]: pgmap v1842: 321 pgs: 321 active+clean; 115 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 33 op/s
Dec 03 02:15:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2835045495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/278663102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.447 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.472 351492 DEBUG nova.compute.provider_tree [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.492 351492 DEBUG nova.scheduler.client.report [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.521 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.521 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.576 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.577 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:15:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028958713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.601 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.623 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.640 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.643 351492 DEBUG nova.virt.libvirt.vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1950125250',display_name='tempest-AttachInterfacesUnderV243Test-server-1950125250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1950125250',id=6,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBB9OuHdIBdpYaktjGsefgccfH8R9SNK99mHHbJQ9rg+G2U1LTvmjO9Wsnt6ghp9uwnzyNl9odxW0s4EjHMYofeke7VnvOokwl4rSnaOh/gTQhB30j9Q5ponmvnWGOY9dA==',key_name='tempest-keypair-48380121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9efdda7cf984595a9c5a855bae62b0e',ramdisk_id='',reservation_id='r-dnx5z6kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1651825730',owner_user_name='tempest-AttachInterfacesUnderV243Test-1651825730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c7d81f1f9e4989b1eb8b8cf96bbf11',uuid=4f50e501-f565-4e1f-aa02-df921702eff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.644 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converting VIF {"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.646 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.649 351492 DEBUG nova.objects.instance [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.673 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <uuid>4f50e501-f565-4e1f-aa02-df921702eff9</uuid>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <name>instance-00000006</name>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1950125250</nova:name>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:15:43</nova:creationTime>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:user uuid="08c7d81f1f9e4989b1eb8b8cf96bbf11">tempest-AttachInterfacesUnderV243Test-1651825730-project-member</nova:user>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:project uuid="a9efdda7cf984595a9c5a855bae62b0e">tempest-AttachInterfacesUnderV243Test-1651825730</nova:project>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <nova:port uuid="b7fa8023-e50c-4bea-be79-8fbe005f0b8a">
Dec 03 02:15:44 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <system>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <entry name="serial">4f50e501-f565-4e1f-aa02-df921702eff9</entry>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <entry name="uuid">4f50e501-f565-4e1f-aa02-df921702eff9</entry>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </system>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <os>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   </os>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <features>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   </features>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/4f50e501-f565-4e1f-aa02-df921702eff9_disk">
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/4f50e501-f565-4e1f-aa02-df921702eff9_disk.config">
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:44 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:12:b3:fa"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <target dev="tapb7fa8023-e5"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/console.log" append="off"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <video>
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </video>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:15:44 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:15:44 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:15:44 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:15:44 compute-0 nova_compute[351485]: </domain>
Dec 03 02:15:44 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.675 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Preparing to wait for external event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.675 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.676 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.676 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.677 351492 DEBUG nova.virt.libvirt.vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1950125250',display_name='tempest-AttachInterfacesUnderV243Test-server-1950125250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1950125250',id=6,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBB9OuHdIBdpYaktjGsefgccfH8R9SNK99mHHbJQ9rg+G2U1LTvmjO9Wsnt6ghp9uwnzyNl9odxW0s4EjHMYofeke7VnvOokwl4rSnaOh/gTQhB30j9Q5ponmvnWGOY9dA==',key_name='tempest-keypair-48380121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9efdda7cf984595a9c5a855bae62b0e',ramdisk_id='',reservation_id='r-dnx5z6kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1651825730',owner_user_name='tempest-AttachInterfacesUnderV243Test-1651825730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c7d81f1f9e4989b1eb8b8cf96bbf11',uuid=4f50e501-f565-4e1f-aa02-df921702eff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.678 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converting VIF {"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.679 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.680 351492 DEBUG os_vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.681 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.682 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.683 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.693 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7fa8023-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.694 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7fa8023-e5, col_values=(('external_ids', {'iface-id': 'b7fa8023-e50c-4bea-be79-8fbe005f0b8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:b3:fa', 'vm-uuid': '4f50e501-f565-4e1f-aa02-df921702eff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.696 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:44 compute-0 NetworkManager[48912]: <info>  [1764728144.6985] manager: (tapb7fa8023-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.699 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.711 351492 INFO os_vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5')
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.775 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.777 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.777 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Creating image(s)
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.830 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.890 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.951 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.972 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 123 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 36 op/s
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.014 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.031 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.032 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.033 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] No VIF found with MAC fa:16:3e:12:b3:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.034 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Using config drive
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.077 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.089 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.090 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.090 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.091 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.126 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.136 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 a48b4084-369d-432a-9f47-9378cdcc011f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.169 351492 DEBUG nova.policy [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '292dd1da4e67424b855327b32f0623b7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:15:45 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1028958713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.549 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 a48b4084-369d-432a-9f47-9378cdcc011f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:45 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:45.555 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.724 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] resizing rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.867 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Creating config drive at /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config
Dec 03 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.881 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo0mbnonu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.010 351492 DEBUG nova.objects.instance [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'migration_context' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.030 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.030 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Ensure instance console log exists: /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.032 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.032 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.033 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.037 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo0mbnonu" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.088 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.100 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.304 351492 DEBUG nova.network.neutron [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.306 351492 DEBUG nova.network.neutron [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.330 351492 DEBUG oslo_concurrency.lockutils [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.367 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.368 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deleting local config drive /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config because it was imported into RBD.
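An illustrative way to confirm the import above, using the same client id, conf file, pool, and image name shown in the logged rbd command (not part of the log itself):
    rbd info vms/4f50e501-f565-4e1f-aa02-df921702eff9_disk.config --id openstack --conf /etc/ceph/ceph.conf   # should report image format 2
    rbd ls --pool vms --id openstack --conf /etc/ceph/ceph.conf | grep 4f50e501   # the _disk.config image should be listed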
Dec 03 02:15:46 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 02:15:46 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.453 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Successfully created port: ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:15:46 compute-0 ceph-mon[192821]: pgmap v1843: 321 pgs: 321 active+clean; 123 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 36 op/s
Dec 03 02:15:46 compute-0 kernel: tapb7fa8023-e5: entered promiscuous mode
Dec 03 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00066|binding|INFO|Claiming lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a for this chassis.
Dec 03 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00067|binding|INFO|b7fa8023-e50c-4bea-be79-8fbe005f0b8a: Claiming fa:16:3e:12:b3:fa 10.100.0.3
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.529 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.5345] manager: (tapb7fa8023-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.546 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:b3:fa 10.100.0.3'], port_security=['fa:16:3e:12:b3:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4f50e501-f565-4e1f-aa02-df921702eff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9efdda7cf984595a9c5a855bae62b0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '532f80d5-065d-43cb-9604-ad1c2a6e3902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=319776e3-1c91-4ec0-bfb2-2325dfaa1fa2, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=b7fa8023-e50c-4bea-be79-8fbe005f0b8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.549 288528 INFO neutron.agent.ovn.metadata.agent [-] Port b7fa8023-e50c-4bea-be79-8fbe005f0b8a in datapath a5e23dc0-bcc2-406c-bc7f-b978295be94b bound to our chassis
Dec 03 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00068|binding|INFO|Setting lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a ovn-installed in OVS
Dec 03 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00069|binding|INFO|Setting lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a up in Southbound
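The claim/up sequence above can be cross-checked on the chassis; illustrative commands, assuming local OVS access and a connection to the same Southbound DB ovn_controller uses:
    ovs-vsctl get Interface tapb7fa8023-e5 external_ids:iface-id   # should print b7fa8023-e50c-4bea-be79-8fbe005f0b8a
    ovn-sbctl find Port_Binding logical_port=b7fa8023-e50c-4bea-be79-8fbe005f0b8a   # 'up' should now be true and chassis set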
Dec 03 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.554 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.554 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a5e23dc0-bcc2-406c-bc7f-b978295be94b
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.573 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d35b86a2-1fb4-45e4-ad21-cf848666c3f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.574 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa5e23dc0-b1 in ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.577 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa5e23dc0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.577 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[94c29a11-82ae-411c-b460-0999f40c1303]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.578 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cef05539-2a64-4017-ba8b-a417433468e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 systemd-machined[138558]: New machine qemu-6-instance-00000006.
Dec 03 02:15:46 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.598 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[2c65b49c-c19c-4b55-a601-8215211c2392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.633 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ec71ec43-500c-406b-9783-eebc2f172322]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 systemd-udevd[444239]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.680 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4adf057c-1283-4b1b-9a83-216872fa8a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.6848] device (tapb7fa8023-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.6862] device (tapb7fa8023-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.6914] manager: (tapa5e23dc0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.690 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c54239-df22-4fdd-97fd-9138f71e7ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.737 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[72c8581f-5f3c-4f47-8013-0cb40681d284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.743 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2a2949-2e39-4ed1-a00e-ebb66dcba907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.7734] device (tapa5e23dc0-b0): carrier: link connected
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.778 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4726dc-04b8-4ef0-b85a-59e2944531ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.804 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[38a989df-1d46-4f71-a65b-a88fa3989966]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5e23dc0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:e2:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698625, 'reachable_time': 17261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 444268, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.827 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fb417c2f-f7f4-4078-9923-4c582d39ba54]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:e260'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698625, 'tstamp': 698625}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 444269, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.860 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5ab2c4-798f-450c-95c6-532ad7957ca0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5e23dc0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:e2:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698625, 'reachable_time': 17261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 444270, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.915 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbd1a6a-5dff-441f-8b94-99b7e97669aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 165 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 65 op/s
Dec 03 02:15:47 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.014 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.018 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0c94dbc1-9c84-4b1b-9417-e1a910095e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.020 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5e23dc0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.021 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.024 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.021 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5e23dc0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:47 compute-0 kernel: tapa5e23dc0-b0: entered promiscuous mode
Dec 03 02:15:47 compute-0 NetworkManager[48912]: <info>  [1764728147.0255] manager: (tapa5e23dc0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.031 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa5e23dc0-b0, col_values=(('external_ids', {'iface-id': 'f4f388aa-0af5-4918-b8ad-5c74c22057c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.032 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:47 compute-0 ovn_controller[89134]: 2025-12-03T02:15:47Z|00070|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.052 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Releasing lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.052 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance network_info: |[{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.054 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start _get_guest_xml network_info=[{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.061 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.064 351492 WARNING nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.064 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a5e23dc0-bcc2-406c-bc7f-b978295be94b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a5e23dc0-bcc2-406c-bc7f-b978295be94b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.066 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0ac628-ca91-44c2-90db-82722b23cad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.067 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-a5e23dc0-bcc2-406c-bc7f-b978295be94b
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/a5e23dc0-bcc2-406c-bc7f-b978295be94b.pid.haproxy
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID a5e23dc0-bcc2-406c-bc7f-b978295be94b
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.068 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'env', 'PROCESS_TAG=haproxy-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a5e23dc0-bcc2-406c-bc7f-b978295be94b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
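Once haproxy comes up, the proxy can be spot-checked from the host; illustrative commands built from the pidfile and namespace names in the config above:
    cat /var/lib/neutron/external/pids/a5e23dc0-bcc2-406c-bc7f-b978295be94b.pid.haproxy   # the pidfile the earlier 'Unable to access' DEBUG was probing for
    ip netns exec ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b ss -ltnp   # haproxy should be bound to 169.254.169.254:80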
Dec 03 02:15:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.071 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.072 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.077 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.077 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.078 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.078 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.078 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.080 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.080 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.080 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.082 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:15:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640582546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:15:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:15:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640582546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.464 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728147.4639695, 4f50e501-f565-4e1f-aa02-df921702eff9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.465 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Started (Lifecycle Event)
Dec 03 02:15:47 compute-0 ceph-mon[192821]: pgmap v1844: 321 pgs: 321 active+clean; 165 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 65 op/s
Dec 03 02:15:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1640582546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:15:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1640582546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
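These audit entries correspond to the pool-capacity queries nova's RBD driver issues while building the guest; equivalent manual commands (illustrative, run as the same client.openstack identity) would be:
    ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf
    ceph osd pool get-quota volumes --format=json --id openstack --conf /etc/ceph/ceph.conf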
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.492 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.493 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.493 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.500 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728147.47037, 4f50e501-f565-4e1f-aa02-df921702eff9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.501 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Paused (Lifecycle Event)
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.516 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.520 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.529 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
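The numeric states in this message come from the nova.compute.power_state constants (0 = NOSTATE, 3 = PAUSED), which is why the "Paused" lifecycle event corresponds to VM power_state 3; on a host with the nova packages installed this can be confirmed with:
    python3 -c 'from nova.compute import power_state; print(power_state.NOSTATE, power_state.PAUSED)'   # prints: 0 3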
Dec 03 02:15:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533004978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.628 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.632361847 +0000 UTC m=+0.099374873 container create 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.594409703 +0000 UTC m=+0.061422709 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:15:47 compute-0 systemd[1]: Started libpod-conmon-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363.scope.
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.701 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d0f5e97a1c9cf6a7b1ce8133ccb65b7a2748d41d5e4c00f49714ed27a9e8b68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.759 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.775038394 +0000 UTC m=+0.242051400 container init 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.782181226 +0000 UTC m=+0.249194202 container start 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.784 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:15:47 compute-0 podman[444417]: 2025-12-03 02:15:47.791431357 +0000 UTC m=+0.085651224 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:15:47 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : New worker (444494) forked
Dec 03 02:15:47 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : Loading success.
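At this point the per-network metadata proxy container should be running; an illustrative podman check using the container name from the log:
    podman ps --filter name=neutron-haproxy-ovnmeta-a5e23dc0   # expect the container to show as Up
    podman logs neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b   # should show the two NOTICE lines above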
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.821 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.821 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.831 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.831 351492 INFO nova.compute.claims [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:15:47 compute-0 podman[444418]: 2025-12-03 02:15:47.852122025 +0000 UTC m=+0.148427311 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Dec 03 02:15:47 compute-0 podman[444416]: 2025-12-03 02:15:47.852831585 +0000 UTC m=+0.140112316 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec 03 02:15:47 compute-0 podman[444411]: 2025-12-03 02:15:47.871486202 +0000 UTC m=+0.177915634 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 02:15:47 compute-0 podman[444420]: 2025-12-03 02:15:47.87494478 +0000 UTC m=+0.166998196 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125)
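[annotation] The four health_status events above are emitted by podman's healthcheck timers; each container's config_data embeds a 'healthcheck' test script bind-mounted from /var/lib/openstack/healthchecks, and the status plus failing streak are what each event reports. A minimal sketch (assuming podman is on PATH and the kepler container exists locally) of reading the same state programmatically:

    import json
    import subprocess

    # .State.Health carries the same Status / FailingStreak values that
    # appear as health_status / health_failing_streak in the events above.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "kepler"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])  # e.g. healthy 0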
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.032 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
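[annotation] The paired "Running cmd (subprocess)" / "CMD ... returned" lines throughout this log come from oslo.concurrency's processutils.execute, which nova uses to shell out to the ceph CLI. A minimal sketch of the same call (assuming the client.openstack keyring referenced by --id is readable):

    from oslo_concurrency import processutils

    # execute() logs "Running cmd (subprocess)" before the fork and
    # "CMD ... returned: <rc> in <t>s" afterwards, exactly as seen here.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')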
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.112 351492 DEBUG nova.compute.manager [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.114 351492 DEBUG nova.compute.manager [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing instance network info cache due to event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.115 351492 DEBUG oslo_concurrency.lockutils [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.115 351492 DEBUG oslo_concurrency.lockutils [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.116 351492 DEBUG nova.network.neutron [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.174 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Successfully updated port: ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.198 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.200 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.201 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
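[annotation] The Acquiring/Acquired pairs around "refresh_cache-<uuid>" are oslo.concurrency named locks: nova serializes refreshes of an instance's network-info cache so concurrent Neutron events cannot interleave. The same pattern, sketched with lockutils (the lock body here is a placeholder, not nova's actual code):

    from oslo_concurrency import lockutils

    # lockutils.lock() emits the Acquiring/Acquired DEBUG lines seen
    # above while the body runs holding the per-instance lock.
    with lockutils.lock('refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c'):
        pass  # refresh the instance's network info cache here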
Dec 03 02:15:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2582215624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.285 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.291 351492 DEBUG nova.virt.libvirt.vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1673813976',display_name='tempest-ServersTestJSON-server-1673813976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1673813976',id=7,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJYX2+s+Cn7+6pt2DjGw9oFEuqJNIKKTlZXH+fYJLmbL39TCISRXMer1dBsYcpnaM6SERWPVMBKkG2FwLQyhKQV9uLnyTX7LXwX8AMU3L/hKCWN57p10Cgl0YPkCXm4JFA==',key_name='tempest-keypair-555022383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a1cf3657daa4d798d912ceaae049aa0',ramdisk_id='',reservation_id='r-cpufgz7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-263993337',owner_user_name='tempest-ServersTestJSON-263993337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7f624afcf845f786397f8aa1bb2a63',uuid=07ce21e6-3627-467a-9b7e-d9045308576c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.293 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converting VIF {"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.296 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
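[annotation] nova_to_osvif_vif translates nova's legacy VIF dict into an os-vif versioned object; every field printed in the "Converted object" line above maps one-to-one. A sketch constructing the equivalent object directly (field values copied from the log; the Network and port_profile members are omitted for brevity):

    from os_vif.objects import vif as vif_obj

    v = vif_obj.VIFOpenVSwitch(
        id='5009f27c-5ce3-46eb-b7aa-e82645a3097e',
        address='fa:16:3e:3a:ad:09',
        bridge_name='br-int',
        vif_name='tap5009f27c-5c',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)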
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.299 351492 DEBUG nova.objects.instance [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 07ce21e6-3627-467a-9b7e-d9045308576c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.323 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <uuid>07ce21e6-3627-467a-9b7e-d9045308576c</uuid>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <name>instance-00000007</name>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <nova:name>tempest-ServersTestJSON-server-1673813976</nova:name>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:15:47</nova:creationTime>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:user uuid="8a7f624afcf845f786397f8aa1bb2a63">tempest-ServersTestJSON-263993337-project-member</nova:user>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:project uuid="5a1cf3657daa4d798d912ceaae049aa0">tempest-ServersTestJSON-263993337</nova:project>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <nova:port uuid="5009f27c-5ce3-46eb-b7aa-e82645a3097e">
Dec 03 02:15:48 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <system>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <entry name="serial">07ce21e6-3627-467a-9b7e-d9045308576c</entry>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <entry name="uuid">07ce21e6-3627-467a-9b7e-d9045308576c</entry>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </system>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <os>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   </os>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <features>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   </features>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/07ce21e6-3627-467a-9b7e-d9045308576c_disk">
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/07ce21e6-3627-467a-9b7e-d9045308576c_disk.config">
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:48 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:3a:ad:09"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <target dev="tap5009f27c-5c"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/console.log" append="off"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <video>
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </video>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:15:48 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:15:48 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:15:48 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:15:48 compute-0 nova_compute[351485]: </domain>
Dec 03 02:15:48 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.339 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Preparing to wait for external event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.339 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.340 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.340 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.341 351492 DEBUG nova.virt.libvirt.vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1673813976',display_name='tempest-ServersTestJSON-server-1673813976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1673813976',id=7,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJYX2+s+Cn7+6pt2DjGw9oFEuqJNIKKTlZXH+fYJLmbL39TCISRXMer1dBsYcpnaM6SERWPVMBKkG2FwLQyhKQV9uLnyTX7LXwX8AMU3L/hKCWN57p10Cgl0YPkCXm4JFA==',key_name='tempest-keypair-555022383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a1cf3657daa4d798d912ceaae049aa0',ramdisk_id='',reservation_id='r-cpufgz7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-263993337',owner_user_name='tempest-ServersTestJSON-263993337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7f624afcf845f786397f8aa1bb2a63',uuid=07ce21e6-3627-467a-9b7e-d9045308576c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.341 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converting VIF {"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.343 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.343 351492 DEBUG os_vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.344 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.344 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.345 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.350 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.350 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5009f27c-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.351 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5009f27c-5c, col_values=(('external_ids', {'iface-id': '5009f27c-5ce3-46eb-b7aa-e82645a3097e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:ad:09', 'vm-uuid': '07ce21e6-3627-467a-9b7e-d9045308576c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
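[annotation] The AddBridgeCommand/AddPortCommand/DbSetCommand transactions are os-vif driving the local OVSDB through ovsdbapp; stamping iface-id into the interface's external_ids is what lets ovn-controller claim the port at 02:15:49 below. A rough sketch of the same three commands with ovsdbapp's Open_vSwitch schema API (the socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # One transaction, three commands, mirroring the txn log lines above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap5009f27c-5c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5009f27c-5c',
            ('external_ids', {
                'iface-id': '5009f27c-5ce3-46eb-b7aa-e82645a3097e',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:3a:ad:09',
                'vm-uuid': '07ce21e6-3627-467a-9b7e-d9045308576c'})))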
Dec 03 02:15:48 compute-0 NetworkManager[48912]: <info>  [1764728148.3551] manager: (tap5009f27c-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.357 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.368 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.370 351492 INFO os_vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c')
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.435 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.438 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.439 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] No VIF found with MAC fa:16:3e:3a:ad:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.440 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Using config drive
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.481 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
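[annotation] The "rbd image ... does not exist" DEBUG lines are nova probing the vms pool through the python rbd bindings before deciding whether to import. A sketch of the same existence check with rados/rbd (same cluster credentials as the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image raises ImageNotFound, which nova logs as
        # "rbd image ... does not exist".
        with rbd.Image(ioctx, '07ce21e6-3627-467a-9b7e-d9045308576c_disk.config'):
            print('image exists')
    except rbd.ImageNotFound:
        print('image does not exist')
    finally:
        ioctx.close()
        cluster.shutdown()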
Dec 03 02:15:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/533004978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2582215624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.494 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:15:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482587053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.616 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.629 351492 DEBUG nova.compute.provider_tree [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.656 351492 DEBUG nova.scheduler.client.report [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
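[annotation] The inventory dict reported to placement encodes usable capacity as (total - reserved) × allocation_ratio per resource class. Worked out for the values above, as a quick check:

    # Capacity placement will schedule against on this node:
    vcpu   = (8    - 0)   * 4.0   # 32 vCPUs
    memory = (7679 - 512) * 1.0   # 7167 MB
    disk   = (59   - 1)   * 0.9   # 52.2 GB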
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.685 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.686 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.740 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.741 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.760 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.780 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.885 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.888 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.889 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Creating image(s)
Dec 03 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.940 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 165 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 65 op/s
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.003 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.061 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.075 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.107 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Creating config drive at /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.117 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xwj8d11 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.151 351492 DEBUG nova.policy [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4dc5f09973d5430fb9d8106a1a0a2479', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5875dd9a17274c38a2ae81fb3759558e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.160 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
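[annotation] The prlimit wrapper around qemu-img info caps the child's address space at 1 GiB and its CPU time at 30 s, so a malformed base image cannot balloon or wedge the compute service. The same guard, sketched with oslo.concurrency:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 ** 3,  # --as=1073741824
        cpu_time=30)              # --cpu=30
    out, _ = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    info = json.loads(out)
    print(info['format'], info['virtual-size'])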
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.161 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.162 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.162 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.206 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.213 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 5c870f25-6c33-4e95-b540-5a806454f556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.270 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xwj8d11" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.319 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.345 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:49 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1482587053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:15:49 compute-0 ceph-mon[192821]: pgmap v1845: 321 pgs: 321 active+clean; 165 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 65 op/s
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.656 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 5c870f25-6c33-4e95-b540-5a806454f556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.704 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.704 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deleting local config drive /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config because it was imported into RBD.
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.772 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] resizing rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
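[annotation] After the rbd import, nova grows the image to the flavor's root disk size (1 GiB for this m1.nano boot). The same resize via the rbd bindings, reusing the connection pattern from the earlier sketch:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    with rbd.Image(ioctx, '5c870f25-6c33-4e95-b540-5a806454f556_disk') as image:
        image.resize(1073741824)  # bytes, matching the resize logged above
    ioctx.close()
    cluster.shutdown()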
Dec 03 02:15:49 compute-0 kernel: tap5009f27c-5c: entered promiscuous mode
Dec 03 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.7963] manager: (tap5009f27c-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Dec 03 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00071|binding|INFO|Claiming lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e for this chassis.
Dec 03 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00072|binding|INFO|5009f27c-5ce3-46eb-b7aa-e82645a3097e: Claiming fa:16:3e:3a:ad:09 10.100.0.10
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.817 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:ad:09 10.100.0.10'], port_security=['fa:16:3e:3a:ad:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07ce21e6-3627-467a-9b7e-d9045308576c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a1cf3657daa4d798d912ceaae049aa0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3e8f04e-3c5d-406e-b48c-aa69bd7ba1c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427d4c89-de71-4fff-872a-bb6406d77b1e, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=5009f27c-5ce3-46eb-b7aa-e82645a3097e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.820 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 5009f27c-5ce3-46eb-b7aa-e82645a3097e in datapath 9f9dd264-e73a-4200-ba74-0833c40bd14c bound to our chassis
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.823 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f9dd264-e73a-4200-ba74-0833c40bd14c
Dec 03 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00073|binding|INFO|Setting lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e ovn-installed in OVS
Dec 03 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00074|binding|INFO|Setting lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e up in Southbound
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.838 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2ceca406-2550-40a1-81b6-329da961146d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.840 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f9dd264-e1 in ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.841 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f9dd264-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.842 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cc16ac5e-1226-46d2-8e52-fae3c929f2b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.844 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[410cc115-4f5f-4290-b348-3e872202a046]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
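
The recurring "privsep: reply[...]" lines are oslo.privsep round-trips: the unprivileged agent process sends a whitelisted function call to a root helper daemon and logs each reply it gets back. A sketch of how such an entrypoint is declared, following the neutron.privileged pattern (set_link_up is an illustrative name, not neutron's actual helper):

    from oslo_privsep import capabilities, priv_context

    # Functions decorated below run in a separate daemon that holds only
    # CAP_NET_ADMIN instead of full root.
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def set_link_up(ifname):
        # Executes in the privileged daemon; the caller receives the return
        # value as one of the reply[...] messages seen above.
        from pyroute2 import IPRoute
        with IPRoute() as ipr:
            index = ipr.link_lookup(ifname=ifname)[0]
            ipr.link('set', index=index, state='up')
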
Dec 03 02:15:49 compute-0 systemd-udevd[444800]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:15:49 compute-0 systemd-machined[138558]: New machine qemu-7-instance-00000007.
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.867 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[cca22ffe-0c30-48bb-b09e-6e07e4e9164c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.8719] device (tap5009f27c-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.872 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:49 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec 03 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.8757] device (tap5009f27c-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.908 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4b681095-2aa2-42ba-95fb-2b1a98d82650]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.953 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c886956f-ea74-4412-9cec-030e7b3ae07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.965 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5475c0-6eea-4d04-a577-9fbf48844440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.9676] manager: (tap9f9dd264-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.017 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdcbaf1-87c9-4429-83f8-bb2b13111be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.021 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb38c84-d41a-47ea-806c-dc162687857b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.023 351492 DEBUG nova.objects.instance [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lazy-loading 'migration_context' on Instance uuid 5c870f25-6c33-4e95-b540-5a806454f556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.043 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.044 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Ensure instance console log exists: /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.045 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.045 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.046 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:50 compute-0 NetworkManager[48912]: <info>  [1764728150.0602] device (tap9f9dd264-e0): carrier: link connected
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.071 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[56d46a39-133b-4293-b407-da611f968970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.092 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2c749674-9692-4f32-8d67-ddaa45c102a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f9dd264-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:07:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698953, 'reachable_time': 24982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 444850, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.130 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[51a51c17-7d2c-42ad-adc3-945e83757cee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:719'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698953, 'tstamp': 698953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 444851, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.157 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aec407-61ce-4647-a085-b3d93d68509b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f9dd264-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:07:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698953, 'reachable_time': 24982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 444852, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
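
The two RTM_NEWLINK dumps and the RTM_NEWADDR message above are netlink answers fetched inside the ovnmeta namespace (note the 'target' field in each header). A sketch of the same query with pyroute2, assuming the namespace still exists on the host:

    from pyroute2 import NetNS

    # Namespace and interface names are the ones visible in the dumps above.
    with NetNS('ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c') as ns:
        index = ns.link_lookup(ifname='tap9f9dd264-e1')[0]
        for msg in ns.get_links(index):
            # The same attributes that appear in the logged RTM_NEWLINK payload.
            print(msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_ADDRESS'),
                  msg.get_attr('IFLA_OPERSTATE'))
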
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.202 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d63287b9-ea12-4711-ad1b-ef3e3351666c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.293 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c436adbb-4276-4784-afba-e6f82ebba8eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.296 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f9dd264-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.303 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.304 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f9dd264-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.308 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:50 compute-0 NetworkManager[48912]: <info>  [1764728150.3099] manager: (tap9f9dd264-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec 03 02:15:50 compute-0 kernel: tap9f9dd264-e0: entered promiscuous mode
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.318 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.319 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f9dd264-e0, col_values=(('external_ids', {'iface-id': '450cbc12-7d6b-43b0-b43f-cc78dcc16b25'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
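
The DelPortCommand/AddPortCommand/DbSetCommand entries are one ovsdbapp interaction: remove any stale tap9f9dd264-e0 port from br-ex, plug it into br-int, and point its iface-id at the OVN port. Roughly the same sequence through ovsdbapp's Open_vSwitch API (the db.sock path is the usual default and an assumption about this host):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server over its unix socket.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap9f9dd264-e0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap9f9dd264-e0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap9f9dd264-e0',
            ('external_ids',
             {'iface-id': '450cbc12-7d6b-43b0-b43f-cc78dcc16b25'})))
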
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.321 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:50 compute-0 ovn_controller[89134]: 2025-12-03T02:15:50Z|00075|binding|INFO|Releasing lport 450cbc12-7d6b-43b0-b43f-cc78dcc16b25 from this chassis (sb_readonly=0)
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.351 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f9dd264-e73a-4200-ba74-0833c40bd14c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f9dd264-e73a-4200-ba74-0833c40bd14c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
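
The "Unable to access ... pid.haproxy" DEBUG line is expected on first provisioning: no proxy has run for this network yet, so the pidfile read returns nothing and the agent starts a fresh haproxy. A simplified sketch of that tolerant read (a hypothetical reconstruction, not neutron's exact code):

    def get_value_from_file(path, converter=int):
        # A missing or unparsable pidfile is a normal first-run condition;
        # log-and-continue rather than raise, as the DEBUG line above shows.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/'
        '9f9dd264-e73a-4200-ba74-0833c40bd14c.pid.haproxy')
    print(pid)  # None here, so a new haproxy gets spawned
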
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.353 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[58866e01-d737-4a02-a364-85c93a4aa8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.354 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-9f9dd264-e73a-4200-ba74-0833c40bd14c
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/9f9dd264-e73a-4200-ba74-0833c40bd14c.pid.haproxy
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID 9f9dd264-e73a-4200-ba74-0833c40bd14c
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
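
The haproxy_cfg block above is rendered per network from a template, with the network UUID substituted into the log tag, pidfile, and X-OVN-Network-ID header. A minimal sketch of that kind of rendering with string.Template; the template text is abridged from the logged config and is not neutron's actual template:

    from string import Template

    CFG = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    net = '9f9dd264-e73a-4200-ba74-0833c40bd14c'
    print(CFG.substitute(
        network_id=net,
        pidfile=f'/var/lib/neutron/external/pids/{net}.pid.haproxy',
        socket_path='/var/lib/neutron/metadata_proxy',
    ))
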
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.355 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'env', 'PROCESS_TAG=haproxy-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f9dd264-e73a-4200-ba74-0833c40bd14c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 03 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.370 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.467 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728150.4670317, 07ce21e6-3627-467a-9b7e-d9045308576c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.468 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Started (Lifecycle Event)
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.487 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.495 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728150.4672446, 07ce21e6-3627-467a-9b7e-d9045308576c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.495 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Paused (Lifecycle Event)
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.513 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.519 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.543 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] During sync_power_state the instance has a pending task (spawning). Skip.
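
The Started/Paused pair is normal during spawn: libvirt launches the guest paused, so the lifecycle handler sees DB power_state 0 (NOSTATE) against VM power_state 3 (PAUSED) and skips syncing because the instance still has a pending task. A simplified sketch of that decision; the constants mirror nova.compute.power_state, but the function is an illustration, not nova's full handler:

    # Values from nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # While a task such as 'spawning' is in flight the hypervisor state
        # is transient, so leave the database record alone.
        if task_state is not None:
            return f'skip: pending task {task_state}'
        if db_power_state != vm_power_state:
            return f'update DB: {db_power_state} -> {vm_power_state}'
        return 'in sync'

    print(sync_power_state(NOSTATE, PAUSED, 'spawning'))  # skip, as logged
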
Dec 03 02:15:50 compute-0 podman[444926]: 2025-12-03 02:15:50.944768194 +0000 UTC m=+0.112119263 container create 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 02:15:50 compute-0 podman[444926]: 2025-12-03 02:15:50.898429243 +0000 UTC m=+0.065780352 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:15:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 218 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.2 MiB/s wr, 120 op/s
Dec 03 02:15:51 compute-0 systemd[1]: Started libpod-conmon-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b.scope.
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.013 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Successfully created port: d7b1b965-f304-40eb-9f34-c63af54da9f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.020 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.023 351492 DEBUG nova.network.neutron [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updated VIF entry in instance network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.024 351492 DEBUG nova.network.neutron [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.069 351492 DEBUG oslo_concurrency.lockutils [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.070 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.071 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance network_info: |[{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c2115dbbdd79e6878ea3d1b5fd20b2e30c3ab979ab90b0f907915a9dad459d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.076 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start _get_guest_xml network_info=[{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.088 351492 WARNING nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.099 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.100 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:15:51 compute-0 podman[444926]: 2025-12-03 02:15:51.109182496 +0000 UTC m=+0.276533595 container init 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.108 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.109 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.112 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.112 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.113 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.113 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.113 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.114 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.114 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.115 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.115 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.115 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.116 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.116 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
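
The nova.virt.hardware lines walk from the flavor/image constraints (0 meaning "no preference", capped at 65536 per dimension) to the single valid topology for one vCPU. A sketch of the enumeration step, simplified from what _get_possible_cpu_topologies does:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product is vcpus
        # and which respect the per-dimension limits.
        topologies = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topologies.append((s, c, t))
        return topologies

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log
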
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.121 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:51 compute-0 podman[444926]: 2025-12-03 02:15:51.124485419 +0000 UTC m=+0.291836478 container start 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:15:51 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : New worker (444946) forked
Dec 03 02:15:51 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : Loading success.
Dec 03 02:15:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/45174843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.616 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.654 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.662 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:52 compute-0 ceph-mon[192821]: pgmap v1846: 321 pgs: 321 active+clean; 218 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.2 MiB/s wr, 120 op/s
Dec 03 02:15:52 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/45174843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/965893036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.202 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
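
Nova shells out to "ceph mon dump --format=json" (twice here, roughly 0.5 s each) to learn the monitor addresses it embeds in the guest's RBD disk definitions. A sketch of running and parsing the same command; it assumes a reachable cluster and the client.openstack keyring:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    ).stdout

    # The JSON carries an epoch, the cluster fsid, and one entry per monitor.
    mon_map = json.loads(out)
    for mon in mon_map.get('mons', []):
        print(mon.get('name'), mon.get('addr'))
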
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.203 351492 DEBUG nova.virt.libvirt.vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.204 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.205 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.206 351492 DEBUG nova.objects.instance [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.229 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <uuid>a48b4084-369d-432a-9f47-9378cdcc011f</uuid>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <name>instance-00000008</name>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <nova:name>tempest-ServerActionsTestJSON-server-925455337</nova:name>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:15:51</nova:creationTime>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:user uuid="292dd1da4e67424b855327b32f0623b7">tempest-ServerActionsTestJSON-225723275-project-member</nova:user>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:project uuid="b95bb4c57d3543acb25997bedee9dec3">tempest-ServerActionsTestJSON-225723275</nova:project>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <nova:port uuid="ee5c2dfc-04c3-400a-8073-6f2c65dcea03">
Dec 03 02:15:52 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <system>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <entry name="serial">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <entry name="uuid">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </system>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <os>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   </os>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <features>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   </features>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk">
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk.config">
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:52 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:ff:dd:2f"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <target dev="tapee5c2dfc-04"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/console.log" append="off"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <video>
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </video>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:15:52 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:15:52 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:15:52 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:15:52 compute-0 nova_compute[351485]: </domain>
Dec 03 02:15:52 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.230 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Preparing to wait for external event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.230 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.231 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.231 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.231 351492 DEBUG nova.virt.libvirt.vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.232 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.232 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.233 351492 DEBUG os_vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.234 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.235 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.235 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.241 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.241 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee5c2dfc-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.242 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee5c2dfc-04, col_values=(('external_ids', {'iface-id': 'ee5c2dfc-04c3-400a-8073-6f2c65dcea03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:dd:2f', 'vm-uuid': 'a48b4084-369d-432a-9f47-9378cdcc011f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:52 compute-0 NetworkManager[48912]: <info>  [1764728152.2447] manager: (tapee5c2dfc-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.246 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.258 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.258 351492 INFO os_vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.332 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.333 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.334 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] No VIF found with MAC fa:16:3e:ff:dd:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.335 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Using config drive
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.386 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.544 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.544 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.545 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.545 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.545 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Processing event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.546 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.547 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.547 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.547 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.548 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] No waiting events found dispatching network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.548 351492 WARNING nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received unexpected event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a for instance with vm_state building and task_state spawning.
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.549 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.549 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing instance network info cache due to event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.550 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.550 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.551 351492 DEBUG nova.network.neutron [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.554 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.563 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728152.5622337, 4f50e501-f565-4e1f-aa02-df921702eff9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.564 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Resumed (Lifecycle Event)
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.566 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.587 351492 INFO nova.virt.libvirt.driver [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance spawned successfully.
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.587 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.594 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.601 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.612 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.612 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.613 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.613 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.613 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.614 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.624 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.675 351492 INFO nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 16.95 seconds to spawn the instance on the hypervisor.
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.676 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.745 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Creating config drive at /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.753 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9acbjlf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.796 351492 INFO nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 18.13 seconds to build instance.
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.823 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.898 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9acbjlf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.955 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.962 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config a48b4084-369d-432a-9f47-9378cdcc011f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.988 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Successfully updated port: d7b1b965-f304-40eb-9f34-c63af54da9f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:15:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 234 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 746 KiB/s rd, 5.8 MiB/s wr, 98 op/s
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.010 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.011 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquired lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.011 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:15:53 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/965893036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.188 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.243 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config a48b4084-369d-432a-9f47-9378cdcc011f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.244 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deleting local config drive /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config because it was imported into RBD.
Dec 03 02:15:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:53 compute-0 kernel: tapee5c2dfc-04: entered promiscuous mode
Dec 03 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.3376] manager: (tapee5c2dfc-04): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec 03 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00076|binding|INFO|Claiming lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for this chassis.
Dec 03 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00077|binding|INFO|ee5c2dfc-04c3-400a-8073-6f2c65dcea03: Claiming fa:16:3e:ff:dd:2f 10.100.0.9
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.340 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.353 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.355 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a bound to our chassis
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.359 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.374 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[db8df650-f2cf-4bd0-9b3b-65e4b4c3dea0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.375 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fdf214a-01 in ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.379 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fdf214a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.380 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[37dc9de2-4cd9-4473-bbe1-9f20abb3f43a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.381 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[776c8ffc-89e0-4816-a48b-481f1c781dc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00078|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 up in Southbound
Dec 03 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00079|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 ovn-installed in OVS
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.387 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.394 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.405 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a0aa98-960a-4f7b-bbf6-0863e44af025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 systemd-machined[138558]: New machine qemu-8-instance-00000008.
Dec 03 02:15:53 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.432 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[129dc59c-fd10-410f-b62c-1654f4654c31]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 systemd-udevd[445093]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.4556] device (tapee5c2dfc-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.4593] device (tapee5c2dfc-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.484 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[07713f59-3f22-4558-9a4d-12cd3b377d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 systemd-udevd[445097]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.4940] manager: (tap2fdf214a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.496 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f5131873-4b44-4942-a6ee-b39705ba4d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.540 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.541 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[8a01c723-7698-484c-91a8-526fececb319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.557 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[3be02d87-c169-4c6f-97db-21f64352bd77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.5859] device (tap2fdf214a-00): carrier: link connected
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.594 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[746eb6fa-2d2a-45e9-9de5-52afb70257ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.613 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[93bb3556-130b-409e-93c6-fa446b5524c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699306, 'reachable_time': 26989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445123, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.635 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0c099eb5-342f-4fc5-ac35-d77556a4b53d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:62d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699306, 'tstamp': 699306}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445124, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.655 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcf73ea-6e58-4683-be59-d4a85b42c8ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699306, 'reachable_time': 26989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445125, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.704 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[21c88b52-587b-4de7-aab3-6d3719d6f322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.796 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c64fe696-67e0-4b97-845f-e01e30e30ae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.797 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.797 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.798 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fdf214a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.8004] manager: (tap2fdf214a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.800 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 kernel: tap2fdf214a-00: entered promiscuous mode
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.805 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00080|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.804 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fdf214a-00, col_values=(('external_ids', {'iface-id': 'c8314dfe-5b76-4819-9b3e-1cb76a272253'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.824 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.827 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 03 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.827 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.831 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3c93b3-4359-4add-8bed-571fb440b6fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.831 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID 2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.832 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'env', 'PROCESS_TAG=haproxy-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fdf214a-0f6e-4e5d-b449-e1988827937a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 03 02:15:54 compute-0 ceph-mon[192821]: pgmap v1847: 321 pgs: 321 active+clean; 234 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 746 KiB/s rd, 5.8 MiB/s wr, 98 op/s
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.250 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728154.2493758, a48b4084-369d-432a-9f47-9378cdcc011f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.251 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Started (Lifecycle Event)
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.274 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.280 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728154.2498405, a48b4084-369d-432a-9f47-9378cdcc011f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.281 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Paused (Lifecycle Event)
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.303 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.311 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.331 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.371368343 +0000 UTC m=+0.069092736 container create a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.339917663 +0000 UTC m=+0.037642076 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:15:54 compute-0 systemd[1]: Started libpod-conmon-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5.scope.
Dec 03 02:15:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087efa0144787524a70b8446fc5a09fbd51303045924a94f4a2b128c2b8cbdbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.523676842 +0000 UTC m=+0.221401265 container init a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 03 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.54091817 +0000 UTC m=+0.238642593 container start a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 02:15:54 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : New worker (445218) forked
Dec 03 02:15:54 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : Loading success.
Dec 03 02:15:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 242 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 4.5 MiB/s wr, 107 op/s
Dec 03 02:15:56 compute-0 ceph-mon[192821]: pgmap v1848: 321 pgs: 321 active+clean; 242 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 4.5 MiB/s wr, 107 op/s
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.381 351492 DEBUG nova.compute.manager [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.382 351492 DEBUG nova.compute.manager [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing instance network info cache due to event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.382 351492 DEBUG oslo_concurrency.lockutils [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.499 351492 DEBUG nova.network.neutron [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated VIF entry in instance network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.500 351492 DEBUG nova.network.neutron [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.519 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.704 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.738 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Releasing lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.738 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance network_info: |[{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.739 351492 DEBUG oslo_concurrency.lockutils [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.739 351492 DEBUG nova.network.neutron [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.746 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start _get_guest_xml network_info=[{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.761 351492 WARNING nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.777 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.779 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.786 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.786 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.787 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.788 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.789 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.789 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.790 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.790 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.791 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.791 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.791 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.792 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.793 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.793 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.799 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 973 KiB/s rd, 4.5 MiB/s wr, 135 op/s
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.185 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.187 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.188 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.188 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.189 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Processing event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.189 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.190 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.190 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.191 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.191 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] No waiting events found dispatching network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.192 351492 WARNING nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received unexpected event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e for instance with vm_state building and task_state spawning.
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.194 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.216 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728157.2033532, 07ce21e6-3627-467a-9b7e-d9045308576c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.217 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Resumed (Lifecycle Event)
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.230 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.264 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.267 351492 INFO nova.virt.libvirt.driver [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance spawned successfully.
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.268 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.278 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.306 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.317 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.318 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.320 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.321 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843984994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.323 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.324 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.351 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.384 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.392 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.452 351492 INFO nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 17.76 seconds to spawn the instance on the hypervisor.
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.453 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.514 351492 INFO nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 18.92 seconds to build instance.
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.526 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:15:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966741818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.903 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.905 351492 DEBUG nova.virt.libvirt.vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1318824371',display_name='tempest-ServersTestManualDisk-server-1318824371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1318824371',id=9,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjjprZxgO/4fBzfH66ApAPdvyVvzXxf8Ff5aorWRcZSUbk0SJJUQELjud9zhnFrHG5MNyoaXEfhhqd7MMh1lMDbphtAOFjo2kbDR4EPXiA+56V0JD9bhhKqPo/y7SQ3BA==',key_name='tempest-keypair-1645493537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5875dd9a17274c38a2ae81fb3759558e',ramdisk_id='',reservation_id='r-a0h400yy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-632797169',owner_user_name='tempest-ServersTestManualDisk-632797169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dc5f09973d5430fb9d8106a1a0a2479',uuid=5c870f25-6c33-4e95-b540-5a806454f556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.905 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converting VIF {"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.906 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.908 351492 DEBUG nova.objects.instance [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c870f25-6c33-4e95-b540-5a806454f556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.928 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <uuid>5c870f25-6c33-4e95-b540-5a806454f556</uuid>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <name>instance-00000009</name>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <nova:name>tempest-ServersTestManualDisk-server-1318824371</nova:name>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:15:56</nova:creationTime>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:user uuid="4dc5f09973d5430fb9d8106a1a0a2479">tempest-ServersTestManualDisk-632797169-project-member</nova:user>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:project uuid="5875dd9a17274c38a2ae81fb3759558e">tempest-ServersTestManualDisk-632797169</nova:project>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <nova:port uuid="d7b1b965-f304-40eb-9f34-c63af54da9f4">
Dec 03 02:15:57 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <system>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <entry name="serial">5c870f25-6c33-4e95-b540-5a806454f556</entry>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <entry name="uuid">5c870f25-6c33-4e95-b540-5a806454f556</entry>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </system>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <os>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   </os>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <features>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   </features>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/5c870f25-6c33-4e95-b540-5a806454f556_disk">
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/5c870f25-6c33-4e95-b540-5a806454f556_disk.config">
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       </source>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:15:57 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:57:b1:4a"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <target dev="tapd7b1b965-f3"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/console.log" append="off"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <video>
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </video>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:15:57 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:15:57 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:15:57 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:15:57 compute-0 nova_compute[351485]: </domain>
Dec 03 02:15:57 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
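Everything between "End _get_guest_xml" and the driver.py reference above is a single multi-line debug message: the exact libvirt domain XML nova defined. When reading dumps like this, a tiny parser helps surface the storage layout; a stdlib-only sketch, assuming the <domain> block has been saved to a file (element and attribute names match the dump above):

    # List each disk's target device, transport protocol and image name
    # from a libvirt domain XML dump such as the one logged above.
    import xml.etree.ElementTree as ET

    def list_disks(xml_path):
        root = ET.parse(xml_path).getroot()
        for disk in root.findall("./devices/disk"):
            source, target = disk.find("source"), disk.find("target")
            yield (target.get("dev"),       # "vda" and "sda" above
                   source.get("protocol"),  # "rbd" for both disks here
                   source.get("name"))      # "vms/<uuid>_disk[.config]"

    for dev, proto, name in list_disks("dom.xml"):
        print(dev, proto, name)

Both disks point at the vms pool over RBD, which is why the config drive built later in this log has to be imported into RBD rather than left on local disk.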
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.929 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Preparing to wait for external event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.929 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.937 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.938 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.938 351492 DEBUG nova.virt.libvirt.vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1318824371',display_name='tempest-ServersTestManualDisk-server-1318824371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1318824371',id=9,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjjprZxgO/4fBzfH66ApAPdvyVvzXxf8Ff5aorWRcZSUbk0SJJUQELjud9zhnFrHG5MNyoaXEfhhqd7MMh1lMDbphtAOFjo2kbDR4EPXiA+56V0JD9bhhKqPo/y7SQ3BA==',key_name='tempest-keypair-1645493537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5875dd9a17274c38a2ae81fb3759558e',ramdisk_id='',reservation_id='r-a0h400yy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-632797169',owner_user_name='tempest-ServersTestManualDisk-632797169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dc5f09973d5430fb9d8106a1a0a2479',uuid=5c870f25-6c33-4e95-b540-5a806454f556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.939 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converting VIF {"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.939 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.939 351492 DEBUG os_vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.940 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.940 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.941 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.946 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.946 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7b1b965-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.948 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7b1b965-f3, col_values=(('external_ids', {'iface-id': 'd7b1b965-f304-40eb-9f34-c63af54da9f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:b1:4a', 'vm-uuid': '5c870f25-6c33-4e95-b540-5a806454f556'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
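The two OVSDB commands above are the entire host-side plug: add the tap port to br-int, then set external_ids on its Interface row so ovn-controller can match the port to the Neutron port (the "Claiming lport" lines further down). For reproducing this by hand on a test host, the same transaction as a single ovs-vsctl call, values copied from the log; a sketch only, not what nova itself executes (it goes through ovsdbapp, as logged):

    import subprocess

    # Equivalent of AddPortCommand(may_exist=True) + DbSetCommand above.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tapd7b1b965-f3",
         "--", "set", "Interface", "tapd7b1b965-f3",
         "external_ids:iface-id=d7b1b965-f304-40eb-9f34-c63af54da9f4",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:57:b1:4a",
         "external_ids:vm-uuid=5c870f25-6c33-4e95-b540-5a806454f556"],
        check=True,
    )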
Dec 03 02:15:57 compute-0 NetworkManager[48912]: <info>  [1764728157.9536] manager: (tapd7b1b965-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.956 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.963 351492 INFO os_vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3')
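"Plugging vif" through "Successfully plugged vif" is os_vif driving its ovs plugin. A minimal sketch of the call that produces this sequence, assuming os_vif's public initialize()/plug() API; the field values are copied from the VIFOpenVSwitch repr in the log, and a real caller would populate the full network/subnet objects and run with privsep privileges:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads registered plugins, "ovs" among them

    net = network.Network(id="e0e44891-e46c-41a0-a083-a444c0d34e1c",
                          bridge="br-int", mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id="d7b1b965-f304-40eb-9f34-c63af54da9f4",
        address="fa:16:3e:57:b1:4a",
        bridge_name="br-int",
        vif_name="tapd7b1b965-f3",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        network=net,
    )
    inst = instance_info.InstanceInfo(
        uuid="5c870f25-6c33-4e95-b540-5a806454f556",
        name="instance-00000009")
    os_vif.plug(ovs_vif, inst)  # emits OVSDB transactions like the ones above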
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.042 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.045 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.047 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] No VIF found with MAC fa:16:3e:57:b1:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.049 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Using config drive
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.113 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:58 compute-0 ceph-mon[192821]: pgmap v1849: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 973 KiB/s rd, 4.5 MiB/s wr, 135 op/s
Dec 03 02:15:58 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3843984994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:58 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2966741818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:15:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.961 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Creating config drive at /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config
Dec 03 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.977 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpryjnql8w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.5 MiB/s wr, 107 op/s
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.015 351492 DEBUG nova.network.neutron [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updated VIF entry in instance network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.018 351492 DEBUG nova.network.neutron [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.059 351492 DEBUG oslo_concurrency.lockutils [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.137 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpryjnql8w" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
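The config drive is an ordinary ISO 9660 image with Joliet (-J) and Rock Ridge (-r) extensions and the volume label config-2 that guests probe for; the flags below are exactly the ones in the logged command. A sketch for rebuilding one from a staged metadata tree, assuming mkisofs (or genisoimage) is on PATH; the directory argument is whatever was staged, /tmp/tmpryjnql8w in the log:

    import subprocess

    def make_config_drive(iso_path, staged_dir, publisher):
        # Note: "-publisher" takes a single argv entry; journald renders the
        # multi-word value above without quotes, but it is one argument.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher,
             "-quiet", "-J", "-r", "-V", "config-2", staged_dir],
            check=True,
        )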
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.204 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.216 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config 5c870f25-6c33-4e95-b540-5a806454f556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.410 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.413 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.414 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.415 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.416 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Processing event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.416 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.417 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.418 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.418 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.419 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.420 351492 WARNING nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state building and task_state spawning.
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.427 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.432 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728159.4323306, a48b4084-369d-432a-9f47-9378cdcc011f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.435 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Resumed (Lifecycle Event)
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.441 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.447 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance spawned successfully.
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.447 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.464 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config 5c870f25-6c33-4e95-b540-5a806454f556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.464 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deleting local config drive /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config because it was imported into RBD.
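Because this instance's storage is RBD-backed, the freshly built ISO is imported into the vms pool and the local copy deleted immediately, as the lines above record. The same two steps as a sketch, assuming the rbd CLI and the client id and conf used throughout this log:

    import os
    import subprocess

    def import_config_drive(local_iso, pool, image_name,
                            client_id="openstack",
                            conf="/etc/ceph/ceph.conf"):
        # Mirrors: rbd import --pool vms <iso> <uuid>_disk.config --image-format=2
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_iso, image_name,
             "--image-format=2", "--id", client_id, "--conf", conf],
            check=True,
        )
        os.unlink(local_iso)  # "Deleting local config drive ... imported into RBD"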
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.472 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.489 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.497 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.498 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.498 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.498 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.499 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.499 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:15:59 compute-0 kernel: tapd7b1b965-f3: entered promiscuous mode
Dec 03 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.5274] manager: (tapd7b1b965-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec 03 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00081|binding|INFO|Claiming lport d7b1b965-f304-40eb-9f34-c63af54da9f4 for this chassis.
Dec 03 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00082|binding|INFO|d7b1b965-f304-40eb-9f34-c63af54da9f4: Claiming fa:16:3e:57:b1:4a 10.100.0.3
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.531 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.535 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.540 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:b1:4a 10.100.0.3'], port_security=['fa:16:3e:57:b1:4a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c870f25-6c33-4e95-b540-5a806454f556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5875dd9a17274c38a2ae81fb3759558e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '286ce87f-1fc2-4f0d-bf8b-2c43a617c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6691e56-1a9f-42fd-b8af-9a3ce340219b, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d7b1b965-f304-40eb-9f34-c63af54da9f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.542 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d7b1b965-f304-40eb-9f34-c63af54da9f4 in datapath e0e44891-e46c-41a0-a083-a444c0d34e1c bound to our chassis
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.544 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0e44891-e46c-41a0-a083-a444c0d34e1c
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.560 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[bd18e317-d5cd-41d8-a71e-b37fe49abd8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.561 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape0e44891-e1 in ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.563 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape0e44891-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.563 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[45e67f50-3be6-4e35-a5ff-742d44d17b8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 systemd-udevd[445363]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.567 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.565 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3f630432-2c30-4c18-bcd3-c8f7476bcc12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00083|binding|INFO|Setting lport d7b1b965-f304-40eb-9f34-c63af54da9f4 ovn-installed in OVS
Dec 03 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00084|binding|INFO|Setting lport d7b1b965-f304-40eb-9f34-c63af54da9f4 up in Southbound
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.576 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.5784] device (tapd7b1b965-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.5848] device (tapd7b1b965-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.589 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[6b92c699-97cc-4efb-ab31-a366ed4d5e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 systemd-machined[138558]: New machine qemu-9-instance-00000009.
Dec 03 02:15:59 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.607 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d3225e89-6ef2-4066-90b3-bde95b2f68c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.615 351492 INFO nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 14.84 seconds to spawn the instance on the hypervisor.
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.616 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.644 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6256d7-54d1-4406-87b3-cbdca7f1b98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.648 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.659 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[05f61d24-6b23-447d-974c-f49a010638cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.6613] manager: (tape0e44891-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec 03 02:15:59 compute-0 systemd-udevd[445367]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.697 351492 INFO nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 16.03 seconds to build instance.
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.696 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0ede713e-8ebf-4496-a4fd-6751c79ff0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.700 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[60634159-b2fd-4d22-972c-2ff38f4e9b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.7228] device (tape0e44891-e0): carrier: link connected
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.724 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.731 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0fb949-90ac-43f1-9d7b-2402b5cd6e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 podman[158098]: time="2025-12-03T02:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.749 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[679625d6-6cd2-46f4-b23c-eafdcc37ebe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0e44891-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:3e:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699920, 'reachable_time': 28264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445398, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46278 "" "Go-http-client/1.1"
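
The podman entries here are libpod's REST API being exercised over its unix socket: the GET above lists every container, and a stats call follows a few lines below. A minimal sketch of the same listing in Python, assuming the requests-unixsocket package and podman's default rootful socket path (neither appears in the log itself):

import requests_unixsocket

# Assumed default rootful podman socket, percent-encoded as requests-unixsocket expects.
SOCKET = "%2Frun%2Fpodman%2Fpodman.sock"
session = requests_unixsocket.Session()

# Same endpoint and query string as the GET logged above.
resp = session.get(
    "http+unix://" + SOCKET + "/v4.9.3/libpod/containers/json"
    "?all=true&external=false&last=0&namespace=false&size=false&sync=false"
)
resp.raise_for_status()
print(len(resp.json()), "containers")
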
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.769 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[055b0be9-5b55-486c-b692-192322aeb779]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:3ef8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699920, 'tstamp': 699920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445399, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9569 "" "Go-http-client/1.1"
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.789 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[afe4d630-1954-4166-b6c5-adacf80f0dea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0e44891-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:3e:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699920, 'reachable_time': 28264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445400, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
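
The two privsep replies above are netlink RTM_NEWLINK dumps for the veth half tape0e44891-e1, fetched inside the OVN metadata namespace (note the 'target' field in the message header) by the agent's privileged helper, which wraps pyroute2. A rough root-only equivalent, with the namespace and interface names taken verbatim from the records above:

from pyroute2 import NetNS

# Names come straight from the log records above; run as root.
with NetNS("ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c") as ns:
    idx = ns.link_lookup(ifname="tape0e44891-e1")[0]
    link = ns.get_links(idx)[0]  # one RTM_NEWLINK message, as logged
    print(link.get_attr("IFLA_OPERSTATE"),
          link.get_attr("IFLA_ADDRESS"),
          link.get_attr("IFLA_STATS64")["rx_packets"])
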
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.844 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e60ed30b-9c79-4e95-a44d-50849059ac2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.933 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3094a525-413a-404d-829a-7f60209d56a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.934 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0e44891-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.934 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.935 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0e44891-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:59 compute-0 kernel: tape0e44891-e0: entered promiscuous mode
Dec 03 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.9392] manager: (tape0e44891-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.945 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.947 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0e44891-e0, col_values=(('external_ids', {'iface-id': 'c4f9e2ab-5c50-4335-91f7-b4ae67182674'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00085|binding|INFO|Releasing lport c4f9e2ab-5c50-4335-91f7-b4ae67182674 from this chassis (sb_readonly=0)
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.953 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e0e44891-e46c-41a0-a083-a444c0d34e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e0e44891-e46c-41a0-a083-a444c0d34e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
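
The ENOENT above is the expected first-launch case: the agent probes for an existing haproxy pidfile and treats a missing file as "no proxy running yet" rather than a failure. A minimal re-creation of that tolerant read (a sketch, not neutron's exact code):

def get_value_from_file(path, converter=None):
    """Return the (converted) file contents, or None if unreadable."""
    try:
        with open(path) as f:
            data = f.read().strip()
        return converter(data) if converter else data
    except (OSError, ValueError):
        # A missing pidfile simply means no proxy was started yet.
        return None
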
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.954 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ef25275c-65f2-4492-992d-3482eae26fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.955 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-e0e44891-e46c-41a0-a083-a444c0d34e1c
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/e0e44891-e46c-41a0-a083-a444c0d34e1c.pid.haproxy
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID e0e44891-e46c-41a0-a083-a444c0d34e1c
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.956 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'env', 'PROCESS_TAG=haproxy-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e0e44891-e46c-41a0-a083-a444c0d34e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
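
Stripped of the rootwrap indirection, the command line above does three things: enter the ovnmeta- namespace, tag the process, and hand haproxy the rendered config. A root-only sketch of the same invocation (haproxy daemonizes itself, per the "daemon" directive in the config above, so run() returns once the master has forked):

import subprocess

netns = "ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c"
cfg = "/var/lib/neutron/ovn-metadata-proxy/e0e44891-e46c-41a0-a083-a444c0d34e1c.conf"

subprocess.run(
    ["ip", "netns", "exec", netns,
     "env", "PROCESS_TAG=haproxy-" + netns.removeprefix("ovnmeta-"),
     "haproxy", "-f", cfg],
    check=True,
)
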
Dec 03 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.963 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:00 compute-0 ceph-mon[192821]: pgmap v1850: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.5 MiB/s wr, 107 op/s
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.259 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728160.258395, 5c870f25-6c33-4e95-b540-5a806454f556 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.259 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Started (Lifecycle Event)
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.283 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.290 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728160.2634172, 5c870f25-6c33-4e95-b540-5a806454f556 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.290 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Paused (Lifecycle Event)
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.319 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.324 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.345 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] During sync_power_state the instance has a pending task (spawning). Skip.
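
The integers in the sync message two lines up decode via nova's power-state constants: the database still holds 0 (NOSTATE) while libvirt already reports 3 (PAUSED), which is normal mid-spawn, hence the "Skip". For reference, the relevant values as defined in nova.compute.power_state:

# nova.compute.power_state constants behind the integers logged above.
NOSTATE = 0   # DB value: power state not yet recorded
RUNNING = 1
PAUSED = 3    # reported by libvirt while the guest is still being set up
SHUTDOWN = 4
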
Dec 03 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.453833723 +0000 UTC m=+0.090165162 container create 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 03 02:16:00 compute-0 systemd[1]: Started libpod-conmon-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e.scope.
Dec 03 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.411602088 +0000 UTC m=+0.047933547 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:16:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5295931454d2be4766609de4f9590642eff52873c1c45af103b232bf8f6acedc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.559893653 +0000 UTC m=+0.196225102 container init 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.56826858 +0000 UTC m=+0.204600009 container start 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 03 02:16:00 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : New worker (445492) forked
Dec 03 02:16:00 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : Loading success.
Dec 03 02:16:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Dec 03 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00086|binding|INFO|Releasing lport 450cbc12-7d6b-43b0-b43f-cc78dcc16b25 from this chassis (sb_readonly=0)
Dec 03 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00087|binding|INFO|Releasing lport c4f9e2ab-5c50-4335-91f7-b4ae67182674 from this chassis (sb_readonly=0)
Dec 03 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00088|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00089|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec 03 02:16:01 compute-0 nova_compute[351485]: 2025-12-03 02:16:01.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:02 compute-0 ceph-mon[192821]: pgmap v1851: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Dec 03 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.556 351492 DEBUG nova.network.neutron [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.952 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 929 KiB/s wr, 149 op/s
Dec 03 02:16:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:03 compute-0 nova_compute[351485]: 2025-12-03 02:16:03.548 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:04 compute-0 ceph-mon[192821]: pgmap v1852: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 929 KiB/s wr, 149 op/s
Dec 03 02:16:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 245 KiB/s wr, 154 op/s
Dec 03 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG nova.compute.manager [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG nova.compute.manager [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing instance network info cache due to event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG oslo_concurrency.lockutils [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG oslo_concurrency.lockutils [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.349 351492 DEBUG nova.network.neutron [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:16:06 compute-0 ceph-mon[192821]: pgmap v1853: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 245 KiB/s wr, 154 op/s
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.373 351492 DEBUG nova.network.neutron [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.374 351492 DEBUG nova.network.neutron [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
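
The network_info cache entry above is plain JSON, so pulling the addressing out of such a blob is mechanical. A short sketch against the exact shape logged here:

import json

def addresses(network_info_json):
    """Yield (fixed_ip, [floating_ips]) pairs from a nova network_info blob."""
    for vif in json.loads(network_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield ip["address"], [f["address"] for f in ip.get("floating_ips", [])]

# For the entry above this yields ("10.100.0.3", ["192.168.122.181"]).
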
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.403 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Processing event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.406 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] No waiting events found dispatching network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.406 351492 WARNING nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received unexpected event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 for instance with vm_state building and task_state spawning.
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.406 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.415 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.417 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728166.4138896, 5c870f25-6c33-4e95-b540-5a806454f556 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.418 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Resumed (Lifecycle Event)
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.428 351492 INFO nova.virt.libvirt.driver [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance spawned successfully.
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.428 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.443 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.468 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.482 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.485 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.494 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.503 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.504 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.506 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.516 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.577 351492 INFO nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 17.69 seconds to spawn the instance on the hypervisor.
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.577 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.668 351492 INFO nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 18.88 seconds to build instance.
Dec 03 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.689 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 29 KiB/s wr, 204 op/s
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.343 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.344 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.345 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.346 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.346 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.350 351492 INFO nova.compute.manager [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Terminating instance
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.354 351492 DEBUG nova.compute.manager [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:16:07 compute-0 kernel: tap5009f27c-5c (unregistering): left promiscuous mode
Dec 03 02:16:07 compute-0 NetworkManager[48912]: <info>  [1764728167.4446] device (tap5009f27c-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:16:07 compute-0 ovn_controller[89134]: 2025-12-03T02:16:07Z|00090|binding|INFO|Releasing lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e from this chassis (sb_readonly=0)
Dec 03 02:16:07 compute-0 ovn_controller[89134]: 2025-12-03T02:16:07Z|00091|binding|INFO|Setting lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e down in Southbound
Dec 03 02:16:07 compute-0 ovn_controller[89134]: 2025-12-03T02:16:07Z|00092|binding|INFO|Removing iface tap5009f27c-5c ovn-installed in OVS
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.467 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.469 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:ad:09 10.100.0.10'], port_security=['fa:16:3e:3a:ad:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07ce21e6-3627-467a-9b7e-d9045308576c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a1cf3657daa4d798d912ceaae049aa0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3e8f04e-3c5d-406e-b48c-aa69bd7ba1c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427d4c89-de71-4fff-872a-bb6406d77b1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=5009f27c-5ce3-46eb-b7aa-e82645a3097e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.471 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 5009f27c-5ce3-46eb-b7aa-e82645a3097e in datapath 9f9dd264-e73a-4200-ba74-0833c40bd14c unbound from our chassis
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.473 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f9dd264-e73a-4200-ba74-0833c40bd14c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.474 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[86b9ba90-a011-4ef9-b147-559db4b07bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.475 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c namespace which is not needed anymore
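
With the last VIF gone from datapath 9f9dd264, the agent tears the whole namespace down; the lines that follow show the haproxy master exiting and its container being reaped. The final netns removal, done by hand, would be roughly:

import subprocess

# Namespace name taken from the cleanup message above; run as root.
subprocess.run(
    ["ip", "netns", "delete", "ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c"],
    check=True,
)
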
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.505 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 03 02:16:07 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 10.988s CPU time.
Dec 03 02:16:07 compute-0 systemd-machined[138558]: Machine qemu-7-instance-00000007 terminated.
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.581 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.593 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.600 351492 INFO nova.virt.libvirt.driver [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance destroyed successfully.
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.601 351492 DEBUG nova.objects.instance [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lazy-loading 'resources' on Instance uuid 07ce21e6-3627-467a-9b7e-d9045308576c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : haproxy version is 2.8.14-c23fe91
Dec 03 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : path to executable is /usr/sbin/haproxy
Dec 03 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [WARNING]  (444943) : Exiting Master process...
Dec 03 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [ALERT]    (444943) : Current worker (444946) exited with code 143 (Terminated)
Dec 03 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [WARNING]  (444943) : All workers exited. Exiting... (0)
Dec 03 02:16:07 compute-0 systemd[1]: libpod-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b.scope: Deactivated successfully.
Dec 03 02:16:07 compute-0 podman[445529]: 2025-12-03 02:16:07.72274184 +0000 UTC m=+0.068545590 container died 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 03 02:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b-userdata-shm.mount: Deactivated successfully.
Dec 03 02:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c2115dbbdd79e6878ea3d1b5fd20b2e30c3ab979ab90b0f907915a9dad459d-merged.mount: Deactivated successfully.
Dec 03 02:16:07 compute-0 podman[445529]: 2025-12-03 02:16:07.811122521 +0000 UTC m=+0.156926271 container cleanup 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 02:16:07 compute-0 systemd[1]: libpod-conmon-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b.scope: Deactivated successfully.
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.840 351492 DEBUG nova.virt.libvirt.vif [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1673813976',display_name='tempest-ServersTestJSON-server-1673813976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1673813976',id=7,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJYX2+s+Cn7+6pt2DjGw9oFEuqJNIKKTlZXH+fYJLmbL39TCISRXMer1dBsYcpnaM6SERWPVMBKkG2FwLQyhKQV9uLnyTX7LXwX8AMU3L/hKCWN57p10Cgl0YPkCXm4JFA==',key_name='tempest-keypair-555022383',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a1cf3657daa4d798d912ceaae049aa0',ramdisk_id='',reservation_id='r-cpufgz7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-263993337',owner_user_name='tempest-ServersTestJSON-263993337-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:15:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7f624afcf845f786397f8aa1bb2a63',uuid=07ce21e6-3627-467a-9b7e-d9045308576c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": 
"ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.841 351492 DEBUG nova.network.os_vif_util [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converting VIF {"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.842 351492 DEBUG nova.network.os_vif_util [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.842 351492 DEBUG os_vif [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
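Note: the "Unplugging vif" entry above is os-vif's public entry point. Nova converts its vif dict into the VIFOpenVSwitch object shown in the "Converted object" line, then hands it to os_vif.unplug() together with an InstanceInfo. A minimal sketch of that call with field values copied from this log (the instance name below is illustrative, not taken from the log):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, noop)

    my_vif = vif.VIFOpenVSwitch(
        id='5009f27c-5ce3-46eb-b7aa-e82645a3097e',
        address='fa:16:3e:3a:ad:09',
        vif_name='tap5009f27c-5c',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='9f9dd264-e73a-4200-ba74-0833c40bd14c'))

    inst = instance_info.InstanceInfo(
        uuid='07ce21e6-3627-467a-9b7e-d9045308576c',
        name='instance-00000007')  # illustrative name

    # Produces the "Unplugging vif ..." / "Successfully unplugged vif ..." pair.
    os_vif.unplug(my_vif, inst)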
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.845 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.845 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5009f27c-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
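Note: the DelPortCommand transaction above is ovsdbapp's IDL equivalent of "ovs-vsctl --if-exists del-port br-int tap5009f27c-5c". A standalone sketch of the same call; the connection endpoint is an assumption (the usual on-host OVSDB socket):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed endpoint for the local switch database.
    OVSDB = 'unix:/run/openvswitch/db.sock'

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Mirrors DelPortCommand(port=..., bridge='br-int', if_exists=True) from the log.
    api.del_port('tap5009f27c-5c', bridge='br-int', if_exists=True).execute(check_error=True)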
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.847 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.849 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.854 351492 INFO os_vif [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c')
Dec 03 02:16:07 compute-0 podman[445558]: 2025-12-03 02:16:07.970402787 +0000 UTC m=+0.085901911 container remove 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.981 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a5550bb8-bcea-4460-addd-2d8abd3e8b0d]: (4, ('Wed Dec  3 02:16:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c (7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b)\n7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b\nWed Dec  3 02:16:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c (7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b)\n7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
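Note: the privsep reply above carries the stdout of the agent's kill script, which stops and then deletes the per-network HAProxy sidecar (neutron-haproxy-ovnmeta-<network-uuid>), matching the podman "container remove" event just before it. The same teardown by hand, sketched with subprocess:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c'
    # Stop, then remove the metadata-proxy sidecar, as the kill script does.
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)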
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.984 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[dbac2887-cea6-4e63-8fcf-5179ac190cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.985 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f9dd264-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.987 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:07 compute-0 kernel: tap9f9dd264-e0: left promiscuous mode
Dec 03 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.996 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac73b6b-6ca0-43fe-975d-477b52005d09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.008 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5528bfa7-4dff-4e94-9972-2ed71674e4c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.008 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5d343ea7-5797-40e1-b62f-75884d85f3b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.041 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a20e9f-fd68-4b72-b4ca-fce5b6d9781e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698941, 'reachable_time': 31416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445594, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f9dd264\x2de73a\x2d4200\x2dba74\x2d0833c40bd14c.mount: Deactivated successfully.
Dec 03 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.049 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 03 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.049 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[c229aeeb-22a8-4601-a0d8-be70078dfc9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
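Note: with the sidecar gone and tap9f9dd264-e0 removed from OVS, the agent deletes the now-empty metadata namespace (remove_netns in neutron's privileged ip_lib, which drives pyroute2 under the hood); the matching run-netns mount unit deactivation appears just above. A minimal pyroute2 equivalent, assuming the namespace name from the log:

    from pyroute2 import netns

    # Requires root; unpins /run/netns/ovnmeta-<network-uuid> just as the agent did.
    netns.remove('ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c')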
Dec 03 02:16:08 compute-0 podman[445588]: 2025-12-03 02:16:08.145500881 +0000 UTC m=+0.109144039 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 03 02:16:08 compute-0 podman[445589]: 2025-12-03 02:16:08.154805475 +0000 UTC m=+0.127239161 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 02:16:08 compute-0 podman[445591]: 2025-12-03 02:16:08.163311845 +0000 UTC m=+0.128278590 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
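Note: the three health_status=healthy events above are podman's periodic healthchecks for ovn_metadata_agent, ceilometer_agent_compute and podman_exporter; each container runs the /openstack/healthcheck script mounted in via its config_data. Checking the same state on demand, sketched with subprocess (the inspect format path is an assumption for recent podman):

    import subprocess

    for name in ('ovn_metadata_agent', 'ceilometer_agent_compute', 'podman_exporter'):
        status = subprocess.run(
            ['podman', 'inspect', '--format', '{{.State.Health.Status}}', name],
            capture_output=True, text=True, check=True).stdout.strip()
        # Older podman releases expose this as .State.Healthcheck.Status instead.
        print(name, status)  # the log above shows all three as "healthy"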
Dec 03 02:16:08 compute-0 ceph-mon[192821]: pgmap v1854: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 29 KiB/s wr, 204 op/s
Dec 03 02:16:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.355 351492 DEBUG nova.compute.manager [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.356 351492 DEBUG nova.compute.manager [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing instance network info cache due to event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.356 351492 DEBUG oslo_concurrency.lockutils [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.357 351492 DEBUG oslo_concurrency.lockutils [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.358 351492 DEBUG nova.network.neutron [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
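Note: the Acquiring/Acquired pair above is oslo.concurrency's named-lock pattern: every path that rewrites an instance's network info cache first takes the refresh_cache-<instance-uuid> lock (this one is released at 02:16:12.171 further down). The same pattern in isolation; refresh_network_info_cache is a placeholder, not a real Nova helper:

    from oslo_concurrency import lockutils

    uuid = 'a48b4084-369d-432a-9f47-9378cdcc011f'

    def refresh_network_info_cache(instance_uuid):
        ...  # placeholder for the actual cache refresh against Neutron

    # One cache refresher per instance at a time, as in the log.
    with lockutils.lock('refresh_cache-' + uuid):
        refresh_network_info_cache(uuid)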
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.549 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.627 351492 INFO nova.virt.libvirt.driver [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deleting instance files /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c_del
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.628 351492 INFO nova.virt.libvirt.driver [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deletion of /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c_del complete
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.654 351492 DEBUG nova.network.neutron [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updated VIF entry in instance network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.654 351492 DEBUG nova.network.neutron [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.711 351492 DEBUG oslo_concurrency.lockutils [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.727 351492 INFO nova.compute.manager [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 1.37 seconds to destroy the instance on the hypervisor.
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.732 351492 DEBUG oslo.service.loopingcall [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.733 351492 DEBUG nova.compute.manager [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.733 351492 DEBUG nova.network.neutron [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:16:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 14 KiB/s wr, 173 op/s
Dec 03 02:16:10 compute-0 ceph-mon[192821]: pgmap v1855: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 14 KiB/s wr, 173 op/s
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.575 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-unplugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.576 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.577 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.578 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.580 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] No waiting events found dispatching network-vif-unplugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.581 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-unplugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.582 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.583 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.584 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.585 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.589 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] No waiting events found dispatching network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.590 351492 WARNING nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received unexpected event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e for instance with vm_state active and task_state deleting.
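Note: the WARNING above is the expected tail end of a delete: OVN re-announces the port as plugged while the instance is already in task_state deleting, and no waiter is registered for the event (see the "No waiting events found" line just before). To pull only these warnings out of the journal for auditing, python-systemd's Reader can filter on the syslog identifier; a sketch, assuming these container logs are tagged nova_compute as in this file:

    from systemd import journal

    j = journal.Reader()
    j.add_match(SYSLOG_IDENTIFIER='nova_compute')
    for entry in j:
        msg = entry['MESSAGE']
        if 'Received unexpected event' in msg:
            print(entry['__REALTIME_TIMESTAMP'], msg)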
Dec 03 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.946 351492 DEBUG nova.network.neutron [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 208 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 15 KiB/s wr, 215 op/s
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.098 351492 INFO nova.compute.manager [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 2.36 seconds to deallocate network for instance.
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.150 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.150 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.419 351492 DEBUG oslo_concurrency.processutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:16:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998216873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.897 351492 DEBUG oslo_concurrency.processutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
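Note: the disk usage probe above runs through oslo.concurrency's processutils, which logs the command line, the exit code and the wall time (0 in 0.478s here). The equivalent standalone call:

    from oslo_concurrency import processutils

    # Same probe nova-compute ran; returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code by default.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(out[:200])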
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.910 351492 DEBUG nova.compute.provider_tree [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.941 351492 DEBUG nova.scheduler.client.report [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
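Note: the inventory dict above is what bounds scheduling for this node; placement's capacity rule is (total - reserved) * allocation_ratio per resource class. Worked through for these numbers (a sketch of the arithmetic, not placement code):

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2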
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.970 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.996 351492 INFO nova.scheduler.client.report [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Deleted allocations for instance 07ce21e6-3627-467a-9b7e-d9045308576c
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.068 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.096 351492 DEBUG nova.compute.manager [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.097 351492 DEBUG nova.compute.manager [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing instance network info cache due to event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.097 351492 DEBUG oslo_concurrency.lockutils [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.097 351492 DEBUG oslo_concurrency.lockutils [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.098 351492 DEBUG nova.network.neutron [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.125 351492 DEBUG nova.network.neutron [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated VIF entry in instance network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.125 351492 DEBUG nova.network.neutron [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.171 351492 DEBUG oslo_concurrency.lockutils [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:12 compute-0 ceph-mon[192821]: pgmap v1856: 321 pgs: 321 active+clean; 208 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 15 KiB/s wr, 215 op/s
Dec 03 02:16:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1998216873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.703 351492 DEBUG nova.compute.manager [req-bc0a4e1e-17e7-4ce6-8594-358cdd016f6a req-1bb46a82-c45e-4be4-9daf-1587168e5168 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-deleted-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.849 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 196 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 15 KiB/s wr, 193 op/s
Dec 03 02:16:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.433 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.434 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.435 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.436 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.437 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.440 351492 INFO nova.compute.manager [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Terminating instance
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.443 351492 DEBUG nova.compute.manager [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:16:13 compute-0 kernel: tapd7b1b965-f3 (unregistering): left promiscuous mode
Dec 03 02:16:13 compute-0 NetworkManager[48912]: <info>  [1764728173.5232] device (tapd7b1b965-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:16:13 compute-0 ovn_controller[89134]: 2025-12-03T02:16:13Z|00093|binding|INFO|Releasing lport d7b1b965-f304-40eb-9f34-c63af54da9f4 from this chassis (sb_readonly=0)
Dec 03 02:16:13 compute-0 ovn_controller[89134]: 2025-12-03T02:16:13Z|00094|binding|INFO|Setting lport d7b1b965-f304-40eb-9f34-c63af54da9f4 down in Southbound
Dec 03 02:16:13 compute-0 ovn_controller[89134]: 2025-12-03T02:16:13Z|00095|binding|INFO|Removing iface tapd7b1b965-f3 ovn-installed in OVS
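Note: the three ovn_controller lines above are the chassis side of the port delete: the logical port is released from this chassis, marked down in the Southbound database, and the ovn-installed marker is dropped from the OVS interface. The corresponding Southbound state can be checked from the CLI; a subprocess sketch (run wherever ovn-sbctl can reach the SB DB):

    import subprocess

    lport = 'd7b1b965-f304-40eb-9f34-c63af54da9f4'
    # The up/chassis columns are exactly what the log lines above transition.
    out = subprocess.run(
        ['ovn-sbctl', '--columns=up,chassis', 'list', 'Port_Binding', lport],
        capture_output=True, text=True, check=True).stdout
    print(out)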
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.542 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.551 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:b1:4a 10.100.0.3'], port_security=['fa:16:3e:57:b1:4a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c870f25-6c33-4e95-b540-5a806454f556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5875dd9a17274c38a2ae81fb3759558e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '286ce87f-1fc2-4f0d-bf8b-2c43a617c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6691e56-1a9f-42fd-b8af-9a3ce340219b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d7b1b965-f304-40eb-9f34-c63af54da9f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.552 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d7b1b965-f304-40eb-9f34-c63af54da9f4 in datapath e0e44891-e46c-41a0-a083-a444c0d34e1c unbound from our chassis
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.554 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0e44891-e46c-41a0-a083-a444c0d34e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.555 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c5123a-aab9-42c9-a5c3-8e2319550794]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.556 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c namespace which is not needed anymore
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.567 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.570 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec 03 02:16:13 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 8.094s CPU time.
Dec 03 02:16:13 compute-0 systemd-machined[138558]: Machine qemu-9-instance-00000009 terminated.
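Note: the scope and machine lines above are the systemd view of "Start destroying the instance on the hypervisor": Nova's libvirt driver destroys the domain, qemu exits, and systemd-machined unregisters qemu-9-instance-00000009. The same hard power-off through libvirt-python (domain name taken from the machined line):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000009')
    dom.destroy()  # hard stop; machined then logs "Machine ... terminated"
    conn.close()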
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.675 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.681 351492 INFO nova.virt.libvirt.driver [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance destroyed successfully.
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.682 351492 DEBUG nova.objects.instance [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lazy-loading 'resources' on Instance uuid 5c870f25-6c33-4e95-b540-5a806454f556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.705 351492 DEBUG nova.virt.libvirt.vif [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1318824371',display_name='tempest-ServersTestManualDisk-server-1318824371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1318824371',id=9,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjjprZxgO/4fBzfH66ApAPdvyVvzXxf8Ff5aorWRcZSUbk0SJJUQELjud9zhnFrHG5MNyoaXEfhhqd7MMh1lMDbphtAOFjo2kbDR4EPXiA+56V0JD9bhhKqPo/y7SQ3BA==',key_name='tempest-keypair-1645493537',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:16:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5875dd9a17274c38a2ae81fb3759558e',ramdisk_id='',reservation_id='r-a0h400yy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-632797169',owner_user_name='tempest-ServersTestManualDisk-632797169-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:16:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dc5f09973d5430fb9d8106a1a0a2479',uuid=5c870f25-6c33-4e95-b540-5a806454f556,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.706 351492 DEBUG nova.network.os_vif_util [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converting VIF {"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.707 351492 DEBUG nova.network.os_vif_util [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.708 351492 DEBUG os_vif [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.710 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7b1b965-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.714 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.716 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.719 351492 INFO os_vif [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3')
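The unplug above is driven through ovsdbapp: os-vif's ovs plugin opens an OVSDB IDL session and commits the DelPortCommand logged at transaction.py:89. A minimal standalone sketch of the same port removal, assuming a local ovsdb-server socket (the connection string is illustrative; the port and bridge names are the ones from this log):

    # Delete an OVS port the way os-vif's ovs plugin does, via
    # ovsdbapp's Open_vSwitch schema API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumption: local socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent of DelPortCommand(port=..., bridge='br-int', if_exists=True)
    api.del_port('tapd7b1b965-f3', bridge='br-int', if_exists=True).execute(
        check_error=True)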
Dec 03 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : haproxy version is 2.8.14-c23fe91
Dec 03 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : path to executable is /usr/sbin/haproxy
Dec 03 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [WARNING]  (445490) : Exiting Master process...
Dec 03 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [WARNING]  (445490) : Exiting Master process...
Dec 03 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [ALERT]    (445490) : Current worker (445492) exited with code 143 (Terminated)
Dec 03 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [WARNING]  (445490) : All workers exited. Exiting... (0)
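The ALERT above is benign: exit code 143 follows the usual 128+N convention for death by signal, and 143 - 128 = 15 = SIGTERM, i.e. the worker was terminated deliberately as the metadata proxy for the deleted network was torn down. A one-liner to decode such codes:

    # Decode a 128+N process exit status (143 -> SIGTERM).
    import signal
    code = 143
    if code > 128:
        print('killed by', signal.Signals(code - 128).name)  # SIGTERM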
Dec 03 02:16:13 compute-0 systemd[1]: libpod-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e.scope: Deactivated successfully.
Dec 03 02:16:13 compute-0 podman[445693]: 2025-12-03 02:16:13.751508862 +0000 UTC m=+0.073720617 container died 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e-userdata-shm.mount: Deactivated successfully.
Dec 03 02:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5295931454d2be4766609de4f9590642eff52873c1c45af103b232bf8f6acedc-merged.mount: Deactivated successfully.
Dec 03 02:16:13 compute-0 podman[445693]: 2025-12-03 02:16:13.798766789 +0000 UTC m=+0.120978544 container cleanup 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 02:16:13 compute-0 systemd[1]: libpod-conmon-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e.scope: Deactivated successfully.
Dec 03 02:16:13 compute-0 podman[445747]: 2025-12-03 02:16:13.90942788 +0000 UTC m=+0.080638943 container remove 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.931 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ea9797-d70b-4274-89c2-99e046fd2c6d]: (4, ('Wed Dec  3 02:16:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c (51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e)\n51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e\nWed Dec  3 02:16:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c (51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e)\n51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.934 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[04b02f8c-d068-4115-acf6-8379634c30bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
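The privsep reply two lines up carries the stdout of the teardown helper the OVN metadata agent runs with elevated privileges; stripped of the wrapping, it amounts to two podman calls. A sketch (container name taken from this log; the helper script itself belongs to the agent, this is only an illustration):

    # Stop and delete the per-network haproxy container.
    import subprocess
    name = 'neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)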
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.936 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0e44891-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.939 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 kernel: tape0e44891-e0: left promiscuous mode
Dec 03 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.958 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.960 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[06de364f-4693-4c05-94df-04d115168e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.977 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[098f7c3f-fb1f-474e-b960-4881ce8e254d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.979 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9a7e7b-69e8-4cdd-b637-7137e4116e9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.998 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae56b59-efe0-4aae-8823-b0466958ba54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699911, 'reachable_time': 17141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445769, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
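That large reply is a raw pyroute2 RTM_NEWLINK message for lo inside the ovnmeta namespace, serialized back through privsep. Roughly the kind of call that produces it, sketched with pyroute2 directly (this only works while the namespace still exists):

    # List links inside the ovnmeta namespace, as the agent's
    # privileged helper does before tearing it down.
    from pyroute2 import NetNS
    with NetNS('ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])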
Dec 03 02:16:14 compute-0 systemd[1]: run-netns-ovnmeta\x2de0e44891\x2de46c\x2d41a0\x2da083\x2da444c0d34e1c.mount: Deactivated successfully.
Dec 03 02:16:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:14.004 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 03 02:16:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:14.005 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd2ba20-24ed-457f-a8e8-d035c98dd6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
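remove_netns at ip_lib.py:607 is a thin privileged wrapper over pyroute2, the library equivalent of `ip netns delete`. Sketched (needs the same privileges the privsep daemon holds):

    # Delete the per-network metadata namespace.
    from pyroute2 import netns
    netns.remove('ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c')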
Dec 03 02:16:14 compute-0 podman[445760]: 2025-12-03 02:16:14.068714096 +0000 UTC m=+0.096047278 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 02:16:14 compute-0 ceph-mon[192821]: pgmap v1857: 321 pgs: 321 active+clean; 196 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 15 KiB/s wr, 193 op/s
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.296 351492 DEBUG nova.compute.manager [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-unplugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.297 351492 DEBUG oslo_concurrency.lockutils [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.298 351492 DEBUG oslo_concurrency.lockutils [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.299 351492 DEBUG oslo_concurrency.lockutils [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.299 351492 DEBUG nova.compute.manager [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] No waiting events found dispatching network-vif-unplugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.300 351492 DEBUG nova.compute.manager [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-unplugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
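Events like network-vif-unplugged arrive because Neutron posts them to Nova's os-server-external-events API when the port's binding changes. A hedged sketch of that call (endpoint and token handling are illustrative; in a real deployment Neutron's Nova notifier goes through a keystoneauth session):

    # Deliver a VIF event to Nova's external-events API.
    import requests
    body = {'events': [{
        'name': 'network-vif-unplugged',
        'server_uuid': '5c870f25-6c33-4e95-b540-5a806454f556',
        'tag': 'd7b1b965-f304-40eb-9f34-c63af54da9f4',
    }]}
    requests.post('http://nova-api:8774/v2.1/os-server-external-events',
                  json=body, headers={'X-Auth-Token': '...'})  # token elided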
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.408 351492 DEBUG nova.network.neutron [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updated VIF entry in instance network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.409 351492 DEBUG nova.network.neutron [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.443 351492 DEBUG oslo_concurrency.lockutils [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.511 351492 INFO nova.virt.libvirt.driver [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deleting instance files /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556_del
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.512 351492 INFO nova.virt.libvirt.driver [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deletion of /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556_del complete
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.598 351492 INFO nova.compute.manager [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 1.15 seconds to destroy the instance on the hypervisor.
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.599 351492 DEBUG oslo.service.loopingcall [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.600 351492 DEBUG nova.compute.manager [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.601 351492 DEBUG nova.network.neutron [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.617 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.619 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.619 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 188 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 KiB/s wr, 153 op/s
Dec 03 02:16:15 compute-0 ovn_controller[89134]: 2025-12-03T02:16:15Z|00096|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:16:15 compute-0 ovn_controller[89134]: 2025-12-03T02:16:15Z|00097|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
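ovn-controller releases a logical port by clearing its chassis column in the southbound Port_Binding table. The binding being released can be inspected through ovn-sbctl's generic database commands (lport UUID taken from the log line above):

    # Show the southbound Port_Binding row for the released lport.
    import subprocess
    print(subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=c8314dfe-5b76-4819-9b3e-1cb76a272253'],
        capture_output=True, text=True).stdout)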
Dec 03 02:16:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:16:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444078930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
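The resource audit sizes the RBD-backed disk pool by shelling out to exactly the command logged above and reading the JSON totals. A self-contained sketch (credentials and conf path match the log; field names follow the long-stable `ceph df` JSON layout):

    # Probe cluster capacity the way the audit does.
    import json, subprocess
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 1024**3)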
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.250 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/444078930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.350 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.351 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.360 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.361 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.883 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.884 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3676MB free_disk=59.92551803588867GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.885 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.885 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.976 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4f50e501-f565-4e1f-aa02-df921702eff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance a48b4084-369d-432a-9f47-9378cdcc011f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 5c870f25-6c33-4e95-b540-5a806454f556 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.978 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.978 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.064 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:16 compute-0 ceph-mon[192821]: pgmap v1858: 321 pgs: 321 active+clean; 188 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 KiB/s wr, 153 op/s
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.476 351492 DEBUG nova.network.neutron [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.495 351492 INFO nova.compute.manager [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 1.89 seconds to deallocate network for instance.
Dec 03 02:16:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:16:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578666258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.565 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.572 351492 DEBUG nova.compute.manager [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.574 351492 DEBUG oslo_concurrency.lockutils [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.575 351492 DEBUG oslo_concurrency.lockutils [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.576 351492 DEBUG oslo_concurrency.lockutils [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.577 351492 DEBUG nova.compute.manager [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] No waiting events found dispatching network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.578 351492 WARNING nova.compute.manager [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received unexpected event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 for instance with vm_state active and task_state deleting.
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.585 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.601 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.633 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
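Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, with min_unit, step_size, and max_unit constraining individual allocations. Worked through for the values above:

    # Effective capacity implied by the logged inventory.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2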
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.675 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.676 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.676 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.777 351492 DEBUG oslo_concurrency.processutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.3 KiB/s wr, 179 op/s
Dec 03 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.154 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:16:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085630615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/578666258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4085630615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.304 351492 DEBUG oslo_concurrency.processutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.316 351492 DEBUG nova.compute.provider_tree [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.342 351492 DEBUG nova.scheduler.client.report [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.380 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.413 351492 INFO nova.scheduler.client.report [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Deleted allocations for instance 5c870f25-6c33-4e95-b540-5a806454f556
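"Deleted allocations" corresponds to a single DELETE against the placement API, keyed by the instance UUID as the consumer. Sketched with requests (placement URL and token are illustrative; nova itself goes through its scheduler report client):

    # Drop the instance's placement allocations.
    import requests
    uuid = '5c870f25-6c33-4e95-b540-5a806454f556'
    requests.delete(f'http://placement:8778/allocations/{uuid}',
                    headers={'X-Auth-Token': '...'})  # token elided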
Dec 03 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.499 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:18 compute-0 ceph-mon[192821]: pgmap v1859: 321 pgs: 321 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.3 KiB/s wr, 179 op/s
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.570 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.677 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.678 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.679 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.714 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.769 351492 DEBUG nova.compute.manager [req-4b9ae855-b20e-437d-a2c2-31b7f0ea226d req-131e62cc-9819-4296-a7c0-ab975f3c47a9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-deleted-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:18 compute-0 podman[445852]: 2025-12-03 02:16:18.87808972 +0000 UTC m=+0.094743452 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:16:18 compute-0 podman[445853]: 2025-12-03 02:16:18.88622462 +0000 UTC m=+0.120609413 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, config_id=edpm, name=ubi9)
Dec 03 02:16:18 compute-0 podman[445864]: 2025-12-03 02:16:18.891985753 +0000 UTC m=+0.112681719 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:16:18 compute-0 podman[445851]: 2025-12-03 02:16:18.905190107 +0000 UTC m=+0.146950539 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:16:18 compute-0 podman[445850]: 2025-12-03 02:16:18.934884347 +0000 UTC m=+0.182881736 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
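[annotation] The three podman `health_status` events above embed each container's full kolla config_data, but for monitoring only the `name=` and `health_status=` fields usually matter. A minimal extraction sketch, assuming this journal has been exported to a plain-text file (the name `journal.log` is hypothetical):

```python
import re

# Pull container name and health status out of podman "health_status"
# journal events like the three lines above.
EVENT = re.compile(
    r"container health_status .*?"
    r"name=(?P<name>[^,]+), health_status=(?P<status>[^,]+),"
)

with open("journal.log") as fh:  # assumed export of this journal
    for line in fh:
        m = EVENT.search(line)
        if m:
            print(m.group("name"), "->", m.group("status"))
```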
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.960 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.961 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.961 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.961 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:16:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec 03 02:16:19 compute-0 ovn_controller[89134]: 2025-12-03T02:16:19Z|00098|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:16:19 compute-0 ovn_controller[89134]: 2025-12-03T02:16:19Z|00099|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec 03 02:16:19 compute-0 nova_compute[351485]: 2025-12-03 02:16:19.453 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:20 compute-0 sudo[445951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:20 compute-0 sudo[445951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:20 compute-0 sudo[445951]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:20 compute-0 ceph-mon[192821]: pgmap v1860: 321 pgs: 321 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec 03 02:16:20 compute-0 sudo[445976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:16:20 compute-0 sudo[445976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:20 compute-0 sudo[445976]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:20 compute-0 sudo[446001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:20 compute-0 sudo[446001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:20 compute-0 sudo[446001]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:20 compute-0 sudo[446026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 02:16:20 compute-0 sudo[446026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec 03 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.049 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
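[annotation] The `network_info` payload in the DEBUG line above is plain JSON once the surrounding log text is stripped. A short sketch of walking it for fixed and floating IPs, using a trimmed copy of the same payload:

```python
import json

# Trimmed copy of the network_info payload logged above; only the
# fields walked below are kept.
raw_payload = """[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
    "ips": [{"address": "10.100.0.3",
      "floating_ips": [{"address": "192.168.122.181"}]}]}]}}]"""

for vif in json.loads(raw_payload):
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("  floating:", fip["address"])
```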
Dec 03 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
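[annotation] The arc from 02:16:18.960 to 02:16:21.112 is nova's `_heal_instance_info_cache` periodic task: acquire the per-instance `refresh_cache-<uuid>` lock, force-refresh network info from Neutron, write the cache, release. A sketch of that locking pattern with oslo.concurrency; `refresh_from_neutron` is a hypothetical stand-in for the real Neutron call, not nova's actual method:

```python
from oslo_concurrency import lockutils

def heal_info_cache(instance_uuid, refresh_from_neutron):
    # Same pattern as the DEBUG lines above: hold the named lock for the
    # whole refresh so concurrent tasks cannot race on the info_cache.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        refresh_from_neutron(instance_uuid)
```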
Dec 03 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:21 compute-0 podman[446121]: 2025-12-03 02:16:21.651352883 +0000 UTC m=+0.122950569 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:16:21 compute-0 podman[446121]: 2025-12-03 02:16:21.776389501 +0000 UTC m=+0.247987187 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.005 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:22 compute-0 ceph-mon[192821]: pgmap v1861: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec 03 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.590 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.595 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728167.59362, 07ce21e6-3627-467a-9b7e-d9045308576c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.595 351492 INFO nova.compute.manager [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Stopped (Lifecycle Event)
Dec 03 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.622 351492 DEBUG nova.compute.manager [None req-9a9120ac-29b9-4da1-b555-95e995a3bf85 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.7 KiB/s wr, 74 op/s
Dec 03 02:16:23 compute-0 sudo[446026]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:16:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:16:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:23 compute-0 sudo[446272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:23 compute-0 sudo[446272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:23 compute-0 sudo[446272]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:23 compute-0 sudo[446297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:16:23 compute-0 sudo[446297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:23 compute-0 sudo[446297]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.368 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:23 compute-0 sudo[446322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:23 compute-0 sudo[446322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:23 compute-0 sudo[446322]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:23 compute-0 sudo[446347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:16:23 compute-0 sudo[446347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.573 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.718 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:23 compute-0 ovn_controller[89134]: 2025-12-03T02:16:23Z|00100|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:16:23 compute-0 ovn_controller[89134]: 2025-12-03T02:16:23Z|00101|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec 03 02:16:24 compute-0 ceph-mon[192821]: pgmap v1862: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.7 KiB/s wr, 74 op/s
Dec 03 02:16:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:24 compute-0 nova_compute[351485]: 2025-12-03 02:16:24.081 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:24 compute-0 sudo[446347]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev de133ea7-cfcb-4226-9d40-d42e848e99ec does not exist
Dec 03 02:16:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4ea57398-5710-469b-a3f8-e5e16c8088ed does not exist
Dec 03 02:16:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e76a8cd6-5974-4bbf-a12e-2de865ecd505 does not exist
Dec 03 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:16:24 compute-0 sudo[446404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:24 compute-0 sudo[446404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:24 compute-0 sudo[446404]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:24 compute-0 sudo[446429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:16:24 compute-0 sudo[446429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:24 compute-0 sudo[446429]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:24 compute-0 sudo[446454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:24 compute-0 sudo[446454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:24 compute-0 sudo[446454]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:24 compute-0 sudo[446479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:16:24 compute-0 sudo[446479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 481 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Dec 03 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.421412649 +0000 UTC m=+0.074077797 container create c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:16:25 compute-0 systemd[1]: Started libpod-conmon-c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd.scope.
Dec 03 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.399223481 +0000 UTC m=+0.051888609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:16:25 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.554194546 +0000 UTC m=+0.206859674 container init c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.56284126 +0000 UTC m=+0.215506368 container start c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.567510793 +0000 UTC m=+0.220175901 container attach c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:16:25 compute-0 practical_burnell[446559]: 167 167
Dec 03 02:16:25 compute-0 systemd[1]: libpod-c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd.scope: Deactivated successfully.
Dec 03 02:16:25 compute-0 podman[446564]: 2025-12-03 02:16:25.637144303 +0000 UTC m=+0.044993734 container died c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-686bd4bf68dea2958674db1033e71c8c987c6ea1ebf7a296918b08a53d0108e6-merged.mount: Deactivated successfully.
Dec 03 02:16:25 compute-0 podman[446564]: 2025-12-03 02:16:25.699929009 +0000 UTC m=+0.107778370 container remove c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 02:16:25 compute-0 systemd[1]: libpod-conmon-c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd.scope: Deactivated successfully.
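[annotation] Each short-lived cephadm helper container above follows the same podman lifecycle: image pull, create, init, start, attach, exit (`libpod-*.scope` deactivated), died, remove, then teardown of the `libpod-conmon-*.scope`. The same events can be watched live; a sketch, with JSON field names (`Type`, `Status`, `Name`) as emitted by recent podman releases:

```python
import json
import subprocess

# Stream podman lifecycle events (create/init/start/died/remove) as JSON.
proc = subprocess.Popen(
    ["podman", "events", "--format", "{{json .}}"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    if ev.get("Type") == "container":
        print(ev.get("Status"), ev.get("Name"))
```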
Dec 03 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:25.92898582 +0000 UTC m=+0.043549933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:16:26 compute-0 ceph-mon[192821]: pgmap v1863: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 481 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Dec 03 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.266261382 +0000 UTC m=+0.380825475 container create c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:16:26 compute-0 systemd[1]: Started libpod-conmon-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope.
Dec 03 02:16:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.44571473 +0000 UTC m=+0.560278803 container init c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.47402064 +0000 UTC m=+0.588584743 container start c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.480746251 +0000 UTC m=+0.595310374 container attach c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 02:16:26 compute-0 nova_compute[351485]: 2025-12-03 02:16:26.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Dec 03 02:16:27 compute-0 upbeat_wozniak[446601]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:16:27 compute-0 upbeat_wozniak[446601]: --> relative data size: 1.0
Dec 03 02:16:27 compute-0 upbeat_wozniak[446601]: --> All data devices are unavailable
Dec 03 02:16:27 compute-0 systemd[1]: libpod-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope: Deactivated successfully.
Dec 03 02:16:27 compute-0 systemd[1]: libpod-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope: Consumed 1.102s CPU time.
Dec 03 02:16:27 compute-0 podman[446631]: 2025-12-03 02:16:27.751075422 +0000 UTC m=+0.045790276 container died c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f-merged.mount: Deactivated successfully.
Dec 03 02:16:27 compute-0 podman[446631]: 2025-12-03 02:16:27.846239455 +0000 UTC m=+0.140954269 container remove c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:16:27 compute-0 systemd[1]: libpod-conmon-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope: Deactivated successfully.
Dec 03 02:16:27 compute-0 sudo[446479]: pam_unix(sudo:session): session closed for user root
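[annotation] The `lvm batch` run above ends with `All data devices are unavailable`, which ceph-volume reports when every candidate device is rejected (typically because the LVs are already prepared). Consistent with that, cephadm follows up with the `ceph-volume lvm list --format json` inventory visible at 02:16:28 below. A sketch of that inventory step, assuming `ceph-volume` is reachable on the host (here it actually runs inside the cephadm-launched container) and the usual JSON layout keyed by OSD id:

```python
import json
import subprocess

# List existing LVM-backed OSDs, as the follow-up command below does.
out = subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for osd_id, devices in json.loads(out).items():
    for dev in devices:
        print(f"osd.{osd_id}: {dev.get('lv_path')}")
```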
Dec 03 02:16:28 compute-0 sudo[446644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:28 compute-0 sudo[446644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:28 compute-0 sudo[446644]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:28 compute-0 sudo[446669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:16:28 compute-0 sudo[446669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:28 compute-0 sudo[446669]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:28 compute-0 sudo[446694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:28 compute-0 sudo[446694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:28 compute-0 sudo[446694]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:28 compute-0 ceph-mon[192821]: pgmap v1864: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Dec 03 02:16:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:28 compute-0 sudo[446719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.287479) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188287521, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1274, "num_deletes": 256, "total_data_size": 1853510, "memory_usage": 1878144, "flush_reason": "Manual Compaction"}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec 03 02:16:28 compute-0 sudo[446719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188298897, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1824226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36818, "largest_seqno": 38091, "table_properties": {"data_size": 1818182, "index_size": 3311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12865, "raw_average_key_size": 19, "raw_value_size": 1805901, "raw_average_value_size": 2752, "num_data_blocks": 148, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728068, "oldest_key_time": 1764728068, "file_creation_time": 1764728188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 11468 microseconds, and 5195 cpu microseconds.
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.298949) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1824226 bytes OK
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.298966) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.300786) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.300801) EVENT_LOG_v1 {"time_micros": 1764728188300796, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.300817) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1847720, prev total WAL file size 1847720, number of live WAL files 2.
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.301846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1781KB)], [83(8712KB)]
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188301951, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10746005, "oldest_snapshot_seqno": -1}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5808 keys, 10638446 bytes, temperature: kUnknown
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188379681, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10638446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10597472, "index_size": 25376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147051, "raw_average_key_size": 25, "raw_value_size": 10490322, "raw_average_value_size": 1806, "num_data_blocks": 1046, "num_entries": 5808, "num_filter_entries": 5808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.379902) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10638446 bytes
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.381291) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.1 rd, 136.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(11.7) write-amplify(5.8) OK, records in: 6336, records dropped: 528 output_compression: NoCompression
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.381307) EVENT_LOG_v1 {"time_micros": 1764728188381299, "job": 48, "event": "compaction_finished", "compaction_time_micros": 77808, "compaction_time_cpu_micros": 26225, "output_level": 6, "num_output_files": 1, "total_output_size": 10638446, "num_input_records": 6336, "num_output_records": 5808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188381794, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188383519, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.301558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
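[annotation] ceph-mon's embedded rocksdb logs its structured records as `EVENT_LOG_v1 {json}` lines (the flush_started, table_file_creation, and compaction_finished events above). They parse cleanly once the journald/rocksdb prefix is dropped; a small sketch, using a trimmed sample from the compaction above:

```python
import json

def rocksdb_events(lines):
    # Yield the JSON object from every "EVENT_LOG_v1 {...}" journal line.
    marker = "EVENT_LOG_v1 "
    for line in lines:
        idx = line.find(marker)
        if idx != -1:
            yield json.loads(line[idx + len(marker):])

sample = ('Dec 03 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 '
          '{"time_micros": 1764728188381299, "job": 48, '
          '"event": "compaction_finished"}')
for ev in rocksdb_events([sample]):
    print(ev["job"], ev["event"])
```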
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:16:28
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.log', 'images', '.mgr', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec 03 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.575 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.677 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728173.6760063, 5c870f25-6c33-4e95-b540-5a806454f556 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.677 351492 INFO nova.compute.manager [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Stopped (Lifecycle Event)
Dec 03 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.705 351492 DEBUG nova.compute.manager [None req-f38ade89-d080-4331-813f-bc37ef2c9be0 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.720 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.812015288 +0000 UTC m=+0.057183518 container create d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.78875277 +0000 UTC m=+0.033921040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:16:28 compute-0 systemd[1]: Started libpod-conmon-d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c.scope.
Dec 03 02:16:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.968701212 +0000 UTC m=+0.213869442 container init d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.993257546 +0000 UTC m=+0.238425756 container start d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.997316261 +0000 UTC m=+0.242484471 container attach d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:16:28 compute-0 elastic_lalande[446797]: 167 167
Dec 03 02:16:29 compute-0 systemd[1]: libpod-d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c.scope: Deactivated successfully.
Dec 03 02:16:29 compute-0 podman[446780]: 2025-12-03 02:16:29.002629872 +0000 UTC m=+0.247798122 container died d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-073abeb732d6049d7e54abf61dddf4a852f122ff213b0cc0245be9ffb445336d-merged.mount: Deactivated successfully.
Dec 03 02:16:29 compute-0 podman[446780]: 2025-12-03 02:16:29.081486403 +0000 UTC m=+0.326654643 container remove d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:16:29 compute-0 systemd[1]: libpod-conmon-d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c.scope: Deactivated successfully.
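[annotation] The elastic_lalande container above lives for well under a second (create → start → attach → died → remove) and prints only "167 167". This is cephadm's one-off probe pattern: run a throwaway command inside the ceph image and read its stdout; the pair looks like the uid/gid of the ceph user. A minimal sketch of the same kind of probe, assuming podman is on PATH and that the probe target is /var/lib/ceph (the stat target is an assumption, not confirmed by the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a throwaway container, capture its one-line stdout, and let podman
    # remove it afterwards -- mirroring the create/start/attach/died/remove
    # sequence journald records above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # assumed probe target
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = out.split()  # e.g. "167 167" for the ceph user/group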
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.322624345 +0000 UTC m=+0.073742457 container create 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.300719275 +0000 UTC m=+0.051837407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:16:29 compute-0 systemd[1]: Started libpod-conmon-64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa.scope.
Dec 03 02:16:29 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.43094566 +0000 UTC m=+0.182063842 container init 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.447582321 +0000 UTC m=+0.198700433 container start 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.451271085 +0000 UTC m=+0.202389287 container attach 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:16:29 compute-0 podman[158098]: time="2025-12-03T02:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46609 "" "Go-http-client/1.1"
Dec 03 02:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9529 "" "Go-http-client/1.1"
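[annotation] The podman[158098] lines are the podman system service answering libpod REST calls: a collector polling containers/json and containers/stats over the API socket. A sketch of issuing the same GET with only the Python standard library; the socket path /run/podman/podman.sock is an assumption about this host's configuration:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects to a UNIX domain socket, not TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    print(conn.getresponse().read()[:200])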
Dec 03 02:16:30 compute-0 ovn_controller[89134]: 2025-12-03T02:16:30Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:b3:fa 10.100.0.3
Dec 03 02:16:30 compute-0 ovn_controller[89134]: 2025-12-03T02:16:30Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:b3:fa 10.100.0.3
Dec 03 02:16:30 compute-0 ceph-mon[192821]: pgmap v1865: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:16:30 compute-0 frosty_wu[446836]: {
Dec 03 02:16:30 compute-0 frosty_wu[446836]:     "0": [
Dec 03 02:16:30 compute-0 frosty_wu[446836]:         {
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "devices": [
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "/dev/loop3"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             ],
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_name": "ceph_lv0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_size": "21470642176",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "name": "ceph_lv0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "tags": {
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cluster_name": "ceph",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.crush_device_class": "",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.encrypted": "0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osd_id": "0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.type": "block",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.vdo": "0"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             },
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "type": "block",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "vg_name": "ceph_vg0"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:         }
Dec 03 02:16:30 compute-0 frosty_wu[446836]:     ],
Dec 03 02:16:30 compute-0 frosty_wu[446836]:     "1": [
Dec 03 02:16:30 compute-0 frosty_wu[446836]:         {
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "devices": [
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "/dev/loop4"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             ],
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_name": "ceph_lv1",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_size": "21470642176",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "name": "ceph_lv1",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "tags": {
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cluster_name": "ceph",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.crush_device_class": "",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.encrypted": "0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osd_id": "1",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.type": "block",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.vdo": "0"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             },
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "type": "block",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "vg_name": "ceph_vg1"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:         }
Dec 03 02:16:30 compute-0 frosty_wu[446836]:     ],
Dec 03 02:16:30 compute-0 frosty_wu[446836]:     "2": [
Dec 03 02:16:30 compute-0 frosty_wu[446836]:         {
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "devices": [
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "/dev/loop5"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             ],
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_name": "ceph_lv2",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_size": "21470642176",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "name": "ceph_lv2",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "tags": {
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.cluster_name": "ceph",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.crush_device_class": "",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.encrypted": "0",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osd_id": "2",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.type": "block",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:                 "ceph.vdo": "0"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             },
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "type": "block",
Dec 03 02:16:30 compute-0 frosty_wu[446836]:             "vg_name": "ceph_vg2"
Dec 03 02:16:30 compute-0 frosty_wu[446836]:         }
Dec 03 02:16:30 compute-0 frosty_wu[446836]:     ]
Dec 03 02:16:30 compute-0 frosty_wu[446836]: }
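[annotation] The frosty_wu block above is the JSON emitted by ceph-volume lvm list --format json: a map of OSD id → logical volumes, with the ceph.* LV tags repeated both as the raw lv_tags string and as the parsed "tags" object. A short sketch of walking it, assuming the JSON has been captured to a file (lvm_list.json is a hypothetical name):

    import json

    with open("lvm_list.json") as fh:  # hypothetical capture of the output above
        osd_map = json.load(fh)

    for osd_id, lvs in sorted(osd_map.items()):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = int(lv["lv_size"]) / 2**30  # lv_size is a byte-count string
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"{size_gib:.1f} GiB fsid={tags['ceph.osd_fsid']}")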
Dec 03 02:16:30 compute-0 systemd[1]: libpod-64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa.scope: Deactivated successfully.
Dec 03 02:16:30 compute-0 podman[446819]: 2025-12-03 02:16:30.345403693 +0000 UTC m=+1.096521815 container died 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f-merged.mount: Deactivated successfully.
Dec 03 02:16:30 compute-0 podman[446819]: 2025-12-03 02:16:30.422851904 +0000 UTC m=+1.173970016 container remove 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 02:16:30 compute-0 systemd[1]: libpod-conmon-64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa.scope: Deactivated successfully.
Dec 03 02:16:30 compute-0 sudo[446719]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:30 compute-0 sudo[446855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:30 compute-0 sudo[446855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:30 compute-0 sudo[446855]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:30 compute-0 sudo[446880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:16:30 compute-0 sudo[446880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:30 compute-0 sudo[446880]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:30 compute-0 sudo[446905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:30 compute-0 sudo[446905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:30 compute-0 sudo[446905]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:30 compute-0 sudo[446930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:16:30 compute-0 sudo[446930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
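[annotation] The sudo command above shows how the mgr's cephadm module reaches this host: a digest-named copy of the cephadm binary under /var/lib/ceph/<fsid>/ is run with the pinned image, here wrapping ceph-volume raw list. A sketch that reproduces the logged invocation, assuming the passwordless sudo rule for ceph-admin that the log demonstrates:

    import json
    import subprocess

    FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    WRAPPER = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    proc = subprocess.run(
        ["sudo", "/bin/python3", WRAPPER,
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True)
    raw_devices = json.loads(proc.stdout)  # the epic_beaver JSON further below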
Dec 03 02:16:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 169 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Dec 03 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.412461023 +0000 UTC m=+0.100147154 container create d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 03 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.384460171 +0000 UTC m=+0.072146352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:16:31 compute-0 systemd[1]: Started libpod-conmon-d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741.scope.
Dec 03 02:16:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.538340555 +0000 UTC m=+0.226026686 container init d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.558088903 +0000 UTC m=+0.245775044 container start d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 02:16:31 compute-0 charming_haslett[447009]: 167 167
Dec 03 02:16:31 compute-0 systemd[1]: libpod-d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741.scope: Deactivated successfully.
Dec 03 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.64881032 +0000 UTC m=+0.336496431 container attach d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.651225899 +0000 UTC m=+0.338912020 container died d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8cece939bae6a2a03d7fabce3f276c28b6b9e7ccc9772a179328ee9c08dcedc-merged.mount: Deactivated successfully.
Dec 03 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.815039133 +0000 UTC m=+0.502725234 container remove d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:16:31 compute-0 systemd[1]: libpod-conmon-d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741.scope: Deactivated successfully.
Dec 03 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.189210549 +0000 UTC m=+0.146563507 container create 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.103325359 +0000 UTC m=+0.060678317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:16:32 compute-0 systemd[1]: Started libpod-conmon-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope.
Dec 03 02:16:32 compute-0 ceph-mon[192821]: pgmap v1866: 321 pgs: 321 active+clean; 169 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Dec 03 02:16:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.352043816 +0000 UTC m=+0.309396774 container init 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.375142259 +0000 UTC m=+0.332495207 container start 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.38224403 +0000 UTC m=+0.339596958 container attach 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.871 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.873 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.891 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.975 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.976 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.989 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.990 351492 INFO nova.compute.claims [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Claim successful on node compute-0.ctlplane.example.com
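[annotation] The nova_compute lines above trace _locked_do_build_and_run_instance: a per-instance lock is taken first, then the coarser "compute_resources" lock is held just long enough for the resource-tracker claim. A minimal sketch of that nesting with oslo.concurrency, using the instance UUID from the log; the function body is a placeholder, not nova's code:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592"

    @lockutils.synchronized(INSTANCE_UUID)
    def build_and_run():
        # Claim resources under the tracker-wide lock, as the
        # ResourceTracker.instance_claim lines above show.
        with lockutils.lock("compute_resources"):
            pass  # placeholder for instance_claim() + inventory update

    build_and_run()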
Dec 03 02:16:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 181 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.161 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:33 compute-0 epic_beaver[447048]: {
Dec 03 02:16:33 compute-0 epic_beaver[447048]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "osd_id": 2,
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "type": "bluestore"
Dec 03 02:16:33 compute-0 epic_beaver[447048]:     },
Dec 03 02:16:33 compute-0 epic_beaver[447048]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "osd_id": 1,
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "type": "bluestore"
Dec 03 02:16:33 compute-0 epic_beaver[447048]:     },
Dec 03 02:16:33 compute-0 epic_beaver[447048]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "osd_id": 0,
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:16:33 compute-0 epic_beaver[447048]:         "type": "bluestore"
Dec 03 02:16:33 compute-0 epic_beaver[447048]:     }
Dec 03 02:16:33 compute-0 epic_beaver[447048]: }
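[annotation] epic_beaver's output is ceph-volume raw list --format json: osd_uuid → {device, osd_id, type}, confirming all three OSDs are bluestore on the ceph_vg*/ceph_lv* device-mapper paths. A sketch of inverting it into an osd_id-ordered view (raw_list.json is a hypothetical capture of the block above):

    import json

    with open("raw_list.json") as fh:  # hypothetical capture of the output above
        raw = json.load(fh)

    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}: {info['device']} "
              f"type={info['type']} uuid={osd_uuid}")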
Dec 03 02:16:33 compute-0 systemd[1]: libpod-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope: Deactivated successfully.
Dec 03 02:16:33 compute-0 systemd[1]: libpod-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope: Consumed 1.254s CPU time.
Dec 03 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:16:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788056308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.722 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:33 compute-0 podman[447101]: 2025-12-03 02:16:33.738070881 +0000 UTC m=+0.053852325 container died 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.752 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
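[annotation] nova shells out to ceph df through oslo_concurrency.processutils to size the RBD backend before claiming disk, which is what triggers the mon audit "df" dispatch lines above. A sketch of the same call, assuming the client.openstack keyring is readable by the caller; total_avail_bytes is a standard field of ceph df --format=json:

    import json
    from oslo_concurrency import processutils

    stdout, _stderr = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(stdout)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB available")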
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.768 351492 DEBUG nova.compute.provider_tree [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6-merged.mount: Deactivated successfully.
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.803 351492 DEBUG nova.scheduler.client.report [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
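[annotation] The inventory dict in the scheduler report line above is what placement uses to size this node; usable capacity per resource class is (total − reserved) × allocation_ratio. A quick check with the logged numbers:

    inventory = {  # values copied from the scheduler report line above
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2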
Dec 03 02:16:33 compute-0 podman[447101]: 2025-12-03 02:16:33.806922969 +0000 UTC m=+0.122704413 container remove 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 03 02:16:33 compute-0 systemd[1]: libpod-conmon-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope: Deactivated successfully.
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.839 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.842 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:16:33 compute-0 sudo[446930]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:16:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:16:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev df2015b6-4aa0-4c38-9dff-a2cb051640d8 does not exist
Dec 03 02:16:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 99152673-61f1-4ef0-8ac1-18e4a3c7a5f6 does not exist
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.912 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.912 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.952 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:16:33 compute-0 sudo[447118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:16:33 compute-0 sudo[447118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:33 compute-0 sudo[447118]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.990 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:16:34 compute-0 sudo[447143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:16:34 compute-0 sudo[447143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:16:34 compute-0 sudo[447143]: pam_unix(sudo:session): session closed for user root
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.221 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.223 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.223 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Creating image(s)
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.266 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.309 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:16:34 compute-0 ceph-mon[192821]: pgmap v1867: 321 pgs: 321 active+clean; 181 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 03 02:16:34 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2788056308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:16:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.364 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.373 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.419 351492 DEBUG nova.policy [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abdbefadac2a4d98bd33ed8a1a60ff75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.462 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.463 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.465 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.465 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.506 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.520 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.006 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:16:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 190 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.8 MiB/s wr, 66 op/s
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.159 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] resizing rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.366 351492 DEBUG nova.objects.instance [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'migration_context' on Instance uuid 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.388 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.389 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Ensure instance console log exists: /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.389 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.390 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.391 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.868 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Successfully created port: ae5db7e6-7a7a-4116-954a-be851ee02864 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:16:36 compute-0 ceph-mon[192821]: pgmap v1868: 321 pgs: 321 active+clean; 190 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.8 MiB/s wr, 66 op/s
Dec 03 02:16:36 compute-0 ovn_controller[89134]: 2025-12-03T02:16:36Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:dd:2f 10.100.0.9
Dec 03 02:16:36 compute-0 ovn_controller[89134]: 2025-12-03T02:16:36Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:dd:2f 10.100.0.9
Dec 03 02:16:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 242 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 5.6 MiB/s wr, 129 op/s
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.284 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Successfully updated port: ae5db7e6-7a7a-4116-954a-be851ee02864 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.321 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.322 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.322 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.489 351492 DEBUG nova.compute.manager [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.490 351492 DEBUG nova.compute.manager [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing instance network info cache due to event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.490 351492 DEBUG oslo_concurrency.lockutils [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.723 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:16:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:38 compute-0 ceph-mon[192821]: pgmap v1869: 321 pgs: 321 active+clean; 242 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 5.6 MiB/s wr, 129 op/s
Dec 03 02:16:38 compute-0 nova_compute[351485]: 2025-12-03 02:16:38.583 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:38 compute-0 nova_compute[351485]: 2025-12-03 02:16:38.725 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001768203221657876 of space, bias 1.0, pg target 0.5304609664973627 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:16:38 compute-0 podman[447336]: 2025-12-03 02:16:38.878119607 +0000 UTC m=+0.120023087 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:16:38 compute-0 podman[447334]: 2025-12-03 02:16:38.910916395 +0000 UTC m=+0.152157136 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec 03 02:16:38 compute-0 podman[447335]: 2025-12-03 02:16:38.915865555 +0000 UTC m=+0.157139027 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 02:16:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 242 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 5.6 MiB/s wr, 129 op/s
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.210 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.234 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.235 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance network_info: |[{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.235 351492 DEBUG oslo_concurrency.lockutils [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.235 351492 DEBUG nova.network.neutron [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.238 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start _get_guest_xml network_info=[{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.256 351492 WARNING nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.264 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.264 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.275 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.276 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.276 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.277 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.277 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.277 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.282 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:16:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921637875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.811 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.860 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.870 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:16:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661874518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.346 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.350 351492 DEBUG nova.virt.libvirt.vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:16:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141861820',display_name='tempest-TestNetworkBasicOps-server-2141861820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141861820',id=10,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDI3XAJe/oWUFcBwASHQKy1+64OXjmmyB8m7y5N7HAPNoYJg/K1iQtuEUIT2NyhA+m3otLmx2JBqvfSdTGVgxCze3o124/xouvwXfOAKv+FU1Zz518hn/q6Xt9p0SK00+w==',key_name='tempest-TestNetworkBasicOps-1925623369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-90hgdj1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:16:34Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.352 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.355 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.358 351492 DEBUG nova.objects.instance [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:16:40 compute-0 ceph-mon[192821]: pgmap v1870: 321 pgs: 321 active+clean; 242 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 5.6 MiB/s wr, 129 op/s
Dec 03 02:16:40 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/921637875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:16:40 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2661874518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.378 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <uuid>8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592</uuid>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <name>instance-0000000a</name>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <nova:name>tempest-TestNetworkBasicOps-server-2141861820</nova:name>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:16:39</nova:creationTime>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:user uuid="abdbefadac2a4d98bd33ed8a1a60ff75">tempest-TestNetworkBasicOps-1039072813-project-member</nova:user>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:project uuid="f8f8e5d142604e8c8aabf1e14a1467ca">tempest-TestNetworkBasicOps-1039072813</nova:project>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <nova:port uuid="ae5db7e6-7a7a-4116-954a-be851ee02864">
Dec 03 02:16:40 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <system>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <entry name="serial">8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592</entry>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <entry name="uuid">8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592</entry>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </system>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <os>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   </os>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <features>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   </features>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk">
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       </source>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config">
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       </source>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:16:40 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:ed:5c:3e"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <target dev="tapae5db7e6-7a"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/console.log" append="off"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <video>
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </video>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:16:40 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:16:40 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:16:40 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:16:40 compute-0 nova_compute[351485]: </domain>
Dec 03 02:16:40 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.379 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Preparing to wait for external event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.380 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.380 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.381 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.382 351492 DEBUG nova.virt.libvirt.vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:16:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141861820',display_name='tempest-TestNetworkBasicOps-server-2141861820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141861820',id=10,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDI3XAJe/oWUFcBwASHQKy1+64OXjmmyB8m7y5N7HAPNoYJg/K1iQtuEUIT2NyhA+m3otLmx2JBqvfSdTGVgxCze3o124/xouvwXfOAKv+FU1Zz518hn/q6Xt9p0SK00+w==',key_name='tempest-TestNetworkBasicOps-1925623369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-90hgdj1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:16:34Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.382 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.383 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.384 351492 DEBUG os_vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.385 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.385 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.386 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.392 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae5db7e6-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.392 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae5db7e6-7a, col_values=(('external_ids', {'iface-id': 'ae5db7e6-7a7a-4116-954a-be851ee02864', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:5c:3e', 'vm-uuid': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
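[annotation] The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) are queued as a single OVSDB transaction. A rough equivalent against ovsdbapp's public API; the socket path here is an assumption, not taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction: create the port on br-int, then stamp the
    # Interface row with the neutron port identity.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapae5db7e6-7a', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapae5db7e6-7a',
            ('external_ids', {
                'iface-id': 'ae5db7e6-7a7a-4116-954a-be851ee02864',
                'attached-mac': 'fa:16:3e:ed:5c:3e',
                'vm-uuid': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592'})))

ovn-controller watches the external_ids:iface-id key to match the OVS interface to its logical port, which is what the binding messages further down react to.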
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.397 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:40 compute-0 NetworkManager[48912]: <info>  [1764728200.3982] manager: (tapae5db7e6-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.401 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.408 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.410 351492 INFO os_vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a')
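[annotation] At the os-vif level the sequence from "Plugging vif" to "Successfully plugged vif" is two calls. A minimal sketch built from the field values in the logged VIFOpenVSwitch repr; a real caller also populates the full network and port_profile objects:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    my_vif = vif.VIFOpenVSwitch(
        id='ae5db7e6-7a7a-4116-954a-be851ee02864',
        address='fa:16:3e:ed:5c:3e',
        vif_name='tapae5db7e6-7a',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='ed008f09-da46-4507-9be2-7398a4728121'))
    info = instance_info.InstanceInfo(
        uuid='8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592',
        name='instance-0000000a')

    os_vif.plug(my_vif, info)  # drives the OVSDB transaction shown above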
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.495 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.495 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.496 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No VIF found with MAC fa:16:3e:ed:5c:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.497 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Using config drive
Dec 03 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.550 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:16:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 702 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Dec 03 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.785 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Creating config drive at /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config
Dec 03 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.796 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kfdv5sr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.833 351492 DEBUG nova.network.neutron [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updated VIF entry in instance network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.835 351492 DEBUG nova.network.neutron [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.936 351492 DEBUG oslo_concurrency.lockutils [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.947 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kfdv5sr" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
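[annotation] The config drive is nothing more exotic than an ISO9660 image with the volume label config-2, which cloud-init recognises. Reproducing the logged command through the same oslo helper nova used:

    from oslo_concurrency import processutils

    iso_path = ('/var/lib/nova/instances/'
                '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config')
    staging_dir = '/tmp/tmp2kfdv5sr'  # the rendered metadata tree

    # Returns (stdout, stderr); raises ProcessExecutionError on rc != 0.
    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', iso_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', staging_dir)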
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.009 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.022 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.039 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.046 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.323 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.325 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deleting local config drive /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config because it was imported into RBD.
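[annotation] Because the image backend is RBD, the ISO is imported into the vms pool and the local copy deleted. The same logged command, wrapped in subprocess for clarity:

    import subprocess

    # Copy the local ISO into Ceph as a format-2 RBD image,
    # authenticating as client.openstack.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config',
         '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)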
Dec 03 02:16:42 compute-0 ceph-mon[192821]: pgmap v1871: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 702 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Dec 03 02:16:42 compute-0 kernel: tapae5db7e6-7a: entered promiscuous mode
Dec 03 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.4378] manager: (tapae5db7e6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Dec 03 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00102|binding|INFO|Claiming lport ae5db7e6-7a7a-4116-954a-be851ee02864 for this chassis.
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.442 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00103|binding|INFO|ae5db7e6-7a7a-4116-954a-be851ee02864: Claiming fa:16:3e:ed:5c:3e 10.100.0.3
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.454 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.456 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 bound to our chassis
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.460 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed008f09-da46-4507-9be2-7398a4728121
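[annotation] The metadata agent learns about the new port from the Port_Binding row update matched a few lines up. The event class in the log follows ovsdbapp's RowEvent pattern; a condensed sketch of that pattern, simplified relative to neutron's actual class:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_uuid):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.chassis_uuid = chassis_uuid

        def match_fn(self, event, row, old):
            # Fire only on the transition from unbound to bound-to-us;
            # unchanged columns are absent from `old`, hence getattr.
            return (bool(row.chassis)
                    and not getattr(old, 'chassis', None)
                    and row.chassis[0].uuid == self.chassis_uuid)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)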
Dec 03 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00104|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 ovn-installed in OVS
Dec 03 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00105|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 up in Southbound
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.481 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b97f6b76-7e7c-4627-a32b-02a9432e0089]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.483 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped008f09-d1 in ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
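[annotation] Provisioning the datapath means giving the ovnmeta- namespace a veth leg whose peer stays in the root namespace and later becomes an OVS port on br-int. Roughly, with pyroute2 (the library underneath neutron's privsep'd ip_lib); this assumes the namespace already exists:

    from pyroute2 import IPRoute

    ipr = IPRoute()
    # Create the pair and push the -d1 end into the metadata namespace;
    # the -d0 end stays behind and is added to br-int below.
    ipr.link('add', ifname='taped008f09-d0', kind='veth',
             peer={'ifname': 'taped008f09-d1',
                   'net_ns_fd': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121'})
    ipr.close()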
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.483 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.486 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.485 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped008f09-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.485 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8eba9b-4d17-4c03-903c-f49ecebcdf2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.488 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[56be6659-0a82-4745-a904-99fca778c790]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 systemd-machined[138558]: New machine qemu-10-instance-0000000a.
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.503 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[b28415b9-38e9-4a4c-986d-e7dd35285ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.532 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a16e1d92-c362-4fed-b45d-20af6302c729]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 systemd-udevd[447536]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.5693] device (tapae5db7e6-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.570 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfdb57d-ef60-4206-a48e-c58e7132bb63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.5734] device (tapae5db7e6-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.577 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cb31e717-977a-46e1-ad1e-7deaac55c852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 systemd-udevd[447540]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.5796] manager: (taped008f09-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.615 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[b93fa313-ddd9-42c5-a0b0-55b1469e2f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.618 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ac591656-3ca7-4074-a075-aa8dd6724033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.6457] device (taped008f09-d0): carrier: link connected
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.652 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5e7e7d-0326-46c0-b562-dfc78ef68ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.675 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cbd505-eae3-4b3f-9144-2bdcfc7a8f21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447565, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.699 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0013dd0f-db32-40cb-baa1-6a6e85f82895]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:11a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704212, 'tstamp': 704212}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 447566, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.721 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[01c816aa-b6a0-45c9-b8c9-bac70d492e13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 447567, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.756 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c49f2058-a28b-488c-9f15-dd0c9e5c2c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.822 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4969a564-777c-48dc-b0fd-a48499c1eeb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.823 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.823 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.824 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped008f09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:42 compute-0 kernel: taped008f09-d0: entered promiscuous mode
Dec 03 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.8271] manager: (taped008f09-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.826 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.829 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.834 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped008f09-d0, col_values=(('external_ids', {'iface-id': '4fe53946-9a81-46d3-946d-3676da417bd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.836 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00106|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.866 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.869 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed008f09-da46-4507-9be2-7398a4728121.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed008f09-da46-4507-9be2-7398a4728121.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
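[annotation] The missing pid file is the expected first-run case: the agent checks whether an haproxy already serves this network before spawning one. Neutron's helper amounts to the following (paraphrased, not the verbatim implementation):

    def get_value_from_file(path, converter=None):
        # ENOENT is logged and swallowed; returning None means
        # "no proxy running yet", so a fresh one is spawned below.
        try:
            with open(path) as f:
                return converter(f.read()) if converter else f.read()
        except OSError as err:
            print('Unable to access %s; Error: %s' % (path, err))
            return None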
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.871 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d7e096-698f-4463-9411-6f0a86a57661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.873 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-ed008f09-da46-4507-9be2-7398a4728121
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/ed008f09-da46-4507-9be2-7398a4728121.pid.haproxy
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID ed008f09-da46-4507-9be2-7398a4728121
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.874 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'env', 'PROCESS_TAG=haproxy-ed008f09-da46-4507-9be2-7398a4728121', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed008f09-da46-4507-9be2-7398a4728121.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
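[annotation] Stripped of rootwrap and the PROCESS_TAG bookkeeping, the spawn is just haproxy launched inside the namespace against the generated config. A rough equivalent (requires root):

    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121',
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/'
         'ed008f09-da46-4507-9be2-7398a4728121.conf'],
        check=True)

Per the config above, the listener binds 169.254.169.254:80 inside the namespace only and forwards to the Unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the metadata service can identify the network.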
Dec 03 02:16:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 520 KiB/s rd, 4.5 MiB/s wr, 118 op/s
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.202 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728203.2015908, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.203 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Started (Lifecycle Event)
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.243 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.252 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728203.2220917, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.252 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Paused (Lifecycle Event)
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.277 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.286 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:16:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.311 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] During sync_power_state the instance has a pending task (spawning). Skip.
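[annotation] The "Skip" is deliberate: while a task is in flight, libvirt lifecycle events (Started, then Paused during setup) describe transient states, so nova declines to "correct" the database from them. The decision reduces to something like this illustrative condensation, not nova's code verbatim:

    def sync_power_state(instance, vm_power_state):
        # instance.task_state is 'spawning' here, so nothing happens;
        # only steady-state instances get their power_state reconciled.
        if instance.task_state is not None:
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()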
Dec 03 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.405786376 +0000 UTC m=+0.080667573 container create abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 03 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.362370278 +0000 UTC m=+0.037251495 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:16:43 compute-0 systemd[1]: Started libpod-conmon-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57.scope.
Dec 03 02:16:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6310634a2e9b69b7fec86a833550521f2d887dce434572f35b449a118a1fc6ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.536860115 +0000 UTC m=+0.211741282 container init abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.553051023 +0000 UTC m=+0.227932180 container start abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.585 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:43 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : New worker (447661) forked
Dec 03 02:16:43 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : Loading success.
Dec 03 02:16:44 compute-0 ceph-mon[192821]: pgmap v1872: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 520 KiB/s rd, 4.5 MiB/s wr, 118 op/s
Dec 03 02:16:44 compute-0 podman[447670]: 2025-12-03 02:16:44.877499995 +0000 UTC m=+0.153537155 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:16:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 403 KiB/s rd, 4.0 MiB/s wr, 103 op/s
Dec 03 02:16:45 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:45.050 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:16:45 compute-0 nova_compute[351485]: 2025-12-03 02:16:45.400 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:46 compute-0 ceph-mon[192821]: pgmap v1873: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 403 KiB/s rd, 4.0 MiB/s wr, 103 op/s
Dec 03 02:16:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Dec 03 02:16:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:16:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160411010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:16:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:16:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160411010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:16:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2160411010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:16:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2160411010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:16:48 compute-0 nova_compute[351485]: 2025-12-03 02:16:48.203 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:48 compute-0 ceph-mon[192821]: pgmap v1874: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Dec 03 02:16:48 compute-0 nova_compute[351485]: 2025-12-03 02:16:48.589 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 524 KiB/s wr, 35 op/s
Dec 03 02:16:49 compute-0 podman[447693]: 2025-12-03 02:16:49.862962097 +0000 UTC m=+0.115014325 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 02:16:49 compute-0 podman[447694]: 2025-12-03 02:16:49.869485312 +0000 UTC m=+0.105052683 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:16:49 compute-0 podman[447698]: 2025-12-03 02:16:49.876814719 +0000 UTC m=+0.108606903 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, config_id=edpm, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
Dec 03 02:16:49 compute-0 podman[447703]: 2025-12-03 02:16:49.884493887 +0000 UTC m=+0.118031281 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:16:49 compute-0 podman[447692]: 2025-12-03 02:16:49.889788726 +0000 UTC m=+0.147143604 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.403 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:50 compute-0 ceph-mon[192821]: pgmap v1875: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 524 KiB/s wr, 35 op/s
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.806 351492 DEBUG nova.compute.manager [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.807 351492 DEBUG oslo_concurrency.lockutils [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.809 351492 DEBUG oslo_concurrency.lockutils [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.810 351492 DEBUG oslo_concurrency.lockutils [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.811 351492 DEBUG nova.compute.manager [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Processing event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.813 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.820 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728210.8195055, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.822 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Resumed (Lifecycle Event)
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.826 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.833 351492 INFO nova.virt.libvirt.driver [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance spawned successfully.
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.833 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.853 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.869 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.874 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.875 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.876 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.877 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.878 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.879 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.891 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.977 351492 INFO nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 16.76 seconds to spawn the instance on the hypervisor.
Dec 03 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.978 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:16:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 529 KiB/s wr, 36 op/s
Dec 03 02:16:51 compute-0 nova_compute[351485]: 2025-12-03 02:16:51.115 351492 INFO nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 18.17 seconds to build instance.
Dec 03 02:16:51 compute-0 nova_compute[351485]: 2025-12-03 02:16:51.148 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:52 compute-0 ceph-mon[192821]: pgmap v1876: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 529 KiB/s wr, 36 op/s
Dec 03 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.924 351492 DEBUG nova.compute.manager [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.925 351492 DEBUG oslo_concurrency.lockutils [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.926 351492 DEBUG oslo_concurrency.lockutils [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.927 351492 DEBUG oslo_concurrency.lockutils [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.928 351492 DEBUG nova.compute.manager [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.929 351492 WARNING nova.compute.manager [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state active and task_state None.
Dec 03 02:16:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 29 KiB/s wr, 11 op/s
Dec 03 02:16:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:53 compute-0 nova_compute[351485]: 2025-12-03 02:16:53.532 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:53 compute-0 nova_compute[351485]: 2025-12-03 02:16:53.591 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:54 compute-0 ceph-mon[192821]: pgmap v1877: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 29 KiB/s wr, 11 op/s
Dec 03 02:16:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 406 KiB/s rd, 28 KiB/s wr, 23 op/s
Dec 03 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.406 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.671 351492 DEBUG nova.compute.manager [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.672 351492 DEBUG nova.compute.manager [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing instance network info cache due to event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.673 351492 DEBUG oslo_concurrency.lockutils [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.673 351492 DEBUG oslo_concurrency.lockutils [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.674 351492 DEBUG nova.network.neutron [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:16:56 compute-0 ceph-mon[192821]: pgmap v1878: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 406 KiB/s rd, 28 KiB/s wr, 23 op/s
Dec 03 02:16:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 29 KiB/s wr, 67 op/s
Dec 03 02:16:57 compute-0 nova_compute[351485]: 2025-12-03 02:16:57.828 351492 DEBUG nova.network.neutron [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updated VIF entry in instance network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:16:57 compute-0 nova_compute[351485]: 2025-12-03 02:16:57.829 351492 DEBUG nova.network.neutron [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:16:57 compute-0 nova_compute[351485]: 2025-12-03 02:16:57.852 351492 DEBUG oslo_concurrency.lockutils [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:16:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:16:58 compute-0 ceph-mon[192821]: pgmap v1879: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 29 KiB/s wr, 67 op/s
Dec 03 02:16:58 compute-0 nova_compute[351485]: 2025-12-03 02:16:58.593 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:16:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.8 KiB/s wr, 60 op/s
Dec 03 02:16:59 compute-0 ceph-mon[192821]: pgmap v1880: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.8 KiB/s wr, 60 op/s
Dec 03 02:16:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:59.648 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:16:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:59.649 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:16:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:59.650 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:16:59 compute-0 podman[158098]: time="2025-12-03T02:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46278 "" "Go-http-client/1.1"
Dec 03 02:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9587 "" "Go-http-client/1.1"
Dec 03 02:17:00 compute-0 nova_compute[351485]: 2025-12-03 02:17:00.411 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.8 KiB/s wr, 64 op/s
Dec 03 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:17:01 compute-0 nova_compute[351485]: 2025-12-03 02:17:01.873 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:02 compute-0 ceph-mon[192821]: pgmap v1881: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.8 KiB/s wr, 64 op/s
Dec 03 02:17:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 64 op/s
Dec 03 02:17:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:03 compute-0 nova_compute[351485]: 2025-12-03 02:17:03.596 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:04 compute-0 ceph-mon[192821]: pgmap v1882: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 64 op/s
Dec 03 02:17:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec 03 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.416 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.746 351492 DEBUG nova.objects.instance [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'flavor' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.803 351492 DEBUG oslo_concurrency.lockutils [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.804 351492 DEBUG oslo_concurrency.lockutils [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:06 compute-0 ceph-mon[192821]: pgmap v1883: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec 03 02:17:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 KiB/s wr, 51 op/s
Dec 03 02:17:07 compute-0 nova_compute[351485]: 2025-12-03 02:17:07.869 351492 DEBUG nova.network.neutron [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.037 351492 DEBUG nova.compute.manager [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.038 351492 DEBUG nova.compute.manager [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.038 351492 DEBUG oslo_concurrency.lockutils [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:08 compute-0 ceph-mon[192821]: pgmap v1884: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 KiB/s wr, 51 op/s
Dec 03 02:17:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.527 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.601 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 2.4 KiB/s wr, 5 op/s
Dec 03 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.813 351492 DEBUG nova.network.neutron [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:09 compute-0 podman[447796]: 2025-12-03 02:17:09.837748722 +0000 UTC m=+0.088017691 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.848 351492 DEBUG oslo_concurrency.lockutils [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.848 351492 DEBUG nova.compute.manager [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 03 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.849 351492 DEBUG nova.compute.manager [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] network_info to inject: |[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 03 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.851 351492 DEBUG oslo_concurrency.lockutils [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.851 351492 DEBUG nova.network.neutron [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:17:09 compute-0 podman[447794]: 2025-12-03 02:17:09.857426648 +0000 UTC m=+0.106209846 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 03 02:17:09 compute-0 podman[447795]: 2025-12-03 02:17:09.867785062 +0000 UTC m=+0.112681950 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 03 02:17:10 compute-0 ceph-mon[192821]: pgmap v1885: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 2.4 KiB/s wr, 5 op/s
Dec 03 02:17:10 compute-0 nova_compute[351485]: 2025-12-03 02:17:10.422 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4.4 KiB/s wr, 5 op/s
Dec 03 02:17:11 compute-0 nova_compute[351485]: 2025-12-03 02:17:11.443 351492 DEBUG nova.objects.instance [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'flavor' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:11 compute-0 nova_compute[351485]: 2025-12-03 02:17:11.483 351492 DEBUG oslo_concurrency.lockutils [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:12 compute-0 sshd-session[447851]: Invalid user adam from 154.113.10.113 port 34228
Dec 03 02:17:12 compute-0 ceph-mon[192821]: pgmap v1886: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4.4 KiB/s wr, 5 op/s
Dec 03 02:17:12 compute-0 sshd-session[447851]: Received disconnect from 154.113.10.113 port 34228:11: Bye Bye [preauth]
Dec 03 02:17:12 compute-0 sshd-session[447851]: Disconnected from invalid user adam 154.113.10.113 port 34228 [preauth]
Dec 03 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.498 351492 DEBUG nova.network.neutron [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.499 351492 DEBUG nova.network.neutron [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.527 351492 DEBUG oslo_concurrency.lockutils [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.528 351492 DEBUG oslo_concurrency.lockutils [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.581 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.4 KiB/s wr, 1 op/s
Dec 03 02:17:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.603 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.742 351492 DEBUG nova.network.neutron [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.946 351492 DEBUG nova.compute.manager [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.948 351492 DEBUG nova.compute.manager [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.949 351492 DEBUG oslo_concurrency.lockutils [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:14 compute-0 ceph-mon[192821]: pgmap v1887: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.4 KiB/s wr, 1 op/s
Dec 03 02:17:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.4 KiB/s wr, 1 op/s
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.366 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.367 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.368 351492 INFO nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Rebooting instance
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.392 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.393 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.394 351492 DEBUG nova.network.neutron [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.432 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.744 351492 DEBUG nova.network.neutron [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.774 351492 DEBUG oslo_concurrency.lockutils [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.774 351492 DEBUG nova.compute.manager [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.775 351492 DEBUG nova.compute.manager [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] network_info to inject: |[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.782 351492 DEBUG oslo_concurrency.lockutils [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.783 351492 DEBUG nova.network.neutron [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:17:15 compute-0 podman[447853]: 2025-12-03 02:17:15.861280824 +0000 UTC m=+0.120662725 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
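The podman health_status record above is produced by the container's periodic healthcheck (the configured test is "/openstack/healthcheck ipmi"). A sketch of reading the same verdict back, assuming podman is installed and the container from the log exists; the State.Health field layout follows podman's inspect output:

    import json
    import subprocess

    out = subprocess.check_output(["podman", "inspect", "ceilometer_agent_ipmi"])
    state = json.loads(out)[0]["State"]
    print(state["Health"]["Status"])  # "healthy" in the record above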
Dec 03 02:17:16 compute-0 ceph-mon[192821]: pgmap v1888: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.4 KiB/s wr, 1 op/s
Dec 03 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00107|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00108|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00109|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
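The three "Releasing lport" messages show ovn-controller clearing its chassis claim on ports whose interfaces left the local br-int; the authoritative record is the Port_Binding table in the OVN southbound database. A sketch of inspecting one of those rows, assuming ovn-sbctl is on PATH and can reach the SB DB; the lport UUID is copied from the log:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ovn-sbctl", "--format=json", "find", "Port_Binding",
        "logical_port=4fe53946-9a81-46d3-946d-3676da417bd6",
    )
    # The chassis column of the returned row empties once the release
    # logged above is committed.
    print(out)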
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.390 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.532 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.533 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.534 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.535 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.536 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.539 351492 INFO nova.compute.manager [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Terminating instance
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.542 351492 DEBUG nova.compute.manager [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
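update_available_resource is driven by oslo.service's periodic-task machinery, which is what emits the "Running periodic task" line above. A minimal sketch of how such a task is declared, assuming oslo.service; the class name and spacing are illustrative, not Nova's actual definitions:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Registered by the decorator; the runner logs "Running periodic
        # task ..." before each invocation.
        @periodic_task.periodic_task(spacing=60)
        def update_available_resource(self, context):
            pass  # placeholder for the resource audit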
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.613 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.613 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
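Because the instance disks live in Ceph RBD, the resource audit shells out to ceph df rather than statting a local filesystem. A sketch of the same call and of extracting the cluster-free figure, assuming the ceph CLI and the "openstack" keyring from the command above; the stats field names follow ceph's JSON report:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)["stats"]
    print("free: %.1f GiB" % (stats["total_avail_bytes"] / (1 << 30)))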
Dec 03 02:17:16 compute-0 kernel: tapb7fa8023-e5 (unregistering): left promiscuous mode
Dec 03 02:17:16 compute-0 NetworkManager[48912]: <info>  [1764728236.6888] device (tapb7fa8023-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.709 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00110|binding|INFO|Releasing lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a from this chassis (sb_readonly=0)
Dec 03 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00111|binding|INFO|Setting lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a down in Southbound
Dec 03 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00112|binding|INFO|Removing iface tapb7fa8023-e5 ovn-installed in OVS
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.730 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:b3:fa 10.100.0.3'], port_security=['fa:16:3e:12:b3:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4f50e501-f565-4e1f-aa02-df921702eff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9efdda7cf984595a9c5a855bae62b0e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '532f80d5-065d-43cb-9604-ad1c2a6e3902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=319776e3-1c91-4ec0-bfb2-2325dfaa1fa2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=b7fa8023-e50c-4bea-be79-8fbe005f0b8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.731 288528 INFO neutron.agent.ovn.metadata.agent [-] Port b7fa8023-e50c-4bea-be79-8fbe005f0b8a in datapath a5e23dc0-bcc2-406c-bc7f-b978295be94b unbound from our chassis
Dec 03 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.733 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5e23dc0-bcc2-406c-bc7f-b978295be94b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.734 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9905fe-46dd-4e0f-951f-eb2837e32eab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.734 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b namespace which is not needed anymore
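The "Matched UPDATE: PortBindingUpdatedEvent" line above shows ovsdbapp's event dispatch: the agent registers row-event classes against southbound tables and is called back when a matching row changes, which is how it notices the port going down and decides to tear the namespace down. A minimal sketch of that pattern, assuming ovsdbapp; the class body is illustrative, not neutron's implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # ('update',) and the table name mirror the matched event
            # in the log above.
            super().__init__(("update",), "Port_Binding", None)

        def run(self, event, row, old):
            print("lport %s up=%s" % (row.logical_port, row.up))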
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.740 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:16 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 03 02:17:16 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 46.477s CPU time.
Dec 03 02:17:16 compute-0 systemd-machined[138558]: Machine qemu-6-instance-00000006 terminated.
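Each qemu guest runs in a transient systemd scope registered with systemd-machined, so destroying the domain deactivates machine-qemu\x2d6\x2dinstance\x2d00000006.scope and machined logs the termination. A sketch of listing what machined still tracks, assuming machinectl is available on the host:

    from oslo_concurrency import processutils

    # "qemu-6-instance-00000006" disappears from this listing once the
    # scope above deactivates.
    out, _err = processutils.execute("machinectl", "list", "--no-legend")
    print(out)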
Dec 03 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : haproxy version is 2.8.14-c23fe91
Dec 03 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : path to executable is /usr/sbin/haproxy
Dec 03 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [ALERT]    (444481) : Current worker (444494) exited with code 143 (Terminated)
Dec 03 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [WARNING]  (444481) : All workers exited. Exiting... (0)
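Exit code 143 here is the normal result of the metadata agent stopping the container, not a crash: a process killed by signal N reports status 128 + N, and SIGTERM is 15. A one-line check of that arithmetic:

    import signal

    print(128 + signal.SIGTERM)  # -> 143, i.e. terminated by SIGTERM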
Dec 03 02:17:16 compute-0 systemd[1]: libpod-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363.scope: Deactivated successfully.
Dec 03 02:17:16 compute-0 podman[447913]: 2025-12-03 02:17:16.973747679 +0000 UTC m=+0.076715321 container died 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.983 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.000 351492 INFO nova.virt.libvirt.driver [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance destroyed successfully.
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.001 351492 DEBUG nova.objects.instance [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'resources' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d0f5e97a1c9cf6a7b1ce8133ccb65b7a2748d41d5e4c00f49714ed27a9e8b68-merged.mount: Deactivated successfully.
Dec 03 02:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363-userdata-shm.mount: Deactivated successfully.
Dec 03 02:17:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 3 op/s
Dec 03 02:17:17 compute-0 podman[447913]: 2025-12-03 02:17:17.051098638 +0000 UTC m=+0.154066250 container cleanup 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.056 351492 DEBUG nova.virt.libvirt.vif [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1950125250',display_name='tempest-AttachInterfacesUnderV243Test-server-1950125250',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1950125250',id=6,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBB9OuHdIBdpYaktjGsefgccfH8R9SNK99mHHbJQ9rg+G2U1LTvmjO9Wsnt6ghp9uwnzyNl9odxW0s4EjHMYofeke7VnvOokwl4rSnaOh/gTQhB30j9Q5ponmvnWGOY9dA==',key_name='tempest-keypair-48380121',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9efdda7cf984595a9c5a855bae62b0e',ramdisk_id='',reservation_id='r-dnx5z6kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1651825730',owner_user_name='tempest-AttachInterfacesUnderV243Test-1651825730-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c7d81f1f9e4989b1eb8b8cf96bbf11',uuid=4f50e501-f565-4e1f-aa02-df921702eff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.057 351492 DEBUG nova.network.os_vif_util [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converting VIF {"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.058 351492 DEBUG nova.network.os_vif_util [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.058 351492 DEBUG os_vif [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.064 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.064 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7fa8023-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.069 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.070 351492 INFO os_vif [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5')
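The unplug above ran a single OVSDB transaction, DelPortCommand with if_exists=True against br-int. A sketch of the equivalent manual operation, assuming ovs-vsctl can reach the local switch; --if-exists mirrors the transaction's if_exists flag:

    from oslo_concurrency import processutils

    processutils.execute(
        "ovs-vsctl", "--if-exists", "del-port", "br-int", "tapb7fa8023-e5",
    )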
Dec 03 02:17:17 compute-0 systemd[1]: libpod-conmon-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363.scope: Deactivated successfully.
Dec 03 02:17:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:17:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341854959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.179 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:17 compute-0 podman[447949]: 2025-12-03 02:17:17.18158809 +0000 UTC m=+0.084284716 container remove 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.192 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3cca0d28-288c-4823-87d7-57897f6b91f6]: (4, ('Wed Dec  3 02:17:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b (1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363)\n1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363\nWed Dec  3 02:17:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b (1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363)\n1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.195 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[470c14fe-11f1-4eb5-b3c2-3b9c79758fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.196 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5e23dc0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:17 compute-0 kernel: tapa5e23dc0-b0: left promiscuous mode
Dec 03 02:17:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1341854959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.222 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[32dce629-196f-43ac-89ec-d507fe95db57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.226 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.236 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[028bb779-cb36-41a2-9e5f-c787e26a851d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.238 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5034d6fb-2c7d-4a93-aca0-033c6ed8c3ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.262 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[df75a8ae-797f-4051-84ab-af23b56fcc96]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698614, 'reachable_time': 15469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447985, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.268 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 03 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.268 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[c813d9b1-76ba-4ee1-a098-4bec661c05d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:17 compute-0 systemd[1]: run-netns-ovnmeta\x2da5e23dc0\x2dbcc2\x2d406c\x2dbc7f\x2db978295be94b.mount: Deactivated successfully.
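remove_netns above runs inside the privsep daemon because deleting a network namespace needs elevated privileges; under the hood neutron's ip_lib drives pyroute2. A sketch of the same deletion with pyroute2 directly, assuming root privileges; the namespace name is copied from the log:

    from pyroute2 import netns

    name = "ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b"
    if name in netns.listnetns():
        netns.remove(name)  # systemd then reports the netns mount gone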
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.305 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.305 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.326 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.326 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.335 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.335 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.883 351492 INFO nova.virt.libvirt.driver [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deleting instance files /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9_del
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.884 351492 INFO nova.virt.libvirt.driver [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deletion of /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9_del complete
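The two lines above reflect a rename-then-delete pattern: the instance directory is first moved aside to <uuid>_del, so an interrupted cleanup leaves an obviously stale directory rather than a half-deleted live one. A sketch of the pattern itself (not Nova's exact code), with the paths taken from the log:

    import os
    import shutil

    src = "/var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9"
    dst = src + "_del"
    os.rename(src, dst)   # atomic on the same filesystem
    shutil.rmtree(dst)    # safe to re-run on leftover *_del directories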
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.900 351492 DEBUG nova.network.neutron [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.908 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.909 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3668MB free_disk=59.876190185546875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.909 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.909 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.946 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.948 351492 DEBUG nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.990 351492 INFO nova.compute.manager [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 1.45 seconds to destroy the instance on the hypervisor.
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.990 351492 DEBUG oslo.service.loopingcall [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.990 351492 DEBUG nova.compute.manager [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.991 351492 DEBUG nova.network.neutron [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:17:18 compute-0 kernel: tapee5c2dfc-04 (unregistering): left promiscuous mode
Dec 03 02:17:18 compute-0 NetworkManager[48912]: <info>  [1764728238.1712] device (tapee5c2dfc-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.172 351492 DEBUG nova.network.neutron [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.173 351492 DEBUG nova.network.neutron [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.177 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4f50e501-f565-4e1f-aa02-df921702eff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.181 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance a48b4084-369d-432a-9f47-9378cdcc011f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.181 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.182 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.182 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
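The final view is consistent with the three placement allocations logged just above plus a reserved-memory overhead. Assuming Nova's default reserved_host_memory_mb of 512 (an assumption; the value is not shown in this log), the figures decompose as:

    reserved_host_memory_mb = 512   # assumed default, not confirmed by the log
    guests = 3                      # the three allocations logged above

    used_ram_mb = reserved_host_memory_mb + guests * 128  # -> 896, as logged
    used_vcpus = guests * 1                               # -> 3
    used_disk_gb = guests * 1                             # -> 3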
Dec 03 02:17:18 compute-0 ovn_controller[89134]: 2025-12-03T02:17:18Z|00113|binding|INFO|Releasing lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 from this chassis (sb_readonly=0)
Dec 03 02:17:18 compute-0 ovn_controller[89134]: 2025-12-03T02:17:18Z|00114|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 down in Southbound
Dec 03 02:17:18 compute-0 ovn_controller[89134]: 2025-12-03T02:17:18Z|00115|binding|INFO|Removing iface tapee5c2dfc-04 ovn-installed in OVS
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.190 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.198 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.199 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a unbound from our chassis
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.202 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fdf214a-0f6e-4e5d-b449-e1988827937a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.203 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[159cedb3-a45b-4205-aca9-f3a07247ecc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.203 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace which is not needed anymore
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.207 351492 DEBUG oslo_concurrency.lockutils [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 03 02:17:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 44.035s CPU time.
Dec 03 02:17:18 compute-0 systemd-machined[138558]: Machine qemu-8-instance-00000008 terminated.
Dec 03 02:17:18 compute-0 ceph-mon[192821]: pgmap v1889: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 3 op/s
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.270 351492 DEBUG nova.compute.manager [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-unplugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.271 351492 DEBUG oslo_concurrency.lockutils [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.272 351492 DEBUG oslo_concurrency.lockutils [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.272 351492 DEBUG oslo_concurrency.lockutils [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.273 351492 DEBUG nova.compute.manager [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] No waiting events found dispatching network-vif-unplugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.273 351492 DEBUG nova.compute.manager [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-unplugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
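The lock/pop sequence above (acquire the "<uuid>-events" lock, pop, release, then "No waiting events found") is the external-event rendezvous: a waiter registers interest in an event name before triggering an operation, and the handler pops and signals it when Neutron reports back. A simplified, self-contained rendition of that pattern, not Nova's code:

```python
# Illustrative per-instance event rendezvous guarded by a lock, matching the
# "Acquiring lock ...-events" / "No waiting events found" lines above.
import threading
from collections import defaultdict

_lock = threading.Lock()
_waiters = defaultdict(dict)       # instance uuid -> {event name: Event}

def prepare_for_event(instance_uuid, event_name):
    """Called before starting the operation that will emit the event."""
    with _lock:
        ev = threading.Event()
        _waiters[instance_uuid][event_name] = ev
        return ev

def pop_instance_event(instance_uuid, event_name):
    """Called when the external event arrives from the network service."""
    with _lock:                    # "Lock ...-events acquired"
        ev = _waiters[instance_uuid].pop(event_name, None)
    if ev is None:
        print('No waiting events found dispatching %s' % event_name)
    else:
        ev.set()                   # wake whoever was waiting
```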
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.294 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.308 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.321 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance destroyed successfully.
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.322 351492 DEBUG nova.objects.instance [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'resources' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.347 351492 DEBUG nova.virt.libvirt.vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.348 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.349 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.349 351492 DEBUG os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.352 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee5c2dfc-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
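The DelPortCommand transaction above can be reproduced with ovsdbapp's Open vSwitch API; the socket path and timeout below are assumptions for illustration.

```python
# Hedged sketch of issuing the DelPortCommand shown above through ovsdbapp's
# public OVS API; connection details are illustrative, not from the log.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVS_DB = 'unix:/run/openvswitch/db.sock'   # assumed local ovsdb socket
idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Queues DelPortCommand(port=..., bridge=..., if_exists=True) and commits it,
# matching "Running txn n=1 command(idx=0)" in the record above.
api.del_port('tapee5c2dfc-04', bridge='br-int', if_exists=True).execute(
    check_error=True)
```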
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.358 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.361 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.368 351492 INFO os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')
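The plug/unplug lifecycle around these lines goes through the os-vif library's small public surface: build the typed VIF and instance objects, then call unplug(). A runnable sketch using values taken from the log (the surrounding wiring is assumed):

```python
# Minimal os-vif unplug, mirroring "Unplugging vif VIFOpenVSwitch(...)" above.
# Field values come from the log; everything else is illustrative.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()   # loads the 'ovs' plugin via entry points

my_net = network.Network(id='2fdf214a-0f6e-4e5d-b449-e1988827937a',
                         bridge='br-int')
my_vif = vif.VIFOpenVSwitch(id='ee5c2dfc-04c3-400a-8073-6f2c65dcea03',
                            address='fa:16:3e:ff:dd:2f',
                            vif_name='tapee5c2dfc-04',
                            bridge_name='br-int',
                            network=my_net)
inst = instance_info.InstanceInfo(
    uuid='a48b4084-369d-432a-9f47-9378cdcc011f',
    name='tempest-ServerActionsTestJSON-server-925455337')

os_vif.unplug(my_vif, inst)   # delegates to the 'ovs' plugin's unplug()
```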
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.378 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start _get_guest_xml network_info=[{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.388 351492 WARNING nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.396 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.397 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.407 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.408 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.409 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.410 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.411 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.412 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.413 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.413 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.414 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.415 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.415 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.416 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.417 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.417 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
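The topology walk above reduces to simple arithmetic: with no flavor or image preference (0:0:0) and the default ceilings (65536 each), every sockets*cores*threads factorization of the vCPU count is a candidate, and a 1-vCPU guest admits only (1, 1, 1). A compact enumeration that reproduces the logged result (simplified, not Nova's implementation):

```python
# Enumerate candidate CPU topologies for a guest, as in the log above:
# any (sockets, cores, threads) whose product equals the vCPU count and
# whose factors stay under the per-dimension limits.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

# [(1, 1, 1)] -> "Got 1 possible topologies" for the 1-vCPU m1.nano guest
print(list(possible_topologies(1)))
```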
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.418 351492 DEBUG nova.objects.instance [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.441 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
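That subprocess is a plain CLI invocation through oslo.concurrency; the same call can be reproduced directly, parsing the JSON monmap it returns (the printed field is an assumption about what a caller might want):

```python
# Re-run the exact command from the log via oslo.concurrency and parse
# the JSON monmap it prints on stdout.
import json
from oslo_concurrency import processutils

out, err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mon_map = json.loads(out)
print([m['name'] for m in mon_map.get('mons', [])])
```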
Dec 03 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : haproxy version is 2.8.14-c23fe91
Dec 03 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : path to executable is /usr/sbin/haproxy
Dec 03 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [ALERT]    (445216) : Current worker (445218) exited with code 143 (Terminated)
Dec 03 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [WARNING]  (445216) : All workers exited. Exiting... (0)
Dec 03 02:17:18 compute-0 systemd[1]: libpod-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5.scope: Deactivated successfully.
Dec 03 02:17:18 compute-0 podman[448015]: 2025-12-03 02:17:18.459879216 +0000 UTC m=+0.080399265 container died a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.497 351492 DEBUG nova.compute.manager [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.502 351492 DEBUG oslo_concurrency.lockutils [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.502 351492 DEBUG oslo_concurrency.lockutils [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.502 351492 DEBUG oslo_concurrency.lockutils [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-087efa0144787524a70b8446fc5a09fbd51303045924a94f4a2b128c2b8cbdbc-merged.mount: Deactivated successfully.
Dec 03 02:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5-userdata-shm.mount: Deactivated successfully.
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.503 351492 DEBUG nova.compute.manager [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.503 351492 WARNING nova.compute.manager [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state reboot_started_hard.
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.506 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:18 compute-0 podman[448015]: 2025-12-03 02:17:18.529308341 +0000 UTC m=+0.149828370 container cleanup a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:17:18 compute-0 systemd[1]: libpod-conmon-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5.scope: Deactivated successfully.
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.603 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 podman[448046]: 2025-12-03 02:17:18.635124504 +0000 UTC m=+0.067747087 container remove a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.649 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b2830a-2a69-45fe-8782-5853219c3ae6]: (4, ('Wed Dec  3 02:17:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5)\na7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5\nWed Dec  3 02:17:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5)\na7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
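The privsep reply above captures the wrapper's own output: stop the per-network haproxy container, then delete it. Functionally that is equivalent to the following (the use of subprocess here is an assumption; the container name comes from the log):

```python
# Stop and remove the per-network metadata haproxy container, matching the
# "Stopping container ... Deleting container ..." output captured above.
import subprocess

name = 'neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a'
# podman stop sends SIGTERM; haproxy's worker exits 143, as logged earlier.
subprocess.run(['podman', 'stop', name], check=True)
subprocess.run(['podman', 'rm', name], check=True)
```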
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.651 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f70a84a3-68a3-473c-ad95-91bf45d5bb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.653 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.658 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 kernel: tap2fdf214a-00: left promiscuous mode
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.680 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[86f3b1e9-8c1f-4de9-a34c-5e68b52233c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.681 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.698 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[098c5367-900c-4424-92f8-01276ce39be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.701 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac126ba-6778-4ef7-ad72-753278ed7506]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.718 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[644b900c-f060-4f4e-bbe5-6234938607da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699295, 'reachable_time': 28728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448097, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fdf214a\x2d0f6e\x2d4e5d\x2db449\x2de1988827937a.mount: Deactivated successfully.
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.724 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
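The remove_netns() reported above is a privileged helper; an equivalent minimal version with pyroute2 (an assumed backend, chosen for illustration) looks like this, tolerating an already-deleted namespace so cleanup stays idempotent:

```python
# Minimal sketch of deleting a network namespace, as in the
# "Namespace ovnmeta-... deleted" record above.
import errno
from pyroute2 import netns

def remove_netns(name):
    try:
        netns.remove(name)
    except OSError as e:
        if e.errno != errno.ENOENT:   # already gone is fine
            raise

remove_netns('ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a')
```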
Dec 03 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.724 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[85713137-db5a-4f94-ba94-6cc9897baca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:17:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913698756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.977 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:17:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174846856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.039 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 3 op/s
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.051 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.080 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.100 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
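Per the usual Placement capacity formula, capacity = (total - reserved) * allocation_ratio, so the inventory above lets the scheduler place up to 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk on this node. A quick check:

```python
# Compute schedulable capacity from the inventory data logged above:
# capacity = (total - reserved) * allocation_ratio per resource class.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```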
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.139 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/913698756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4174846856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.511 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.512 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
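The registration burst above feeds a single ThreadPoolExecutor with one worker, which is why the first message warns that polling will take longer than expected. The executor pattern itself is just this (pollster names below are placeholders):

```python
# Illustrative version of the ceilometer executor pattern above: more
# pollsters than workers, so a 1-thread pool drains them sequentially.
from concurrent.futures import ThreadPoolExecutor

def run_pollster(name):
    # Stand-in for a pollster's get_samples() call.
    return '%s polled' % name

pollsters = ['memory.usage', 'cpu', 'disk.device.read.bytes']
with ThreadPoolExecutor(max_workers=1) as executor:   # "[1] threads"
    for result in executor.map(run_pollster, pollsters):
        print(result)
```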
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.520 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.522 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
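That REQ line is python-novaclient issuing a plain GET with microversion 2.1 through a keystoneauth1 session. Recreating it takes only the session and the client; all credential values below are placeholders:

```python
# Sketch of the GET logged above ("X-OpenStack-Nova-API-Version: 2.1"),
# using python-novaclient on a keystoneauth1 session. Auth URL, user,
# password, and project names are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client

auth = v3.Password(auth_url='https://keystone.example.com/v3',
                   username='ceilometer', password='secret',
                   project_name='service',
                   user_domain_name='Default',
                   project_domain_name='Default')
nova = client.Client('2.1', session=session.Session(auth=auth))

server = nova.servers.get('8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592')
print(server.name, server.status)
```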
Dec 03 02:17:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:17:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829905297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.560 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.561 351492 DEBUG nova.virt.libvirt.vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.561 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.562 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.563 351492 DEBUG nova.objects.instance [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.584 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <uuid>a48b4084-369d-432a-9f47-9378cdcc011f</uuid>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <name>instance-00000008</name>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <nova:name>tempest-ServerActionsTestJSON-server-925455337</nova:name>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:17:18</nova:creationTime>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:user uuid="292dd1da4e67424b855327b32f0623b7">tempest-ServerActionsTestJSON-225723275-project-member</nova:user>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:project uuid="b95bb4c57d3543acb25997bedee9dec3">tempest-ServerActionsTestJSON-225723275</nova:project>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <nova:port uuid="ee5c2dfc-04c3-400a-8073-6f2c65dcea03">
Dec 03 02:17:19 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <system>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <entry name="serial">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <entry name="uuid">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </system>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <os>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   </os>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <features>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   </features>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk">
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       </source>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk.config">
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       </source>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:17:19 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:ff:dd:2f"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <target dev="tapee5c2dfc-04"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/console.log" append="off"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <video>
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </video>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <input type="keyboard" bus="usb"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:17:19 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:17:19 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:17:19 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:17:19 compute-0 nova_compute[351485]: </domain>
Dec 03 02:17:19 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
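
_get_guest_xml has just rendered the complete libvirt domain definition above: RBD-backed root and config-drive disks, an ethernet VIF destined for br-int, and Nova's own metadata block. A short sketch of pulling those pieces back out of such a dump with the standard library, assuming the XML has been saved to guest.xml (a hypothetical file name):

    import xml.etree.ElementTree as ET

    NOVA_NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}
    root = ET.parse('guest.xml').getroot()  # the <domain> element above

    # RBD-backed disks: pool/image on <source>, monitor address on <host>.
    for src in root.findall('./devices/disk/source[@protocol="rbd"]'):
        host = src.find('host')
        print(src.get('name'), '->', host.get('name'), host.get('port'))

    # Nova's per-instance metadata: flavor name, owner, fixed IPs.
    flavor = root.find('./metadata/nova:instance/nova:flavor', NOVA_NS)
    print('flavor:', flavor.get('name'))
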
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.585 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.586 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.587 351492 DEBUG nova.virt.libvirt.vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.587 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.588 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.588 351492 DEBUG os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.588 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.589 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.589 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.594 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.594 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee5c2dfc-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.595 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee5c2dfc-04, col_values=(('external_ids', {'iface-id': 'ee5c2dfc-04c3-400a-8073-6f2c65dcea03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:dd:2f', 'vm-uuid': 'a48b4084-369d-432a-9f47-9378cdcc011f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
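
The AddBridgeCommand/AddPortCommand/DbSetCommand records are ovsdbapp IDL transactions issued by the os-vif ovs plugin: a no-op bridge add (br-int already exists), then one transaction that adds the tap port and sets its external_ids in a single commit. Roughly the same pair of commands issued directly through ovsdbapp, assuming the local ovsdb-server socket path below (a placeholder):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Placeholder endpoint; os-vif talks to the host's ovsdb-server.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    ext_ids = {'iface-id': 'ee5c2dfc-04c3-400a-8073-6f2c65dcea03',
               'iface-status': 'active',
               'attached-mac': 'fa:16:3e:ff:dd:2f',
               'vm-uuid': 'a48b4084-369d-432a-9f47-9378cdcc011f'}

    # One transaction, two commands -- mirrors txn n=1 idx=0/idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapee5c2dfc-04', may_exist=True))
        txn.add(api.db_set('Interface', 'tapee5c2dfc-04',
                           ('external_ids', ext_ids)))
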
Dec 03 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.5989] manager: (tapee5c2dfc-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.597 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.599 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.606 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.607 351492 INFO os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')
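
"Successfully plugged vif" is os-vif's INFO line once the ovs plugin finishes. The library-level entry points are small; a hedged sketch of driving them directly, with the VIF and instance fields copied from the log (the field selection here is illustrative, not exhaustive):

    import os_vif
    from os_vif import objects

    os_vif.initialize()      # loads the ovs/linux-bridge/... plugins
    objects.register_all()

    vif = objects.vif.VIFOpenVSwitch(
        id='ee5c2dfc-04c3-400a-8073-6f2c65dcea03',
        address='fa:16:3e:ff:dd:2f',
        vif_name='tapee5c2dfc-04',
        bridge_name='br-int',
        plugin='ovs',
        network=objects.network.Network(
            id='2fdf214a-0f6e-4e5d-b449-e1988827937a', bridge='br-int'))
    instance = objects.instance_info.InstanceInfo(
        uuid='a48b4084-369d-432a-9f47-9378cdcc011f',
        name='instance-00000008')

    os_vif.plug(vif, instance)  # emits the "Successfully plugged vif" line
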
Dec 03 02:17:19 compute-0 kernel: tapee5c2dfc-04: entered promiscuous mode
Dec 03 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.7476] manager: (tapee5c2dfc-04): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Dec 03 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00116|binding|INFO|Claiming lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for this chassis.
Dec 03 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00117|binding|INFO|ee5c2dfc-04c3-400a-8073-6f2c65dcea03: Claiming fa:16:3e:ff:dd:2f 10.100.0.9
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.751 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.759 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.762 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a bound to our chassis
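
The metadata agent picked up the southbound Port_Binding update through an ovsdbapp row event (the Matched UPDATE record, event.py:43) and concluded the port is bound to its chassis. The general shape of such a watcher, with the table and event type taken from the log and an illustrative handler body (the exact constructor arguments are an assumption based on the triple printed above):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """React to Port_Binding updates, as in the Matched UPDATE line."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the same triple printed in the log record above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # Neutron's real handler provisions metadata for the datapath;
            # this just shows which lport landed on which chassis.
            print('bound', row.logical_port, 'chassis', row.chassis)

    # Registered against the southbound IDL with something like:
    #   sb_idl.notify_handler.watch_event(PortBindingUpdatedEvent())
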
Dec 03 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00118|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 ovn-installed in OVS
Dec 03 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00119|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 up in Southbound
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.771 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.772 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.767 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec 03 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.788 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.788 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2b313f-5c55-4a28-bcb1-ea3a20c1a8b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.791 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fdf214a-01 in ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
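
provision_datapath builds a veth pair with one end (tap2fdf214a-01) inside the ovnmeta- namespace and the other (tap2fdf214a-00) left in the root namespace to be plugged into br-int; the surrounding privsep: reply lines are the privileged netlink calls doing that work. A standalone sketch of the same plumbing with pyroute2, using the names from the log (requires root; idempotence and cleanup are omitted):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a'
    netns.create(NS)

    ip = IPRoute()
    # Create the veth pair in the root namespace...
    ip.link('add', ifname='tap2fdf214a-00', kind='veth',
            peer='tap2fdf214a-01')
    # ...then push the -01 end into the metadata namespace and bring
    # the root-namespace end up.
    idx = ip.link_lookup(ifname='tap2fdf214a-01')[0]
    ip.link('set', index=idx, net_ns_fd=NS)
    ip.link('set', index=ip.link_lookup(ifname='tap2fdf214a-00')[0],
            state='up')
    ip.close()
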
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.794 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fdf214a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.795 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a1707023-8fea-47b0-97e9-3b3cc19f73b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.796 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[bac3cb28-9f75-4aa5-b3c7-953bed4bb5d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.814 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[47395e40-f035-4e91-8147-1473a4e169a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 systemd-udevd[448160]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.8354] device (tapee5c2dfc-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.8367] device (tapee5c2dfc-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:17:19 compute-0 systemd-machined[138558]: New machine qemu-11-instance-00000008.
Dec 03 02:17:19 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000008.
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.851 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[36df3c71-1bfd-4225-ad7f-e9b8d3ebacd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.904 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c856f297-a283-428e-874c-41e2c381b374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.922 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[77b3cc1a-2ddc-45dc-b802-8d6f4c2b4cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 systemd-udevd[448164]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.9358] manager: (tap2fdf214a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.969 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac6955a-5699-4785-a576-84587d62be71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.976 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[04eb58fc-7482-4a16-81d0-4b2c267d80ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 NetworkManager[48912]: <info>  [1764728240.0271] device (tap2fdf214a-00): carrier: link connected
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.032 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[efb9d8e2-6564-4aed-843a-a938f0b60204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.054 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d33ddf24-bdbc-44bb-93ec-7abb6b314a0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707950, 'reachable_time': 21616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448253, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 podman[448168]: 2025-12-03 02:17:20.075799385 +0000 UTC m=+0.145989262 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:17:20 compute-0 podman[448182]: 2025-12-03 02:17:20.077638237 +0000 UTC m=+0.125115331 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.076 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[71d6c632-8b2a-4f92-93fe-2999375ef582]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:62d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707950, 'tstamp': 707950}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448276, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.097 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a1cb5aa4-6d86-47d5-9454-918e1e5eddc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707950, 'reachable_time': 21616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 448288, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 podman[448167]: 2025-12-03 02:17:20.10424491 +0000 UTC m=+0.184596824 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 02:17:20 compute-0 podman[448174]: 2025-12-03 02:17:20.139876518 +0000 UTC m=+0.173968523 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible)
Dec 03 02:17:20 compute-0 podman[448169]: 2025-12-03 02:17:20.139865127 +0000 UTC m=+0.198158667 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
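
The four podman records above are periodic health_status events for the telemetry and OVN containers, all reporting healthy with a zero failing streak. The same state can be read back on demand; a small sketch shelling out to podman inspect (container name taken from the log):

    import json
    import subprocess

    def health_status(name: str) -> str:
        """Return podman's health state for a container, e.g. 'healthy'."""
        out = subprocess.run(
            ['podman', 'inspect', '--format', 'json', name],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)[0]['State']['Health']['Status']

    print(health_status('ovn_controller'))
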
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.148 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e41fefe1-7fff-434a-92cd-55d2ed22558c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.224 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5918ea-2bea-47f4-989b-b2ad0097f2fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.225 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.225 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.226 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fdf214a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:20 compute-0 kernel: tap2fdf214a-00: entered promiscuous mode
Dec 03 02:17:20 compute-0 NetworkManager[48912]: <info>  [1764728240.2323] manager: (tap2fdf214a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.233 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.238 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fdf214a-00, col_values=(('external_ids', {'iface-id': 'c8314dfe-5b76-4819-9b3e-1cb76a272253'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
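Taken together, the three commands above (DelPortCommand, AddPortCommand, DbSetCommand) move the metadata tap off br-ex, attach it to br-int, and stamp it with the Neutron port UUID in external_ids:iface-id so that ovn-controller can bind the logical port. A sketch of the same sequence batched into a single ovsdbapp transaction; the ovsdb socket path is an assumption, the port name and UUID are copied from the log:

"""Sketch: replay the DelPort/AddPort/DbSet sequence with ovsdbapp."""
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local ovsdb socket

idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # if_exists/may_exist keep the commands idempotent, which is why the
    # DelPortCommand above could log "Transaction caused no change".
    txn.add(api.del_port('tap2fdf214a-00', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap2fdf214a-00', may_exist=True))
    txn.add(api.db_set('Interface', 'tap2fdf214a-00',
                       ('external_ids',
                        {'iface-id': 'c8314dfe-5b76-4819-9b3e-1cb76a272253'})))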
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.239 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:20 compute-0 ovn_controller[89134]: 2025-12-03T02:17:20Z|00120|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.241 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.254 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.254 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.256 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c1241da4-7650-47b4-8887-733bd1f60399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.257 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID 2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.257 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'env', 'PROCESS_TAG=haproxy-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fdf214a-0f6e-4e5d-b449-e1988827937a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
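The rootwrap invocation above forks haproxy inside the ovnmeta- network namespace against the config rendered a few lines earlier. The rendered file can be validated offline before launching it, using haproxy's -c check mode; a brief sketch, with the config path copied from the command line above:

"""Sketch: validate the generated metadata-proxy haproxy config."""
import subprocess

CFG = '/var/lib/neutron/ovn-metadata-proxy/2fdf214a-0f6e-4e5d-b449-e1988827937a.conf'

# `haproxy -c -f CFG` parses the configuration and exits 0 if it is valid,
# without starting a daemon.
result = subprocess.run(['haproxy', '-c', '-f', CFG],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
result.check_returncode()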
Dec 03 02:17:20 compute-0 ceph-mon[192821]: pgmap v1890: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 3 op/s
Dec 03 02:17:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/829905297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.313 351492 DEBUG nova.network.neutron [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.331 351492 INFO nova.compute.manager [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 2.34 seconds to deallocate network for instance.
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.378 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.379 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.427 351492 DEBUG nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.428 351492 DEBUG oslo_concurrency.lockutils [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.429 351492 DEBUG oslo_concurrency.lockutils [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.429 351492 DEBUG oslo_concurrency.lockutils [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.429 351492 DEBUG nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] No waiting events found dispatching network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.430 351492 WARNING nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received unexpected event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a for instance with vm_state deleted and task_state None.
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.430 351492 DEBUG nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-deleted-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.493 351492 DEBUG oslo_concurrency.processutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.755 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.756 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.757 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.757 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.759 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.760 351492 WARNING nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state reboot_started_hard.
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.762 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.762 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.763 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.766 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.767 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.767 351492 WARNING nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state reboot_started_hard.
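The "No waiting events found" / "Received unexpected event" pairs above are expected during a hard reboot: Neutron re-announces network-vif-plugged when the vif comes back up, but the compute manager only treats an external event as awaited if some task registered a waiter for that exact tag beforehand; otherwise the event is popped, logged, and dropped. A minimal sketch of that register-then-pop pattern, using plain threading primitives rather than Nova's internal InstanceEvents machinery:

"""Sketch: the register/pop pattern behind 'No waiting events found'."""
import threading

_lock = threading.Lock()
_waiters: dict[str, threading.Event] = {}   # event tag -> waiter

def prepare_for_event(tag: str) -> threading.Event:
    # A task that expects an event registers a waiter up front.
    with _lock:
        return _waiters.setdefault(tag, threading.Event())

def pop_event(tag: str) -> None:
    # The external-event handler pops the waiter; with nobody waiting,
    # the event is merely logged as unexpected.
    with _lock:
        waiter = _waiters.pop(tag, None)
    if waiter is None:
        print(f"WARNING: received unexpected event {tag}")
    else:
        waiter.set()

pop_event("network-vif-plugged-ee5c2dfc")   # no waiter registered -> warning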
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.788 351492 DEBUG nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.789 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Removed pending event for a48b4084-369d-432a-9f47-9378cdcc011f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.789 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728240.7871401, a48b4084-369d-432a-9f47-9378cdcc011f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.789 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Resumed (Lifecycle Event)
Dec 03 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.796231898 +0000 UTC m=+0.092795067 container create df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.798 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance rebooted successfully.
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.807 351492 DEBUG nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.814 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.824 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.849 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.850 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728240.7880769, a48b4084-369d-432a-9f47-9378cdcc011f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.850 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Started (Lifecycle Event)
Dec 03 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.759734685 +0000 UTC m=+0.056297884 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:17:20 compute-0 systemd[1]: Started libpod-conmon-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95.scope.
Dec 03 02:17:20 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.879 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.882 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.888 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaead3289de756df5c362e51f445187494ce76bdc94cf33a7cf5eb23ba12419/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.940456198 +0000 UTC m=+0.237019407 container init df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.948660271 +0000 UTC m=+0.245223450 container start df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 02:17:20 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : New worker (448410) forked
Dec 03 02:17:20 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : Loading success.
Dec 03 02:17:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:17:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735971706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 211 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 24 KiB/s wr, 23 op/s
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.047 351492 DEBUG oslo_concurrency.processutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
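The ceph df call above (0.555 s) is how the RBD image backend sizes the shared pool before refreshing the DISK_GB inventory. A sketch of extracting the same totals, assuming the top-level stats keys that ceph df --format=json emits:

"""Sketch: derive pool capacity the way the resource tracker does."""
import json
import subprocess

raw = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],       # command copied from the log
    capture_output=True, check=True, text=True).stdout

stats = json.loads(raw)['stats']             # assumed ceph df JSON layout
total_gb = stats['total_bytes'] / 1024 ** 3
avail_gb = stats['total_avail_bytes'] / 1024 ** 3
print(f"{avail_gb:.0f} GiB free of {total_gb:.0f} GiB")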
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.057 351492 DEBUG nova.compute.provider_tree [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.096 351492 DEBUG nova.scheduler.client.report [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
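That inventory dict is what placement budgets against: per resource class, the usable capacity is (total - reserved) * allocation_ratio. A worked check with the logged numbers:

"""Worked check: effective placement capacity from the logged inventory."""
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2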
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.131 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.140 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.141 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.166 351492 INFO nova.scheduler.client.report [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Deleted allocations for instance 4f50e501-f565-4e1f-aa02-df921702eff9
Dec 03 02:17:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3735971706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.295 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.297 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1975 Content-Type: application/json Date: Wed, 03 Dec 2025 02:17:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7765a678-25c1-4978-ab04-cfd159ea1d96 x-openstack-request-id: req-7765a678-25c1-4978-ab04-cfd159ea1d96 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.298 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592", "name": "tempest-TestNetworkBasicOps-server-2141861820", "status": "ACTIVE", "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "user_id": "abdbefadac2a4d98bd33ed8a1a60ff75", "metadata": {}, "hostId": "4b1a91bac1182d0f1d9a1d34a268fb1305a907d06d3942a0b7e61f82", "image": {"id": "ef773cba-72f0-486f-b5e5-792ff26bb688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ef773cba-72f0-486f-b5e5-792ff26bb688"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:16:31Z", "updated": "2025-12-03T02:16:51Z", "addresses": {"tempest-network-smoke--628634883": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ed:5c:3e"}, {"version": 4, "addr": "192.168.122.193", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ed:5c:3e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1925623369", "OS-SRV-USG:launched_at": "2025-12-03T02:16:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1550122294"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.298 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 used request id req-7765a678-25c1-4978-ab04-cfd159ea1d96 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.300 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'name': 'tempest-TestNetworkBasicOps-server-2141861820', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'user_id': 'abdbefadac2a4d98bd33ed8a1a60ff75', 'hostId': '4b1a91bac1182d0f1d9a1d34a268fb1305a907d06d3942a0b7e61f82', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
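The instance record above is built from a plain Nova servers GET (the RESP BODY a few lines earlier), flattened down to the flavor, image, and OS-EXT-* attributes the pollsters need. A hedged sketch of the same lookup with novaclient; the auth parameters are placeholders, only the server UUID is taken from the log:

"""Sketch: fetch the server record ceilometer's discovery flattens."""
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client

auth = v3.Password(auth_url='https://keystone.example:5000/v3',  # placeholder
                   username='ceilometer', password='secret',     # placeholders
                   project_name='service',
                   user_domain_name='Default', project_domain_name='Default')
nova = client.Client('2.1', session=session.Session(auth=auth))

server = nova.servers.get('8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592')
print(server.name, server.status,
      getattr(server, 'OS-EXT-SRV-ATTR:instance_name'))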
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:17:21.301357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592: ceilometer.compute.pollsters.NoVolumeException
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
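memory.usage is read from the guest balloon driver's statistics, and immediately after the hard reboot those counters are not populated yet, so the inspector has no volume to report and the pollster skips the sample with NoVolumeException rather than emitting a bogus value. A sketch of the underlying libvirt call, with the domain name taken from the log; whether the 'unused'/'available' keys appear depends on the balloon stats period being set:

"""Sketch: the libvirt memory stats behind the memory.usage pollster."""
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('instance-0000000a')   # domain name from the log
stats = dom.memoryStats()
# The balloon driver fills in 'available' and 'unused' once it is running;
# until then (as right after this reboot) the keys are simply absent.
if 'available' in stats and 'unused' in stats:
    print('memory.usage MiB:', (stats['available'] - stats['unused']) / 1024)
else:
    print('balloon stats not yet available:', sorted(stats))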
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:17:21.330232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.335 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 / tapae5db7e6-7a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.335 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:17:21.336861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:17:21.337705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:17:21.338752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:17:21.339850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:17:21.340925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.355 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.355 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>]
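The libvirt inspector only exposes cumulative VNIC counters, so the legacy network.*.rate pollsters cannot be served at all; raising PollsterPermanentError blacklists the resource for that pollster instead of retrying it every interval. Rates are meant to be derived downstream from successive cumulative samples, roughly like this:

"""Sketch: deriving a bytes/s rate from two cumulative samples."""
from datetime import datetime, timedelta

def rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
    # Clamp at 0 to tolerate counter resets, e.g. the hard reboot above.
    elapsed = (cur_ts - prev_ts).total_seconds()
    return max(cur_bytes - prev_bytes, 0) / elapsed if elapsed > 0 else 0.0

t0 = datetime(2025, 12, 3, 2, 17, 21)
print(rate(90, t0, 4186, t0 + timedelta(seconds=10)))   # -> 409.6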
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:17:21.356384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:17:21.357696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.395 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.395 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.latency volume: 1993141923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.latency volume: 3865639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:17:21.396621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:17:21.397939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:17:21.399316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:17:21.400663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:17:21.401332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:17:21.403013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:17:21.404123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/cpu volume: 28880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:17:21.405391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:17:21.406809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:17:21.407758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:17:21.408732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:17:21.409421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:17:21.410415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:17:21.411523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>]
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:17:21.412625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:17:21.413337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.419 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.419 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.419 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.426 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:17:21.414100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.461 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.461 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.462 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:17:22 compute-0 ceph-mon[192821]: pgmap v1891: 321 pgs: 321 active+clean; 211 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 24 KiB/s wr, 23 op/s
Dec 03 02:17:22 compute-0 ovn_controller[89134]: 2025-12-03T02:17:22Z|00121|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:17:22 compute-0 ovn_controller[89134]: 2025-12-03T02:17:22Z|00122|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.407 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.863 351492 DEBUG nova.compute.manager [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.864 351492 DEBUG oslo_concurrency.lockutils [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.866 351492 DEBUG oslo_concurrency.lockutils [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.866 351492 DEBUG oslo_concurrency.lockutils [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.867 351492 DEBUG nova.compute.manager [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.868 351492 WARNING nova.compute.manager [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state None.
Dec 03 02:17:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 22 KiB/s wr, 34 op/s
Dec 03 02:17:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.607 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.895 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.933 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.934 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.934 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.935 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.936 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.937 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.938 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.959 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:17:24 compute-0 ceph-mon[192821]: pgmap v1892: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 22 KiB/s wr, 34 op/s
Dec 03 02:17:24 compute-0 nova_compute[351485]: 2025-12-03 02:17:24.599 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 23 KiB/s wr, 46 op/s
Dec 03 02:17:26 compute-0 ceph-mon[192821]: pgmap v1893: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 23 KiB/s wr, 46 op/s
Dec 03 02:17:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 102 op/s
Dec 03 02:17:27 compute-0 nova_compute[351485]: 2025-12-03 02:17:27.390 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:27 compute-0 nova_compute[351485]: 2025-12-03 02:17:27.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.308121) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248308169, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 718, "num_deletes": 250, "total_data_size": 907293, "memory_usage": 921376, "flush_reason": "Manual Compaction"}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248318977, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 583360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38092, "largest_seqno": 38809, "table_properties": {"data_size": 580214, "index_size": 1054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8380, "raw_average_key_size": 20, "raw_value_size": 573620, "raw_average_value_size": 1405, "num_data_blocks": 47, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728189, "oldest_key_time": 1764728189, "file_creation_time": 1764728248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 10966 microseconds, and 6156 cpu microseconds.
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.319083) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 583360 bytes OK
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.319110) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.322037) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.322062) EVENT_LOG_v1 {"time_micros": 1764728248322054, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.322081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 903601, prev total WAL file size 903601, number of live WAL files 2.
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.324024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373535' seq:0, type:0; will stop at (end)
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(569KB)], [86(10MB)]
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248324139, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11221806, "oldest_snapshot_seqno": -1}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5730 keys, 8237751 bytes, temperature: kUnknown
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248389690, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 8237751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8201354, "index_size": 20991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 145593, "raw_average_key_size": 25, "raw_value_size": 8099645, "raw_average_value_size": 1413, "num_data_blocks": 864, "num_entries": 5730, "num_filter_entries": 5730, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.389942) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 8237751 bytes
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.392892) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.0 rd, 125.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.1 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(33.4) write-amplify(14.1) OK, records in: 6216, records dropped: 486 output_compression: NoCompression
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.392923) EVENT_LOG_v1 {"time_micros": 1764728248392910, "job": 50, "event": "compaction_finished", "compaction_time_micros": 65625, "compaction_time_cpu_micros": 38584, "output_level": 6, "num_output_files": 1, "total_output_size": 8237751, "num_input_records": 6216, "num_output_records": 5730, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248393268, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248398019, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.323331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:17:28 compute-0 ceph-mon[192821]: pgmap v1894: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 102 op/s
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:17:28
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', '.rgw.root', 'volumes', 'backups', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 03 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:17:28 compute-0 nova_compute[351485]: 2025-12-03 02:17:28.621 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:28 compute-0 ovn_controller[89134]: 2025-12-03T02:17:28Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:5c:3e 10.100.0.3
Dec 03 02:17:28 compute-0 ovn_controller[89134]: 2025-12-03T02:17:28Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:5c:3e 10.100.0.3
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 99 op/s
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:17:29 compute-0 ovn_controller[89134]: 2025-12-03T02:17:29Z|00123|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:17:29 compute-0 ovn_controller[89134]: 2025-12-03T02:17:29Z|00124|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:17:29 compute-0 nova_compute[351485]: 2025-12-03 02:17:29.370 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:29 compute-0 nova_compute[351485]: 2025-12-03 02:17:29.602 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:29 compute-0 podman[158098]: time="2025-12-03T02:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec 03 02:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9116 "" "Go-http-client/1.1"
Dec 03 02:17:30 compute-0 ceph-mon[192821]: pgmap v1895: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 99 op/s
Dec 03 02:17:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 204 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 141 op/s
Dec 03 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:17:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:17:31 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.994 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728236.9932077, 4f50e501-f565-4e1f-aa02-df921702eff9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.994 351492 INFO nova.compute.manager [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Stopped (Lifecycle Event)
Dec 03 02:17:32 compute-0 nova_compute[351485]: 2025-12-03 02:17:32.020 351492 DEBUG nova.compute.manager [None req-df58e7e4-40b3-4b7f-bf52-2929f5e9c073 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:17:32 compute-0 ceph-mon[192821]: pgmap v1896: 321 pgs: 321 active+clean; 204 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 141 op/s
Dec 03 02:17:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Dec 03 02:17:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:33 compute-0 nova_compute[351485]: 2025-12-03 02:17:33.594 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:33 compute-0 nova_compute[351485]: 2025-12-03 02:17:33.595 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:17:33 compute-0 nova_compute[351485]: 2025-12-03 02:17:33.620 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:34 compute-0 sudo[448423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:34 compute-0 sudo[448423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:34 compute-0 sudo[448423]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:34 compute-0 sudo[448448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:17:34 compute-0 sudo[448448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:34 compute-0 sudo[448448]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:34 compute-0 ceph-mon[192821]: pgmap v1897: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Dec 03 02:17:34 compute-0 sudo[448473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:34 compute-0 sudo[448473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:34 compute-0 sudo[448473]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:34 compute-0 nova_compute[351485]: 2025-12-03 02:17:34.605 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:34 compute-0 sudo[448498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:17:34 compute-0 sudo[448498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:34 compute-0 nova_compute[351485]: 2025-12-03 02:17:34.845 351492 INFO nova.compute.manager [None req-a2cc65d1-4b4a-4903-8aa7-3a0a427ccfd3 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Get console output
Dec 03 02:17:34 compute-0 nova_compute[351485]: 2025-12-03 02:17:34.858 351492 INFO oslo.privsep.daemon [None req-a2cc65d1-4b4a-4903-8aa7-3a0a427ccfd3 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp9a11x_tz/privsep.sock']
Dec 03 02:17:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 03 02:17:35 compute-0 sudo[448498]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:17:35 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cf8f9eb7-4f26-4adb-87f8-8a418de15bb6 does not exist
Dec 03 02:17:35 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fd37915f-c69b-40d2-90fc-f856f6355b57 does not exist
Dec 03 02:17:35 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 56f35786-796c-4444-b3d6-0658c4579382 does not exist
Dec 03 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.365 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:17:35 compute-0 sudo[448558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:35 compute-0 sudo[448558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:35 compute-0 sudo[448558]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:17:35 compute-0 sudo[448583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:17:35 compute-0 sudo[448583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:35 compute-0 sudo[448583]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.663 351492 INFO oslo.privsep.daemon [None req-a2cc65d1-4b4a-4903-8aa7-3a0a427ccfd3 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Spawned new privsep daemon via rootwrap
Dec 03 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.530 448603 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 03 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.535 448603 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 03 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.537 448603 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 03 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.538 448603 INFO oslo.privsep.daemon [-] privsep daemon running as pid 448603
Dec 03 02:17:35 compute-0 sudo[448609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:35 compute-0 sudo[448609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:35 compute-0 sudo[448609]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.764 448603 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 03 02:17:35 compute-0 sudo[448635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:17:35 compute-0 sudo[448635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.382131691 +0000 UTC m=+0.099673922 container create a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.329126871 +0000 UTC m=+0.046669142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:17:36 compute-0 systemd[1]: Started libpod-conmon-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope.
Dec 03 02:17:36 compute-0 ceph-mon[192821]: pgmap v1898: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 03 02:17:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.541719205 +0000 UTC m=+0.259261406 container init a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.562849203 +0000 UTC m=+0.280391384 container start a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.566928878 +0000 UTC m=+0.284471049 container attach a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:17:36 compute-0 stupefied_hugle[448713]: 167 167
Dec 03 02:17:36 compute-0 systemd[1]: libpod-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope: Deactivated successfully.
Dec 03 02:17:36 compute-0 conmon[448713]: conmon a92c2086b314d2aadbca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope/container/memory.events
Dec 03 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.580272986 +0000 UTC m=+0.297815237 container died a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 02:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c22174a1ec5d6b7758900d7977c7fe33619f26cdc87dc488b319f9f00ed2f97-merged.mount: Deactivated successfully.
Dec 03 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.658480238 +0000 UTC m=+0.376022459 container remove a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:17:36 compute-0 systemd[1]: libpod-conmon-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope: Deactivated successfully.
Dec 03 02:17:36 compute-0 podman[448736]: 2025-12-03 02:17:36.926682647 +0000 UTC m=+0.073920963 container create 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:17:36 compute-0 podman[448736]: 2025-12-03 02:17:36.900630979 +0000 UTC m=+0.047869365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:17:37 compute-0 systemd[1]: Started libpod-conmon-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope.
Dec 03 02:17:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 03 02:17:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:37 compute-0 podman[448736]: 2025-12-03 02:17:37.119929624 +0000 UTC m=+0.267167970 container init 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:17:37 compute-0 podman[448736]: 2025-12-03 02:17:37.135273888 +0000 UTC m=+0.282512224 container start 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:17:37 compute-0 podman[448736]: 2025-12-03 02:17:37.142095551 +0000 UTC m=+0.289333877 container attach 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:17:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:38 compute-0 inspiring_taussig[448751]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:17:38 compute-0 inspiring_taussig[448751]: --> relative data size: 1.0
Dec 03 02:17:38 compute-0 inspiring_taussig[448751]: --> All data devices are unavailable
Dec 03 02:17:38 compute-0 systemd[1]: libpod-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope: Deactivated successfully.
Dec 03 02:17:38 compute-0 podman[448736]: 2025-12-03 02:17:38.434857817 +0000 UTC m=+1.582096163 container died 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:17:38 compute-0 systemd[1]: libpod-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope: Consumed 1.213s CPU time.
Dec 03 02:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88-merged.mount: Deactivated successfully.
Dec 03 02:17:38 compute-0 ceph-mon[192821]: pgmap v1899: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 03 02:17:38 compute-0 podman[448736]: 2025-12-03 02:17:38.541807543 +0000 UTC m=+1.689045859 container remove 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:17:38 compute-0 systemd[1]: libpod-conmon-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope: Deactivated successfully.
Dec 03 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.594 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:38 compute-0 sudo[448635]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.627 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:38 compute-0 sudo[448791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:38 compute-0 sudo[448791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:38 compute-0 sudo[448791]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015194363726639317 of space, bias 1.0, pg target 0.4558309117991795 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.813 351492 DEBUG nova.compute.manager [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.814 351492 DEBUG nova.compute.manager [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing instance network info cache due to event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.816 351492 DEBUG oslo_concurrency.lockutils [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.817 351492 DEBUG oslo_concurrency.lockutils [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.818 351492 DEBUG nova.network.neutron [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:17:38 compute-0 sudo[448816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:17:38 compute-0 sudo[448816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:38 compute-0 sudo[448816]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:39 compute-0 sudo[448841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:39 compute-0 sudo[448841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:39 compute-0 sudo[448841]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 03 02:17:39 compute-0 sudo[448866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:17:39 compute-0 sudo[448866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
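The sudo COMMAND above is cephadm shelling out to ceph-volume inside a throwaway container; the JSON the container prints (shown further below under stoic_driscoll) is read back by the caller. A minimal sketch of that shell-out-and-parse step, assuming the same command line with the cephadm copy's path and image digest elided:

    import json
    import subprocess

    def ceph_volume_lvm_list(fsid: str) -> dict:
        # Mirrors the logged command; the versioned cephadm path and
        # --image digest from the log are elided here.
        cmd = ["sudo", "cephadm", "ceph-volume", "--fsid", fsid,
               "--", "lvm", "list", "--format", "json"]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        return json.loads(out)  # keyed by OSD id, as in the output further below

    # lvs = ceph_volume_lvm_list("3765feb2-36f8-5b86-b74c-64e9221f9c4c")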
Dec 03 02:17:39 compute-0 nova_compute[351485]: 2025-12-03 02:17:39.430 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:39 compute-0 ceph-mon[192821]: pgmap v1900: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 03 02:17:39 compute-0 nova_compute[351485]: 2025-12-03 02:17:39.609 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:39 compute-0 podman[448929]: 2025-12-03 02:17:39.826360727 +0000 UTC m=+0.107144542 container create d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:17:39 compute-0 podman[448929]: 2025-12-03 02:17:39.784768631 +0000 UTC m=+0.065552446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:17:39 compute-0 systemd[1]: Started libpod-conmon-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope.
Dec 03 02:17:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:17:40 compute-0 podman[448929]: 2025-12-03 02:17:40.0010599 +0000 UTC m=+0.281843745 container init d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:17:40 compute-0 podman[448929]: 2025-12-03 02:17:40.014439299 +0000 UTC m=+0.295223124 container start d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:17:40 compute-0 podman[448929]: 2025-12-03 02:17:40.022189778 +0000 UTC m=+0.302973573 container attach d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:17:40 compute-0 systemd[1]: libpod-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope: Deactivated successfully.
Dec 03 02:17:40 compute-0 optimistic_ritchie[448946]: 167 167
Dec 03 02:17:40 compute-0 conmon[448946]: conmon d9435e63e8a86fe3e6e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope/container/memory.events
Dec 03 02:17:40 compute-0 podman[448945]: 2025-12-03 02:17:40.069004262 +0000 UTC m=+0.108023347 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec 03 02:17:40 compute-0 podman[448949]: 2025-12-03 02:17:40.081169586 +0000 UTC m=+0.111059432 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:17:40 compute-0 podman[448982]: 2025-12-03 02:17:40.110496356 +0000 UTC m=+0.056835709 container died d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:17:40 compute-0 podman[448947]: 2025-12-03 02:17:40.128096263 +0000 UTC m=+0.161116498 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
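The health_status entries above come from podman's periodic healthchecks; each container's config_data embeds a 'healthcheck' with a test command and a host directory that is bind-mounted at /openstack inside the container. A minimal sketch of turning such a config into podman arguments, assuming the dict shape as logged (the helper is illustrative):

    # config_data as logged for ovn_metadata_agent, trimmed to the healthcheck part.
    config_data = {
        "healthcheck": {
            "mount": "/var/lib/openstack/healthchecks/ovn_metadata_agent",
            "test": "/openstack/healthcheck",
        },
    }

    def healthcheck_args(cfg: dict) -> list[str]:
        hc = cfg["healthcheck"]
        # The mount directory lands at /openstack in the container, so the
        # test command resolves to a script shipped from that host path.
        return ["-v", f"{hc['mount']}:/openstack:ro,z", "--health-cmd", hc["test"]]

    print(healthcheck_args(config_data))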
Dec 03 02:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e99eadc854e482241e0c61b5d2ca21ac9c2c67952052807764979a8473bb825-merged.mount: Deactivated successfully.
Dec 03 02:17:40 compute-0 podman[448982]: 2025-12-03 02:17:40.166896561 +0000 UTC m=+0.113235894 container remove d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:17:40 compute-0 systemd[1]: libpod-conmon-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope: Deactivated successfully.
Dec 03 02:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.0 total, 600.0 interval
                                            Cumulative writes: 8594 writes, 38K keys, 8594 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
                                            Cumulative WAL: 8594 writes, 8594 syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1360 writes, 6401 keys, 1360 commit groups, 1.0 writes per commit group, ingest: 8.72 MB, 0.01 MB/s
                                            Interval WAL: 1360 writes, 1360 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.4      0.49              0.22        25    0.020       0      0       0.0       0.0
                                              L6      1/0    7.86 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8    136.5    111.6      1.63              0.79        24    0.068    122K    13K       0.0       0.0
                                             Sum      1/0    7.86 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8    104.8    108.5      2.12              1.01        49    0.043    122K    13K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    119.3    117.7      0.51              0.24        12    0.042     36K   3090       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    136.5    111.6      1.63              0.79        24    0.068    122K    13K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.8      0.49              0.22        24    0.020       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 3600.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.047, interval 0.009
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.22 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 2.1 seconds
                                            Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 25.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000142 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1629,24.26 MB,7.8756%) FilterBlock(50,347.55 KB,0.110195%) IndexBlock(50,602.67 KB,0.191087%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
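The stats dump above is enough to estimate the monitor store's write amplification by hand: user ingest is 0.05 GB cumulative, flushes wrote 0.047 GB, and compactions wrote another 0.22 GB. A small sketch of that arithmetic, with the numbers copied from the dump and the usual (flush + compaction) / ingest ratio:

    # Figures from the DUMPING STATS block above.
    ingest_gb = 0.05            # "Cumulative writes: ... ingest: 0.05 GB"
    flush_gb = 0.047            # "Flush(GB): cumulative 0.047"
    compaction_write_gb = 0.22  # "Cumulative compaction: 0.22 GB write"

    # Overall write amplification: bytes the DB wrote per byte the user wrote.
    w_amp = (flush_gb + compaction_write_gb) / ingest_gb
    print(f"write amplification ~ {w_amp:.1f}x")  # ~5.3x, near the Sum W-Amp of 4.8

The small gap against the table's Sum W-Amp of 4.8 is expected: the table's figure is computed per compaction input, while this back-of-the-envelope version charges flush bytes too.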
Dec 03 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.476370227 +0000 UTC m=+0.102520561 container create ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.429016317 +0000 UTC m=+0.055166661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:17:40 compute-0 systemd[1]: Started libpod-conmon-ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d.scope.
Dec 03 02:17:40 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.651656177 +0000 UTC m=+0.277806531 container init ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.661790723 +0000 UTC m=+0.287941057 container start ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.668713839 +0000 UTC m=+0.294864183 container attach ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 02:17:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]: {
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:     "0": [
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:         {
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "devices": [
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "/dev/loop3"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             ],
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_name": "ceph_lv0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_size": "21470642176",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "name": "ceph_lv0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "tags": {
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cluster_name": "ceph",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.crush_device_class": "",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.encrypted": "0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osd_id": "0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.type": "block",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.vdo": "0"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             },
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "type": "block",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "vg_name": "ceph_vg0"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:         }
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:     ],
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:     "1": [
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:         {
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "devices": [
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "/dev/loop4"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             ],
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_name": "ceph_lv1",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_size": "21470642176",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "name": "ceph_lv1",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "tags": {
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cluster_name": "ceph",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.crush_device_class": "",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.encrypted": "0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osd_id": "1",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.type": "block",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.vdo": "0"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             },
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "type": "block",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "vg_name": "ceph_vg1"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:         }
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:     ],
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:     "2": [
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:         {
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "devices": [
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "/dev/loop5"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             ],
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_name": "ceph_lv2",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_size": "21470642176",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "name": "ceph_lv2",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "tags": {
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.cluster_name": "ceph",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.crush_device_class": "",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.encrypted": "0",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osd_id": "2",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.type": "block",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:                 "ceph.vdo": "0"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             },
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "type": "block",
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:             "vg_name": "ceph_vg2"
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:         }
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]:     ]
Dec 03 02:17:41 compute-0 stoic_driscoll[449043]: }
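In the lvm list output above, each LV carries its metadata twice: once as the flat comma-separated lv_tags string and once as the parsed tags mapping. A minimal sketch of the parse that turns one into the other, splitting each comma-separated pair on its first '=' (this assumes tag values contain no commas, which holds for every tag shown above):

    def parse_lv_tags(lv_tags: str) -> dict[str, str]:
        # "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.osd_id=0,..." -> dict
        tags = {}
        for pair in lv_tags.split(","):
            key, _, value = pair.partition("=")  # split on the first '=' only
            tags[key] = value
        return tags

    tags = parse_lv_tags("ceph.cluster_name=ceph,ceph.osd_id=0,ceph.encrypted=0")
    assert tags["ceph.osd_id"] == "0"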
Dec 03 02:17:41 compute-0 systemd[1]: libpod-ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d.scope: Deactivated successfully.
Dec 03 02:17:41 compute-0 podman[449027]: 2025-12-03 02:17:41.582720759 +0000 UTC m=+1.208871093 container died ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c-merged.mount: Deactivated successfully.
Dec 03 02:17:41 compute-0 podman[449027]: 2025-12-03 02:17:41.685060235 +0000 UTC m=+1.311210549 container remove ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:17:41 compute-0 systemd[1]: libpod-conmon-ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d.scope: Deactivated successfully.
Dec 03 02:17:41 compute-0 sudo[448866]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:41 compute-0 sudo[449066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:41 compute-0 sudo[449066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:41 compute-0 sudo[449066]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:41 compute-0 nova_compute[351485]: 2025-12-03 02:17:41.916 351492 DEBUG nova.network.neutron [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updated VIF entry in instance network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:17:41 compute-0 nova_compute[351485]: 2025-12-03 02:17:41.918 351492 DEBUG nova.network.neutron [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:41 compute-0 nova_compute[351485]: 2025-12-03 02:17:41.952 351492 DEBUG oslo_concurrency.lockutils [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
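The "Updating instance_info_cache" line above carries the full network_info model for the port: network, subnets, fixed IPs, and the OVS/OVN binding details. A minimal sketch of pulling the fixed addresses out of that structure, assuming the JSON shape exactly as logged:

    import json

    def fixed_ips(network_info_json: str) -> list[str]:
        # network_info is a list of VIFs; each has network.subnets[].ips[].
        vifs = json.loads(network_info_json)
        return [ip["address"]
                for vif in vifs
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]]

    # For the cache entry logged above this returns ["10.100.0.3"].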
Dec 03 02:17:42 compute-0 sudo[449091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:17:42 compute-0 sudo[449091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:42 compute-0 sudo[449091]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:42 compute-0 ceph-mon[192821]: pgmap v1901: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 03 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.134 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.136 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:17:42 compute-0 sudo[449116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:42 compute-0 nova_compute[351485]: 2025-12-03 02:17:42.139 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.147 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.148 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
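The two "Delaying updating chassis table" lines above show the agent spreading its reaction to SB_Global nb_cfg bumps over a varying interval (6 s, then 8 s) so a fleet of nodes does not write back to the southbound DB simultaneously. A minimal sketch of that jittered-update pattern; the randomized window is an assumption suggested by the differing delays, and the window size here is illustrative:

    import random
    import time

    def delayed_chassis_update(update_fn, max_delay: int = 10) -> None:
        delay = random.randint(0, max_delay)  # e.g. the 6 s and 8 s seen above
        print(f"Delaying updating chassis table for {delay} seconds")
        time.sleep(delay)
        update_fn()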
Dec 03 02:17:42 compute-0 nova_compute[351485]: 2025-12-03 02:17:42.149 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:42 compute-0 sudo[449116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:42 compute-0 sudo[449116]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:42 compute-0 sudo[449141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:17:42 compute-0 sudo[449141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:42 compute-0 podman[449204]: 2025-12-03 02:17:42.879265282 +0000 UTC m=+0.086749555 container create 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:17:42 compute-0 podman[449204]: 2025-12-03 02:17:42.844294643 +0000 UTC m=+0.051778956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:17:42 compute-0 systemd[1]: Started libpod-conmon-548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf.scope.
Dec 03 02:17:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.048190992 +0000 UTC m=+0.255675285 container init 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:17:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 653 KiB/s wr, 22 op/s
Dec 03 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.0654579 +0000 UTC m=+0.272942153 container start 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.071888462 +0000 UTC m=+0.279372785 container attach 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:17:43 compute-0 zealous_kare[449219]: 167 167
Dec 03 02:17:43 compute-0 systemd[1]: libpod-548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf.scope: Deactivated successfully.
Dec 03 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.079003994 +0000 UTC m=+0.286488247 container died 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 02:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e9224f9d4c209f7571da0b8f4467665f7bf40c0fe4cf79ba5bc99c02c0b1baa-merged.mount: Deactivated successfully.
Dec 03 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.176468651 +0000 UTC m=+0.383952904 container remove 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:17:43 compute-0 systemd[1]: libpod-conmon-548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf.scope: Deactivated successfully.
Dec 03 02:17:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.419635221 +0000 UTC m=+0.077567316 container create e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.388218942 +0000 UTC m=+0.046151067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:17:43 compute-0 systemd[1]: Started libpod-conmon-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope.
Dec 03 02:17:43 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.622887242 +0000 UTC m=+0.280819347 container init e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:17:43 compute-0 nova_compute[351485]: 2025-12-03 02:17:43.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.638845653 +0000 UTC m=+0.296777738 container start e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.645905122 +0000 UTC m=+0.303837217 container attach e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:17:44 compute-0 ceph-mon[192821]: pgmap v1902: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 653 KiB/s wr, 22 op/s
Dec 03 02:17:44 compute-0 nova_compute[351485]: 2025-12-03 02:17:44.612 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]: {
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "osd_id": 2,
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "type": "bluestore"
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:     },
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "osd_id": 1,
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "type": "bluestore"
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:     },
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "osd_id": 0,
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:         "type": "bluestore"
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]:     }
Dec 03 02:17:44 compute-0 intelligent_mclaren[449259]: }
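Reassembled, the stdout logged above by intelligent_mclaren is a single JSON object keyed by osd_uuid, one entry per OSD on this host; the shape matches `ceph-volume raw list --format json` output (an inference — the log never shows the command line). A minimal Python sketch of consuming it, quoting one of the three logged entries verbatim:

    import json

    # One of the three entries logged above; the other two have the same shape.
    raw = """{
        "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
            "type": "bluestore"
        }
    }"""

    osds = json.loads(raw)
    for uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']} -> {info['device']} ({info['type']})")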
Dec 03 02:17:44 compute-0 systemd[1]: libpod-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope: Deactivated successfully.
Dec 03 02:17:44 compute-0 podman[449243]: 2025-12-03 02:17:44.893458769 +0000 UTC m=+1.551390894 container died e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:17:44 compute-0 systemd[1]: libpod-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope: Consumed 1.239s CPU time.
Dec 03 02:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89-merged.mount: Deactivated successfully.
Dec 03 02:17:44 compute-0 podman[449243]: 2025-12-03 02:17:44.988832208 +0000 UTC m=+1.646764303 container remove e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 02:17:45 compute-0 systemd[1]: libpod-conmon-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope: Deactivated successfully.
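The full e6d28ba8… lifecycle above (init, start, attach, died, remove, then the conmon scope closing) happens under one podman PID in about 1.5 s — the signature of a one-shot `podman run --rm`. A hypothetical reconstruction of the invocation, assuming from the printed JSON that the payload was a ceph-volume inventory (the actual in-container command is not in the log):

    import subprocess

    # Hypothetical: run the same Ceph image once, print the raw-device
    # inventory, and remove the container on exit (--rm).
    subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "ceph-volume", "raw", "list"],
        check=True,
    )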
Dec 03 02:17:45 compute-0 sudo[449141]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:17:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:17:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:17:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:17:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 06d157ad-47df-4f84-9e14-2b2fd428c222 does not exist
Dec 03 02:17:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d07edb2f-98f2-4d70-8ec3-fe6aaf2a20a2 does not exist
Dec 03 02:17:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec 03 02:17:45 compute-0 sudo[449305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:17:45 compute-0 sudo[449305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:45 compute-0 sudo[449305]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:45 compute-0 sudo[449330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:17:45 compute-0 sudo[449330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:17:45 compute-0 sudo[449330]: pam_unix(sudo:session): session closed for user root
Dec 03 02:17:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:17:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:17:46 compute-0 ceph-mon[192821]: pgmap v1903: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec 03 02:17:46 compute-0 podman[449355]: 2025-12-03 02:17:46.8851591 +0000 UTC m=+0.137522562 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 02:17:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:17:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958934642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:17:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:17:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958934642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:17:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec 03 02:17:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1958934642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:17:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1958934642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
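Both audited mon_commands are client.openstack polling capacity: a cluster-wide `df` plus the quota on the volumes pool. A sketch reproducing the same two queries from the CLI under the identity shown in the audit lines (the key names used on the parsed `df` assume the usual `ceph df --format json` layout):

    import json
    import subprocess

    def ceph_json(*args):
        # Same identity/conf the audit lines show for client.openstack.
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    df = ceph_json("df")                                  # {"prefix":"df"}
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota)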
Dec 03 02:17:48 compute-0 ceph-mon[192821]: pgmap v1904: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec 03 02:17:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:48.140 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:48 compute-0 nova_compute[351485]: 2025-12-03 02:17:48.632 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 3.5 KiB/s wr, 1 op/s
Dec 03 02:17:49 compute-0 nova_compute[351485]: 2025-12-03 02:17:49.443 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:49 compute-0 nova_compute[351485]: 2025-12-03 02:17:49.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:17:49 compute-0 nova_compute[351485]: 2025-12-03 02:17:49.615 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:49 compute-0 ovn_controller[89134]: 2025-12-03T02:17:49Z|00125|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:17:49 compute-0 ovn_controller[89134]: 2025-12-03T02:17:49Z|00126|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.064 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:50 compute-0 ceph-mon[192821]: pgmap v1905: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 3.5 KiB/s wr, 1 op/s
Dec 03 02:17:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:50.153 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.734 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.734 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.757 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.843 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.843 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.857 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.857 351492 INFO nova.compute.claims [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:17:50 compute-0 podman[449384]: 2025-12-03 02:17:50.885000196 +0000 UTC m=+0.114323904 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:17:50 compute-0 podman[449376]: 2025-12-03 02:17:50.900102353 +0000 UTC m=+0.143358836 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 02:17:50 compute-0 podman[449377]: 2025-12-03 02:17:50.901131413 +0000 UTC m=+0.137000617 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:17:50 compute-0 podman[449375]: 2025-12-03 02:17:50.90632829 +0000 UTC m=+0.161464779 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 03 02:17:50 compute-0 podman[449378]: 2025-12-03 02:17:50.916901329 +0000 UTC m=+0.152065833 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, name=ubi9, vendor=Red Hat, Inc.)
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.017 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 3.5 KiB/s wr, 1 op/s
Dec 03 02:17:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:17:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115272923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.545 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.560 351492 DEBUG nova.compute.provider_tree [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.579 351492 DEBUG nova.scheduler.client.report [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
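The inventory dict above is what the resource tracker reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 32 VCPU, 7167 MB of RAM and 52.2 GB of disk. As a worked check:

    # Values copied from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2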
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.604 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.606 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.663 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.663 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.691 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.709 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.827 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.829 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.830 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Creating image(s)
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.888 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.967 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.032 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.047 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.081 351492 DEBUG nova.policy [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abdbefadac2a4d98bd33ed8a1a60ff75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.136 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
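The qemu-img probe runs under oslo's prlimit wrapper: --as caps the child's address space at 1 GiB and --cpu caps its CPU time at 30 s, so a malformed base image cannot hang or balloon the agent. Roughly the call nova makes, with limits copied from the logged flags (a sketch, not nova's exact code):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30),
    )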
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.137 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.138 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.139 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:52 compute-0 ceph-mon[192821]: pgmap v1906: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 3.5 KiB/s wr, 1 op/s
Dec 03 02:17:52 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/115272923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.215 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.234 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.686 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.816 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] resizing rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
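Spawning from a cached base image onto RBD proceeds as logged: shell out to `rbd import` so the flattened base file becomes vms/<uuid>_disk, then grow the image to the flavor's root-disk size (1073741824 bytes = 1 GiB here). A sketch using the python rados/rbd bindings that nova.storage.rbd_utils wraps, with names and sizes taken from the log:

    import subprocess

    import rados
    import rbd

    base = "/var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601"
    disk = "1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk"

    # Step 1: import the base file as a format-2 RBD image (as logged).
    subprocess.run(
        ["rbd", "import", "--pool", "vms", base, disk, "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )

    # Step 2: grow it to the flavor's root-disk size (as logged: 1 GiB).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, disk) as image:
                image.resize(1073741824)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()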
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.035 351492 DEBUG nova.objects.instance [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'migration_context' on Instance uuid 1b83725c-0af2-491f-98d9-bdb0ed1a5979 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.060 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.061 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Ensure instance console log exists: /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:17:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.4 KiB/s wr, 0 op/s
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.063 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.064 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.065 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
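The acquire/release pair around _allocate_mdevs above is oslo_concurrency's named-lock decorator logging at DEBUG; with no vGPUs requested, the critical section is effectively empty (held 0.001s). The pattern, approximately:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def _allocate_mdevs():
        # Body elided; under the lock, nova would pick mediated devices for
        # a vGPU-requesting instance. The waited/held times are what the
        # DEBUG lines above record.
        return None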
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.254 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Successfully created port: 025b4c8a-b3c9-4114-95f7-f17506286d3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:17:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.484 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.485 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.508 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.601 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.601 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.611 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.611 351492 INFO nova.compute.claims [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.639 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.844 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:54 compute-0 ceph-mon[192821]: pgmap v1907: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.4 KiB/s wr, 0 op/s
Dec 03 02:17:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:17:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230111650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.408 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.420 351492 DEBUG nova.compute.provider_tree [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.441 351492 DEBUG nova.scheduler.client.report [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.474 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.476 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.509 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Successfully updated port: 025b4c8a-b3c9-4114-95f7-f17506286d3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.537 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.538 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.544 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.544 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquired lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.545 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.581 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.608 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.619 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.667 351492 DEBUG nova.compute.manager [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.668 351492 DEBUG nova.compute.manager [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing instance network info cache due to event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.668 351492 DEBUG oslo_concurrency.lockutils [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.728 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.729 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.732 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Creating image(s)
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.781 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.839 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.887 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.895 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.985 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.988 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.989 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.990 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.025 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.033 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 241 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 626 KiB/s wr, 3 op/s
Dec 03 02:17:55 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1230111650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.377 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.473 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.539 351492 DEBUG nova.policy [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '085bcee1002d425085c1f09d9b5d3d97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '19ab3b60e4c749c7897f20982829cd8c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.650 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] resizing rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.879 351492 DEBUG nova.objects.instance [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lazy-loading 'migration_context' on Instance uuid 40db12af-6ca8-4a4f-88e7-833c3fda87c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.906 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.906 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Ensure instance console log exists: /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.907 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.907 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.908 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:56 compute-0 ceph-mon[192821]: pgmap v1908: 321 pgs: 321 active+clean; 241 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 626 KiB/s wr, 3 op/s
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.523 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.550 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Releasing lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.551 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance network_info: |[{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.552 351492 DEBUG oslo_concurrency.lockutils [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.552 351492 DEBUG nova.network.neutron [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.557 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start _get_guest_xml network_info=[{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.576 351492 WARNING nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.594 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.596 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.604 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.605 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.605 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.606 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.607 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.607 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.608 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.608 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.609 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.609 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.610 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.610 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.611 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.611 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.616 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 278 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.135 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Successfully created port: c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:17:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:17:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2353772158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.186 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2353772158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.236 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.250 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:17:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368366315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.751 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.754 351492 DEBUG nova.virt.libvirt.vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-455653039',display_name='tempest-TestNetworkBasicOps-server-455653039',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-455653039',id=11,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGyLxdmoeScEfSkwzcCczvmCyzQ7WX6pYr3KymEzB5Q09G09n6d3TfahDx7L4JUEY5sh67bwZpAZn3mmGdgttDtWP8gJ/ON+rMTVTFtEqftauFytQHqZZbMU6xxCGBZ6yA==',key_name='tempest-TestNetworkBasicOps-378472767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-ux5cl6xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:51Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=1b83725c-0af2-491f-98d9-bdb0ed1a5979,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.755 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.756 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.758 351492 DEBUG nova.objects.instance [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b83725c-0af2-491f-98d9-bdb0ed1a5979 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.783 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <uuid>1b83725c-0af2-491f-98d9-bdb0ed1a5979</uuid>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <name>instance-0000000b</name>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <nova:name>tempest-TestNetworkBasicOps-server-455653039</nova:name>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:17:56</nova:creationTime>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:user uuid="abdbefadac2a4d98bd33ed8a1a60ff75">tempest-TestNetworkBasicOps-1039072813-project-member</nova:user>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:project uuid="f8f8e5d142604e8c8aabf1e14a1467ca">tempest-TestNetworkBasicOps-1039072813</nova:project>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <nova:port uuid="025b4c8a-b3c9-4114-95f7-f17506286d3e">
Dec 03 02:17:57 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <system>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <entry name="serial">1b83725c-0af2-491f-98d9-bdb0ed1a5979</entry>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <entry name="uuid">1b83725c-0af2-491f-98d9-bdb0ed1a5979</entry>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </system>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <os>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   </os>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <features>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   </features>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk">
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       </source>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config">
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       </source>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:17:57 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:24:c0:50"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <target dev="tap025b4c8a-b3"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/console.log" append="off"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <video>
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </video>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:17:57 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:17:57 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:17:57 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:17:57 compute-0 nova_compute[351485]: </domain>
Dec 03 02:17:57 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.784 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Preparing to wait for external event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.785 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.785 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.786 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.787 351492 DEBUG nova.virt.libvirt.vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-455653039',display_name='tempest-TestNetworkBasicOps-server-455653039',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-455653039',id=11,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGyLxdmoeScEfSkwzcCczvmCyzQ7WX6pYr3KymEzB5Q09G09n6d3TfahDx7L4JUEY5sh67bwZpAZn3mmGdgttDtWP8gJ/ON+rMTVTFtEqftauFytQHqZZbMU6xxCGBZ6yA==',key_name='tempest-TestNetworkBasicOps-378472767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-ux5cl6xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:51Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=1b83725c-0af2-491f-98d9-bdb0ed1a5979,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.787 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.788 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.789 351492 DEBUG os_vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.791 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.792 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.792 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.798 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap025b4c8a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.799 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap025b4c8a-b3, col_values=(('external_ids', {'iface-id': '025b4c8a-b3c9-4114-95f7-f17506286d3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:c0:50', 'vm-uuid': '1b83725c-0af2-491f-98d9-bdb0ed1a5979'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:57 compute-0 NetworkManager[48912]: <info>  [1764728277.8037] manager: (tap025b4c8a-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.801 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.807 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.812 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.813 351492 INFO os_vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3')
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.902 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.903 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.903 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No VIF found with MAC fa:16:3e:24:c0:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.905 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Using config drive
Dec 03 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.966 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:58 compute-0 ceph-mon[192821]: pgmap v1909: 321 pgs: 321 active+clean; 278 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec 03 02:17:58 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1368366315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:17:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.429 351492 DEBUG nova.network.neutron [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updated VIF entry in instance network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.430 351492 DEBUG nova.network.neutron [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.453 351492 DEBUG oslo_concurrency.lockutils [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.558 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.619 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Creating config drive at /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.626 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9fj184j4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.660 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.704 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Successfully updated port: c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.731 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.731 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquired lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.732 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.779 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9fj184j4" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.844 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.854 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:17:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 278 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.113 351492 DEBUG nova.compute.manager [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-changed-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.114 351492 DEBUG nova.compute.manager [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Refreshing instance network info cache due to event network-changed-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.114 351492 DEBUG oslo_concurrency.lockutils [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.165 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.166 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deleting local config drive /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config because it was imported into RBD.
Dec 03 02:17:59 compute-0 kernel: tap025b4c8a-b3: entered promiscuous mode
Dec 03 02:17:59 compute-0 NetworkManager[48912]: <info>  [1764728279.2936] manager: (tap025b4c8a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Dec 03 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00127|binding|INFO|Claiming lport 025b4c8a-b3c9-4114-95f7-f17506286d3e for this chassis.
Dec 03 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00128|binding|INFO|025b4c8a-b3c9-4114-95f7-f17506286d3e: Claiming fa:16:3e:24:c0:50 10.100.0.14
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.299 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00129|binding|INFO|Setting lport 025b4c8a-b3c9-4114-95f7-f17506286d3e ovn-installed in OVS
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.331 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.337 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:59 compute-0 systemd-udevd[449985]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:17:59 compute-0 NetworkManager[48912]: <info>  [1764728279.3892] device (tap025b4c8a-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:17:59 compute-0 NetworkManager[48912]: <info>  [1764728279.3923] device (tap025b4c8a-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:17:59 compute-0 systemd-machined[138558]: New machine qemu-12-instance-0000000b.
Dec 03 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00130|binding|INFO|Setting lport 025b4c8a-b3c9-4114-95f7-f17506286d3e up in Southbound
Dec 03 02:17:59 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.418 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:c0:50 10.100.0.14'], port_security=['fa:16:3e:24:c0:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b83725c-0af2-491f-98d9-bdb0ed1a5979', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0897a5e4-2e8b-4479-bdb4-a75dc9f6f9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=025b4c8a-b3c9-4114-95f7-f17506286d3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.420 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 025b4c8a-b3c9-4114-95f7-f17506286d3e in datapath ed008f09-da46-4507-9be2-7398a4728121 bound to our chassis
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.425 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed008f09-da46-4507-9be2-7398a4728121
Dec 03 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:dd:2f 10.100.0.9
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.452 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[52d075b4-e2be-486c-a6a8-437d203cd16e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.497 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[7c52fd0b-2b82-45e5-a89c-266e04374d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.501 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c47b3c-49a3-4dd2-a0e2-2296f04202fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.510 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.539 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[34b505b0-4eca-462e-8424-77e4eb9bb875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.560 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0417e203-13fa-44c2-8051-3a643da5e7e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450002, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.581 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[8efb6bf1-2474-48d8-b4d0-a00251749269]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704225, 'tstamp': 704225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450003, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704229, 'tstamp': 704229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450003, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.584 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.586 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.588 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.593 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped008f09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.594 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.595 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped008f09-d0, col_values=(('external_ids', {'iface-id': '4fe53946-9a81-46d3-946d-3676da417bd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.595 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.649 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.650 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.651 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:17:59 compute-0 podman[158098]: time="2025-12-03T02:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec 03 02:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9124 "" "Go-http-client/1.1"
Dec 03 02:18:00 compute-0 ceph-mon[192821]: pgmap v1910: 321 pgs: 321 active+clean; 278 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.373 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728280.3726099, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.374 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Started (Lifecycle Event)
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.401 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.411 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728280.3728306, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.412 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Paused (Lifecycle Event)
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.431 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.441 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.462 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.318 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updating instance_info_cache with network_info: [{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.349 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Releasing lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.350 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance network_info: |[{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.350 351492 DEBUG oslo_concurrency.lockutils [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.350 351492 DEBUG nova.network.neutron [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Refreshing network info cache for port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.354 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start _get_guest_xml network_info=[{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.364 351492 WARNING nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.379 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.380 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.388 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.389 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.389 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.390 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.390 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.391 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.391 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.392 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.392 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.392 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.393 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.393 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.393 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.394 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.399 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:18:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:18:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913447852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.928 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.966 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.981 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:02 compute-0 ceph-mon[192821]: pgmap v1911: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Dec 03 02:18:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/913447852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437113168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.519 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.522 351492 DEBUG nova.virt.libvirt.vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-143016714',display_name='tempest-ServerAddressesTestJSON-server-143016714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-143016714',id=12,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19ab3b60e4c749c7897f20982829cd8c',ramdisk_id='',reservation_id='r-qlc2ubob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2068212470',owner_user_name='tempest-ServerAddressesTestJSON-2068212470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:54Z,user_data=None,user_id='085bcee1002d425085c1f09d9b5d3d97',uuid=40db12af-6ca8-4a4f-88e7-833c3fda87c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.523 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converting VIF {"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.524 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.527 351492 DEBUG nova.objects.instance [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lazy-loading 'pci_devices' on Instance uuid 40db12af-6ca8-4a4f-88e7-833c3fda87c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.548 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <uuid>40db12af-6ca8-4a4f-88e7-833c3fda87c9</uuid>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <name>instance-0000000c</name>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <nova:name>tempest-ServerAddressesTestJSON-server-143016714</nova:name>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:18:01</nova:creationTime>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:user uuid="085bcee1002d425085c1f09d9b5d3d97">tempest-ServerAddressesTestJSON-2068212470-project-member</nova:user>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:project uuid="19ab3b60e4c749c7897f20982829cd8c">tempest-ServerAddressesTestJSON-2068212470</nova:project>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <nova:port uuid="c6f07ea7-978a-46d9-b7f8-a4c14ac8475f">
Dec 03 02:18:02 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <system>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <entry name="serial">40db12af-6ca8-4a4f-88e7-833c3fda87c9</entry>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <entry name="uuid">40db12af-6ca8-4a4f-88e7-833c3fda87c9</entry>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </system>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <os>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   </os>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <features>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   </features>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk">
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       </source>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config">
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       </source>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:18:02 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:0d:93:5c"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <target dev="tapc6f07ea7-97"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/console.log" append="off"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <video>
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </video>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:18:02 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:18:02 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:18:02 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:18:02 compute-0 nova_compute[351485]: </domain>
Dec 03 02:18:02 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
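The block above is the complete domain XML Nova generated for instance-0000000c. Once libvirt has defined the guest, the same document can be read back for comparison; a minimal sketch using libvirt-python, assuming the standard qemu:///system URI on this compute node:

    import libvirt  # libvirt-python

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-0000000c')  # <name> from the XML above
        print(dom.XMLDesc(0))  # libvirt's view of the domain after definition
    finally:
        conn.close()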
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.550 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Preparing to wait for external event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.550 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.551 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.551 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
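The prepare_for_instance_event lines above show Nova registering interest in the network-vif-plugged event before it starts plugging the VIF, so the notification cannot race past it. A minimal sketch of that create-or-get-under-a-lock pattern; the names are illustrative, not Nova's actual classes:

    import threading

    _events = {}
    _events_lock = threading.Lock()

    def prepare_event(name):
        # Create-or-get under a lock, like the "<uuid>-events" lock in the log.
        with _events_lock:
            return _events.setdefault(name, threading.Event())

    ev = prepare_event('network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f')
    # ... start the VIF plug, then block until the event handler calls ev.set():
    if not ev.wait(timeout=300):
        raise TimeoutError('network-vif-plugged never arrived')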
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.553 351492 DEBUG nova.virt.libvirt.vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-143016714',display_name='tempest-ServerAddressesTestJSON-server-143016714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-143016714',id=12,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19ab3b60e4c749c7897f20982829cd8c',ramdisk_id='',reservation_id='r-qlc2ubob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2068212470',owner_user_name='tempest-ServerAddressesTestJSON-2068212470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:54Z,user_data=None,user_id='085bcee1002d425085c1f09d9b5d3d97',uuid=40db12af-6ca8-4a4f-88e7-833c3fda87c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.553 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converting VIF {"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.555 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.556 351492 DEBUG os_vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.558 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.559 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.565 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.566 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6f07ea7-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.567 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6f07ea7-97, col_values=(('external_ids', {'iface-id': 'c6f07ea7-978a-46d9-b7f8-a4c14ac8475f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:93:5c', 'vm-uuid': '40db12af-6ca8-4a4f-88e7-833c3fda87c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
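The transaction above (AddPortCommand followed by DbSetCommand on the Interface row) has a direct ovs-vsctl equivalent, which is useful when reproducing the plug by hand. A sketch, assuming ovs-vsctl is available on the host:

    import subprocess

    port = 'tapc6f07ea7-97'
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
         '--', 'set', 'Interface', port,
         'external_ids:iface-id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:0d:93:5c',
         'external_ids:vm-uuid=40db12af-6ca8-4a4f-88e7-833c3fda87c9'],
        check=True)

The iface-id in external_ids is what lets ovn-controller later match this OVS interface to the Neutron port binding, as seen further down.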
Dec 03 02:18:02 compute-0 NetworkManager[48912]: <info>  [1764728282.5719] manager: (tapc6f07ea7-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.571 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.576 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.584 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.586 351492 INFO os_vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97')
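The plug itself goes through os-vif's small public API. A sketch reconstructed from the VIFOpenVSwitch repr in the log; this is an approximation rather than Nova's code, and it assumes os-vif is installed and the caller can modify OVS:

    import os_vif
    from os_vif.objects import instance_info, network, vif as vif_obj

    os_vif.initialize()

    vif = vif_obj.VIFOpenVSwitch(
        id='c6f07ea7-978a-46d9-b7f8-a4c14ac8475f',
        address='fa:16:3e:0d:93:5c',
        vif_name='tapc6f07ea7-97',
        bridge_name='br-int',
        # assumption: the ovs plugin also reads the bridge from vif.network
        network=network.Network(id='dee48a2c-2a7a-4864-9bd2-f42030910aa8',
                                bridge='br-int'),
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='c6f07ea7-978a-46d9-b7f8-a4c14ac8475f'),
    )
    inst = instance_info.InstanceInfo(
        uuid='40db12af-6ca8-4a4f-88e7-833c3fda87c9',
        name='instance-0000000c')

    os_vif.plug(vif, inst)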
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.669 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.670 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.671 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] No VIF found with MAC fa:16:3e:0d:93:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.672 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Using config drive
Dec 03 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.724 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
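That existence probe can be reproduced with the python-rbd bindings, using the same client.openstack identity the service uses; a sketch, assuming the Ceph keyring is readable:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            rbd.Image(ioctx, '40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config').close()
            print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')  # the branch logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()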
Dec 03 02:18:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 574 KiB/s rd, 3.6 MiB/s wr, 107 op/s
Dec 03 02:18:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/437113168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.290961) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283290997, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 577, "num_deletes": 251, "total_data_size": 581677, "memory_usage": 593448, "flush_reason": "Manual Compaction"}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283299696, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 576035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38810, "largest_seqno": 39386, "table_properties": {"data_size": 572864, "index_size": 1079, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7488, "raw_average_key_size": 19, "raw_value_size": 566527, "raw_average_value_size": 1463, "num_data_blocks": 48, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728248, "oldest_key_time": 1764728248, "file_creation_time": 1764728283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 8813 microseconds, and 4214 cpu microseconds.
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.299761) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 576035 bytes OK
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.299793) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.302608) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.302631) EVENT_LOG_v1 {"time_micros": 1764728283302623, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.302653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 578467, prev total WAL file size 578467, number of live WAL files 2.
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.303676) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(562KB)], [89(8044KB)]
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283303775, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 8813786, "oldest_snapshot_seqno": -1}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5603 keys, 7077159 bytes, temperature: kUnknown
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283369111, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 7077159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7042827, "index_size": 19246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 143654, "raw_average_key_size": 25, "raw_value_size": 6944505, "raw_average_value_size": 1239, "num_data_blocks": 783, "num_entries": 5603, "num_filter_entries": 5603, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.371 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Creating config drive at /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.369838) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 7077159 bytes
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.373327) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 107.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(27.6) write-amplify(12.3) OK, records in: 6117, records dropped: 514 output_compression: NoCompression
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.373360) EVENT_LOG_v1 {"time_micros": 1764728283373345, "job": 52, "event": "compaction_finished", "compaction_time_micros": 65827, "compaction_time_cpu_micros": 41835, "output_level": 6, "num_output_files": 1, "total_output_size": 7077159, "num_input_records": 6117, "num_output_records": 5603, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283373837, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283377010, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.303346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.380 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz7fd73a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.434 351492 DEBUG nova.network.neutron [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updated VIF entry in instance network info cache for port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.436 351492 DEBUG nova.network.neutron [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updating instance_info_cache with network_info: [{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.460 351492 DEBUG oslo_concurrency.lockutils [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.530 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz7fd73a" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
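Note that the debug log flattens shell quoting: -publisher takes the whole "OpenStack Compute 27.5.2-..." string as a single argument. A sketch of the same ISO build with correct argv splitting, where /tmp/metadata stands in for the temporary staging directory (tmpmz7fd73a above):

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         # one argv element; the log above prints it without quotes
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/metadata'],
        check=True)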
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.602 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.618 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.650 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.936 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.937 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.937 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deleting local config drive /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config because it was imported into RBD.
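The import can be verified from the host with the rbd CLI, passing the same identity and conf file the service used; a sketch:

    import subprocess

    subprocess.run(
        ['rbd', 'info', '--pool', 'vms',
         '40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)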
Dec 03 02:18:04 compute-0 kernel: tapc6f07ea7-97: entered promiscuous mode
Dec 03 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.0361] manager: (tapc6f07ea7-97): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Dec 03 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00131|binding|INFO|Claiming lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f for this chassis.
Dec 03 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00132|binding|INFO|c6f07ea7-978a-46d9-b7f8-a4c14ac8475f: Claiming fa:16:3e:0d:93:5c 10.100.0.6
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.039 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.058 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.061 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 bound to our chassis
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.067 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dee48a2c-2a7a-4864-9bd2-f42030910aa8
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.079 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00133|binding|INFO|Setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f ovn-installed in OVS
Dec 03 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00134|binding|INFO|Setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f up in Southbound
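The binding ovn-controller just claimed is recorded in the southbound database; a sketch of the query, assuming ovn-sbctl on this host can reach the SB DB:

    import subprocess

    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f'],
        check=True)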
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.081 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.088 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[aac84330-10da-4653-b52f-2c460a8c6fa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.092 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdee48a2c-21 in ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.094 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdee48a2c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.094 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[569c3531-84c0-4a18-b0e6-7753c81b3df8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.096 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4bc2b0-45c5-428a-984e-d7bf7be6e818]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 systemd-udevd[450184]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.111 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0d1696-902e-4636-bbc5-a487572e6c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 systemd-machined[138558]: New machine qemu-13-instance-0000000c.
Dec 03 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.1261] device (tapc6f07ea7-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.1274] device (tapc6f07ea7-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:18:04 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
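At this point the guest is registered with systemd-machined and running under libvirt; a quick state check, assuming virsh is available on the compute node:

    import subprocess

    subprocess.run(
        ['virsh', '-c', 'qemu:///system', 'domstate', 'instance-0000000c'],
        check=True)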
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.139 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[72ee20af-f1c7-42f6-ada7-1d2c8c06533c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.183 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6e1faa-12c9-4238-9fd7-9fb0fe7948f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.1956] manager: (tapdee48a2c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.194 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3707b1ff-7eab-4246-828a-6d07df96dd9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.229 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0c98e99e-aed3-469f-80e2-cf26fa52c222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.233 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[d22e5a0d-c6d2-443d-bac2-dc5407fb46f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.2647] device (tapdee48a2c-20): carrier: link connected
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.272 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fbeca5-06a0-4d52-b78f-f57fc777d3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ceph-mon[192821]: pgmap v1912: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 574 KiB/s rd, 3.6 MiB/s wr, 107 op/s
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.295 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[05d4ad64-4c1b-4745-a5e0-d10a0090b46c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdee48a2c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712374, 'reachable_time': 34062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450215, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.332 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[680cfad4-eda7-4462-8cd3-b02dc5169a35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:20e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712374, 'tstamp': 712374}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450216, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.362 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4edee53e-cb76-44c5-8d69-e47a66dfd46e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdee48a2c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712374, 'reachable_time': 34062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450217, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
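The two large privsep replies above are pyroute2 netlink messages describing the veth leg (tapdee48a2c-21) inside the metadata namespace. A minimal sketch of the same query, assuming pyroute2 is installed and the ovnmeta namespace exists:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))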
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.414 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cd261bc1-6628-447c-9fd8-edc3abb49c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.535 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ecf449-1293-4050-9c47-55434a60750e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.538 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdee48a2c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.538 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.539 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdee48a2c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:04 compute-0 kernel: tapdee48a2c-20: entered promiscuous mode
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.542 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.5440] manager: (tapdee48a2c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.553 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.564 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdee48a2c-20, col_values=(('external_ids', {'iface-id': '01cdcf90-ecf3-431a-911c-1a03d9741df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.566 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00135|binding|INFO|Releasing lport 01cdcf90-ecf3-431a-911c-1a03d9741df1 from this chassis (sb_readonly=0)
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.569 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.570 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dee48a2c-2a7a-4864-9bd2-f42030910aa8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dee48a2c-2a7a-4864-9bd2-f42030910aa8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.572 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b5dea5d1-7cc6-42ae-b6d3-e5d9cb6e5c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.573 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-dee48a2c-2a7a-4864-9bd2-f42030910aa8
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/dee48a2c-2a7a-4864-9bd2-f42030910aa8.pid.haproxy
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID dee48a2c-2a7a-4864-9bd2-f42030910aa8
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.574 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'env', 'PROCESS_TAG=haproxy-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dee48a2c-2a7a-4864-9bd2-f42030910aa8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 03 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.606 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 03 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.181848975 +0000 UTC m=+0.125510921 container create 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.129836224 +0000 UTC m=+0.073498250 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:18:05 compute-0 systemd[1]: Started libpod-conmon-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope.
Dec 03 02:18:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2458ccabf09d938af728de35a5600b8e9250e78dcce1ee129f34e94e9a713cdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.303897038 +0000 UTC m=+0.247559045 container init 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.314806627 +0000 UTC m=+0.258468603 container start 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 03 02:18:05 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : New worker (450311) forked
Dec 03 02:18:05 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : Loading success.
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.397 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728285.397048, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.397 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Started (Lifecycle Event)
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.429 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.436 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728285.397138, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.437 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Paused (Lifecycle Event)
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.463 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.470 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.495 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.670 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:06 compute-0 ceph-mon[192821]: pgmap v1913: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 03 02:18:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 577 KiB/s rd, 3.0 MiB/s wr, 111 op/s
Dec 03 02:18:07 compute-0 nova_compute[351485]: 2025-12-03 02:18:07.572 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.062 351492 DEBUG nova.compute.manager [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.065 351492 DEBUG oslo_concurrency.lockutils [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.066 351492 DEBUG oslo_concurrency.lockutils [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.067 351492 DEBUG oslo_concurrency.lockutils [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.067 351492 DEBUG nova.compute.manager [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Processing event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.068 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.076 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728288.0760784, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.077 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Resumed (Lifecycle Event)
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.081 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.091 351492 INFO nova.virt.libvirt.driver [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance spawned successfully.
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.092 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.111 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.125 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.132 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.133 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.134 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.135 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.136 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.137 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.160 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.195 351492 INFO nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 16.37 seconds to spawn the instance on the hypervisor.
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.195 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.286 351492 INFO nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 17.48 seconds to build instance.
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.300 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:08 compute-0 ceph-mon[192821]: pgmap v1914: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 577 KiB/s rd, 3.0 MiB/s wr, 111 op/s
Dec 03 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.649 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 547 KiB/s rd, 898 KiB/s wr, 72 op/s
Dec 03 02:18:10 compute-0 ceph-mon[192821]: pgmap v1915: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 547 KiB/s rd, 898 KiB/s wr, 72 op/s
Dec 03 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.491 351492 DEBUG nova.compute.manager [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.492 351492 DEBUG oslo_concurrency.lockutils [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.493 351492 DEBUG oslo_concurrency.lockutils [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.494 351492 DEBUG oslo_concurrency.lockutils [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.495 351492 DEBUG nova.compute.manager [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] No waiting events found dispatching network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.496 351492 WARNING nova.compute.manager [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received unexpected event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e for instance with vm_state active and task_state None.
Dec 03 02:18:10 compute-0 ovn_controller[89134]: 2025-12-03T02:18:10Z|00136|binding|INFO|Releasing lport 01cdcf90-ecf3-431a-911c-1a03d9741df1 from this chassis (sb_readonly=0)
Dec 03 02:18:10 compute-0 ovn_controller[89134]: 2025-12-03T02:18:10Z|00137|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:18:10 compute-0 ovn_controller[89134]: 2025-12-03T02:18:10Z|00138|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.816 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:10 compute-0 podman[450322]: 2025-12-03 02:18:10.868331604 +0000 UTC m=+0.101531163 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:18:10 compute-0 podman[450320]: 2025-12-03 02:18:10.869129837 +0000 UTC m=+0.111171127 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:18:10 compute-0 podman[450321]: 2025-12-03 02:18:10.876706421 +0000 UTC m=+0.110097596 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2)
Dec 03 02:18:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 631 KiB/s rd, 900 KiB/s wr, 79 op/s
Dec 03 02:18:12 compute-0 ceph-mon[192821]: pgmap v1916: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 631 KiB/s rd, 900 KiB/s wr, 79 op/s
Dec 03 02:18:12 compute-0 nova_compute[351485]: 2025-12-03 02:18:12.577 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:12 compute-0 nova_compute[351485]: 2025-12-03 02:18:12.617 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.022 351492 DEBUG nova.compute.manager [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.024 351492 DEBUG nova.compute.manager [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing instance network info cache due to event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.025 351492 DEBUG oslo_concurrency.lockutils [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.026 351492 DEBUG oslo_concurrency.lockutils [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.027 351492 DEBUG nova.network.neutron [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:18:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 49 KiB/s wr, 62 op/s
Dec 03 02:18:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.653 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:14 compute-0 ceph-mon[192821]: pgmap v1917: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 49 KiB/s wr, 62 op/s
Dec 03 02:18:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 26 KiB/s wr, 53 op/s
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.843 351492 DEBUG nova.compute.manager [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.844 351492 DEBUG oslo_concurrency.lockutils [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.844 351492 DEBUG oslo_concurrency.lockutils [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.845 351492 DEBUG oslo_concurrency.lockutils [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.845 351492 DEBUG nova.compute.manager [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Processing event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.846 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.862 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728295.8609743, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.863 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Resumed (Lifecycle Event)
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.866 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.874 351492 INFO nova.virt.libvirt.driver [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance spawned successfully.
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.875 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.889 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.908 351492 DEBUG nova.network.neutron [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updated VIF entry in instance network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.909 351492 DEBUG nova.network.neutron [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.911 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.925 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.926 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.926 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.927 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.928 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.928 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.939 351492 DEBUG oslo_concurrency.lockutils [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.940 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.982 351492 INFO nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 21.25 seconds to spawn the instance on the hypervisor.
Dec 03 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.983 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:16 compute-0 nova_compute[351485]: 2025-12-03 02:18:16.059 351492 INFO nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 22.50 seconds to build instance.
Dec 03 02:18:16 compute-0 nova_compute[351485]: 2025-12-03 02:18:16.075 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:16 compute-0 ceph-mon[192821]: pgmap v1918: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 26 KiB/s wr, 53 op/s
Dec 03 02:18:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 73 op/s
Dec 03 02:18:17 compute-0 ovn_controller[89134]: 2025-12-03T02:18:17Z|00139|binding|INFO|Releasing lport 01cdcf90-ecf3-431a-911c-1a03d9741df1 from this chassis (sb_readonly=0)
Dec 03 02:18:17 compute-0 ovn_controller[89134]: 2025-12-03T02:18:17Z|00140|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:18:17 compute-0 ovn_controller[89134]: 2025-12-03T02:18:17Z|00141|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.582 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:17 compute-0 podman[450377]: 2025-12-03 02:18:17.900143484 +0000 UTC m=+0.148524343 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.919 351492 DEBUG nova.compute.manager [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.920 351492 DEBUG oslo_concurrency.lockutils [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.920 351492 DEBUG oslo_concurrency.lockutils [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.921 351492 DEBUG oslo_concurrency.lockutils [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.921 351492 DEBUG nova.compute.manager [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] No waiting events found dispatching network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.922 351492 WARNING nova.compute.manager [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received unexpected event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f for instance with vm_state active and task_state None.
Dec 03 02:18:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:18 compute-0 ceph-mon[192821]: pgmap v1919: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 73 op/s
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.620 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
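
Here the update_available_resource periodic task begins its storage audit by shelling out to `ceph df` with the openstack client id, exactly as the processutils line records. A hedged sketch of the same probe using plain subprocess instead of oslo_concurrency.processutils; the JSON field names are assumptions based on ceph's usual `df --format=json` layout:

    import json
    import subprocess

    def ceph_df(client_id='openstack', conf='/etc/ceph/ceph.conf'):
        # Same command line as the processutils record above.
        out = subprocess.run(
            ['ceph', 'df', '--format=json', '--id', client_id, '--conf', conf],
            check=True, capture_output=True, text=True).stdout
        stats = json.loads(out)['stats']
        # Assumed field names: raw cluster totals, in bytes.
        return stats['total_bytes'], stats['total_avail_bytes']
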
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.875 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 68 op/s
Dec 03 02:18:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:18:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974627673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.211 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.212 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.214 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.215 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.217 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.221 351492 INFO nova.compute.manager [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Terminating instance
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.224 351492 DEBUG nova.compute.manager [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
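
Note the pattern in the terminate path above: the work is wrapped in a local do_terminate_instance function and run under a lock named after the instance UUID, so concurrent lifecycle operations on the same instance serialize. A sketch of that shape using oslo.concurrency's real synchronized decorator; clear_events_for_instance and shutdown_instance are hypothetical stand-ins for the steps the log shows:

    from oslo_concurrency import lockutils

    def clear_events_for_instance(uuid):   # hypothetical stand-in
        print(f"clearing queued events for {uuid}")

    def shutdown_instance(uuid):           # hypothetical stand-in
        print(f"Start destroying instance {uuid} on the hypervisor.")

    def terminate_instance(instance_uuid):
        @lockutils.synchronized(instance_uuid)
        def do_terminate_instance():
            clear_events_for_instance(instance_uuid)
            shutdown_instance(instance_uuid)
        do_terminate_instance()  # concurrent calls for one UUID queue here
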
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.237 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:19 compute-0 kernel: tapc6f07ea7-97 (unregistering): left promiscuous mode
Dec 03 02:18:19 compute-0 NetworkManager[48912]: <info>  [1764728299.3297] device (tapc6f07ea7-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00142|binding|INFO|Releasing lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f from this chassis (sb_readonly=0)
Dec 03 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00143|binding|INFO|Setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f down in Southbound
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.345 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00144|binding|INFO|Removing iface tapc6f07ea7-97 ovn-installed in OVS
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.359 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.360 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.361 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 unbound from our chassis
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.363 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dee48a2c-2a7a-4864-9bd2-f42030910aa8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.369 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f6027f1c-3838-4cd3-b49d-a9cda4e40d8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.370 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 namespace which is not needed anymore
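
The metadata agent reacts to the unbinding by re-evaluating the datapath: with no VIF ports left on the network, the ovnmeta- namespace is slated for teardown instead of re-provisioning. A control-flow sketch with hypothetical helpers (only the branch logic is taken from the log):

    def teardown_datapath(namespace):      # hypothetical cleanup helper
        print(f"Cleaning up {namespace} namespace which is not needed anymore")

    def ensure_namespace(namespace):       # hypothetical provisioning helper
        return namespace

    def provision_datapath(network_id, vif_ports):
        namespace = f"ovnmeta-{network_id}"
        if not vif_ports:
            # "No valid VIF ports were found ... tearing the namespace down"
            teardown_datapath(namespace)
            return None
        return ensure_namespace(namespace)
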
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.382 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 03 02:18:19 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 4.986s CPU time.
Dec 03 02:18:19 compute-0 systemd-machined[138558]: Machine qemu-13-instance-0000000c terminated.
Dec 03 02:18:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3974627673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:19 compute-0 kernel: tapc6f07ea7-97: entered promiscuous mode
Dec 03 02:18:19 compute-0 NetworkManager[48912]: <info>  [1764728299.4499] manager: (tapc6f07ea7-97): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Dec 03 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00145|binding|INFO|Claiming lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f for this chassis.
Dec 03 02:18:19 compute-0 kernel: tapc6f07ea7-97 (unregistering): left promiscuous mode
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.453 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00146|binding|INFO|c6f07ea7-978a-46d9-b7f8-a4c14ac8475f: Claiming fa:16:3e:0d:93:5c 10.100.0.6
Dec 03 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00147|if_status|INFO|Not setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f down as sb is readonly
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.471 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.477 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00148|binding|INFO|Releasing lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f from this chassis (sb_readonly=0)
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.481 351492 INFO nova.virt.libvirt.driver [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance destroyed successfully.
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.482 351492 DEBUG nova.objects.instance [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lazy-loading 'resources' on Instance uuid 40db12af-6ca8-4a4f-88e7-833c3fda87c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.484 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
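
Both Port_Binding updates above were delivered through ovsdbapp's row-event machinery, which is why the agent prints the matched event with its events tuple, table, and priority. A hedged sketch of such a subscription; the class mirrors the name in the log, but the handler body is illustrative, not neutron's exact code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires on any update to a Port_Binding row (conditions omitted)."""
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = self.__class__.__name__

        def run(self, event, row, old):
            # `old` carries only the changed columns, e.g. old.chassis == []
            # right after a chassis claims the port.
            print(f"{self.event_name}: {row.logical_port} "
                  f"chassis -> {row.chassis}")
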
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.498 351492 DEBUG nova.virt.libvirt.vif [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-143016714',display_name='tempest-ServerAddressesTestJSON-server-143016714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-143016714',id=12,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19ab3b60e4c749c7897f20982829cd8c',ramdisk_id='',reservation_id='r-qlc2ubob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-2068212470',owner_user_name='tempest-ServerAddressesTestJSON-2068212470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:18:16Z,user_data=None,user_id='085bcee1002d425085c1f09d9b5d3d97',uuid=40db12af-6ca8-4a4f-88e7-833c3fda87c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.500 351492 DEBUG nova.network.os_vif_util [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converting VIF {"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.501 351492 DEBUG nova.network.os_vif_util [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.501 351492 DEBUG os_vif [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.504 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.504 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6f07ea7-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.507 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.510 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.513 351492 INFO os_vif [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97')
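
The unplug itself is a single OVSDB transaction, the DelPortCommand removing tapc6f07ea7-97 from br-int logged at .504. Issuing the same command standalone with ovsdbapp might look like the sketch below; the unix socket connection string is an assumption about this host's OVS database path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local OVS socket

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Same operation as "DelPortCommand(... port=tapc6f07ea7-97,
    # bridge=br-int, if_exists=True)" in the transaction above.
    api.del_port('tapc6f07ea7-97', bridge='br-int', if_exists=True).execute(
        check_error=True)
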
Dec 03 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : haproxy version is 2.8.14-c23fe91
Dec 03 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : path to executable is /usr/sbin/haproxy
Dec 03 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [WARNING]  (450308) : Exiting Master process...
Dec 03 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [ALERT]    (450308) : Current worker (450311) exited with code 143 (Terminated)
Dec 03 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [WARNING]  (450308) : All workers exited. Exiting... (0)
Dec 03 02:18:19 compute-0 systemd[1]: libpod-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope: Deactivated successfully.
Dec 03 02:18:19 compute-0 conmon[450299]: conmon 1de37f14aa8d52a7f5b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope/container/memory.events
Dec 03 02:18:19 compute-0 podman[450451]: 2025-12-03 02:18:19.587350949 +0000 UTC m=+0.078833771 container died 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.626 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.627 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.634 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.635 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c-userdata-shm.mount: Deactivated successfully.
Dec 03 02:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2458ccabf09d938af728de35a5600b8e9250e78dcce1ee129f34e94e9a713cdc-merged.mount: Deactivated successfully.
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.643 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.644 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.653 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.654 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:18:19 compute-0 podman[450451]: 2025-12-03 02:18:19.658246395 +0000 UTC m=+0.149729217 container cleanup 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:18:19 compute-0 systemd[1]: libpod-conmon-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope: Deactivated successfully.
Dec 03 02:18:19 compute-0 podman[450494]: 2025-12-03 02:18:19.762273638 +0000 UTC m=+0.068530660 container remove 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.772 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[99964c61-3d5f-4774-a0e1-ab6c775eae52]: (4, ('Wed Dec  3 02:18:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 (1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c)\n1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c\nWed Dec  3 02:18:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 (1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c)\n1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
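
The privsep reply above relays the stdout of the sidecar cleanup: the per-network haproxy container is stopped, then deleted. A sketch of the equivalent podman invocations via subprocess (the wrapper function is illustrative):

    import subprocess

    def remove_haproxy_sidecar(network_id):
        name = f"neutron-haproxy-ovnmeta-{network_id}"
        # Mirrors the "Stopping container ..." / "Deleting container ..."
        # output relayed through privsep above.
        subprocess.run(['podman', 'stop', name], check=True)
        subprocess.run(['podman', 'rm', name], check=True)

    remove_haproxy_sidecar('dee48a2c-2a7a-4864-9bd2-f42030910aa8')
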
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.782 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee08654-188f-4a6a-b2a3-c9a9592b05d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.783 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdee48a2c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.788 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 kernel: tapdee48a2c-20: left promiscuous mode
Dec 03 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.804 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.809 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1c62bca7-615a-4c55-a002-0adbc225a32e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: hostname: compute-0
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.823 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bdd6e4-2854-4496-a4bc-f24c09b3266d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.824 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[208e06ae-941d-4906-8466-5ffd5f508ba8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.842 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe0af3d-15aa-49e4-9de7-8d316a98430b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712364, 'reachable_time': 40285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450514, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:19 compute-0 systemd[1]: run-netns-ovnmeta\x2ddee48a2c\x2d2a7a\x2d4864\x2d9bd2\x2df42030910aa8.mount: Deactivated successfully.
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.848 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.848 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[4c13e81d-48a1-45bf-be54-4fa413963953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
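
The namespace removal is performed in neutron's privileged ip_lib helper, as the line above records; the underlying operation is an ordinary network-namespace delete. A sketch with pyroute2 (requires root; the namespace name is taken from the log):

    from pyroute2 import netns

    NS = 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8'
    if NS in netns.listnetns():
        netns.remove(NS)   # same effect as remove_netns in the log line above
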
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.849 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 unbound from our chassis
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.851 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dee48a2c-2a7a-4864-9bd2-f42030910aa8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.852 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[325151ab-0728-4fe8-97f2-e70df1916635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.853 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 unbound from our chassis
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.854 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dee48a2c-2a7a-4864-9bd2-f42030910aa8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.855 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[173cd257-bb52-49ef-ba2e-9a7a43355fd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.234 351492 INFO nova.virt.libvirt.driver [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deleting instance files /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9_del
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.234 351492 INFO nova.virt.libvirt.driver [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deletion of /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9_del complete
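
The two lines above reflect the delete-files step: the instance directory is first renamed with a _del suffix and then removed recursively, so a partially failed deletion can be retried without colliding with a newly created directory of the same name. A sketch of that idea, not the driver's exact code:

    import os
    import shutil

    def delete_instance_files(instance_uuid, base='/var/lib/nova/instances'):
        target = os.path.join(base, instance_uuid)
        staging = target + '_del'                 # the path seen in the log
        if os.path.exists(target):
            os.rename(target, staging)            # atomic within one filesystem
        shutil.rmtree(staging, ignore_errors=True)
        return not os.path.exists(staging)        # True -> "Deletion ... complete"
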
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.289 351492 INFO nova.compute.manager [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 1.06 seconds to destroy the instance on the hypervisor.
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.289 351492 DEBUG oslo.service.loopingcall [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.290 351492 DEBUG nova.compute.manager [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.290 351492 DEBUG nova.network.neutron [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
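
The loopingcall line above shows the deallocation running through oslo.service, with a function name that implies retries on failure. A hedged sketch of such a retry wrapper using oslo.service's RetryDecorator; whether nova uses this exact decorator here is not visible in the log, and the exception type, retry counts, and the stand-in body are assumptions:

    from oslo_service import loopingcall

    class TransientNeutronError(Exception):
        """Assumed error type standing in for retryable Neutron failures."""

    @loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=2,
                                exceptions=(TransientNeutronError,))
    def deallocate_network_with_retries(instance_uuid):
        # Hypothetical stand-in for the real call into nova.network.neutron.
        print(f"deallocate_for_instance() for {instance_uuid}")
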
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.294 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.295 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.85527420043945GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.295 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.295 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.387 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance a48b4084-369d-432a-9f47-9378cdcc011f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.387 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.387 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 1b83725c-0af2-491f-98d9-bdb0ed1a5979 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.388 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 40db12af-6ca8-4a4f-88e7-833c3fda87c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.388 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.388 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:18:20 compute-0 ceph-mon[192821]: pgmap v1920: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 68 op/s
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.469 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:18:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951564323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.993 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.002 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.025 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
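
The inventory dict above is what the report client compares against placement. Placement schedules against usable capacity of (total - reserved) * allocation_ratio per resource class; with the numbers from this log that works out as follows:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
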
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.068 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.069 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 296 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 105 op/s
Dec 03 02:18:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2951564323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.583 351492 DEBUG nova.network.neutron [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.602 351492 INFO nova.compute.manager [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 1.31 seconds to deallocate network for instance.
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.648 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.649 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.779 351492 DEBUG nova.compute.manager [req-056be785-aa5d-4bf2-85b6-c7c7d66f2803 req-5104f012-ef35-4ecf-b58b-c9088f4494cb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-vif-deleted-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.807 351492 DEBUG oslo_concurrency.processutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:21 compute-0 podman[450561]: 2025-12-03 02:18:21.875734944 +0000 UTC m=+0.094151704 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 03 02:18:21 compute-0 podman[450553]: 2025-12-03 02:18:21.882353992 +0000 UTC m=+0.115404156 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec 03 02:18:21 compute-0 podman[450554]: 2025-12-03 02:18:21.8861786 +0000 UTC m=+0.113490122 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:18:21 compute-0 podman[450552]: 2025-12-03 02:18:21.903395807 +0000 UTC m=+0.150992303 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:18:21 compute-0 podman[450559]: 2025-12-03 02:18:21.924904736 +0000 UTC m=+0.137816071 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=)
Dec 03 02:18:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:18:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349553752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.248 351492 DEBUG oslo_concurrency.processutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.259 351492 DEBUG nova.compute.provider_tree [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.279 351492 DEBUG nova.scheduler.client.report [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.308 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.342 351492 INFO nova.scheduler.client.report [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Deleted allocations for instance 40db12af-6ca8-4a4f-88e7-833c3fda87c9
Dec 03 02:18:22 compute-0 ceph-mon[192821]: pgmap v1921: 321 pgs: 321 active+clean; 296 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 105 op/s
Dec 03 02:18:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2349553752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.454 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.033 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.070 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.071 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.071 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:18:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 280 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 9.8 KiB/s wr, 122 op/s
Dec 03 02:18:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.438 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.438 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.440 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.440 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:24 compute-0 ceph-mon[192821]: pgmap v1922: 321 pgs: 321 active+clean; 280 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 9.8 KiB/s wr, 122 op/s
Dec 03 02:18:24 compute-0 nova_compute[351485]: 2025-12-03 02:18:24.506 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 10 KiB/s wr, 111 op/s
Dec 03 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.480 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.495 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.497 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.498 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.500 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.501 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.001 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.001 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.251 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.252 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.253 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.254 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.255 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.257 351492 INFO nova.compute.manager [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Terminating instance
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.259 351492 DEBUG nova.compute.manager [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:18:26 compute-0 kernel: tapee5c2dfc-04 (unregistering): left promiscuous mode
Dec 03 02:18:26 compute-0 NetworkManager[48912]: <info>  [1764728306.3639] device (tapee5c2dfc-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.371 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00149|binding|INFO|Releasing lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 from this chassis (sb_readonly=0)
Dec 03 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00150|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 down in Southbound
Dec 03 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00151|binding|INFO|Removing iface tapee5c2dfc-04 ovn-installed in OVS
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.397 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.402 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.403 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a unbound from our chassis
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.406 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fdf214a-0f6e-4e5d-b449-e1988827937a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.408 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.407 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cee0464f-7532-4038-bb88-1b00bf029523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.410 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace which is not needed anymore
Dec 03 02:18:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 03 02:18:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000008.scope: Consumed 46.994s CPU time.
Dec 03 02:18:26 compute-0 systemd-machined[138558]: Machine qemu-11-instance-00000008 terminated.
Dec 03 02:18:26 compute-0 ceph-mon[192821]: pgmap v1923: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 10 KiB/s wr, 111 op/s
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.488 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.496 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.506 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance destroyed successfully.
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.507 351492 DEBUG nova.objects.instance [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'resources' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.525 351492 DEBUG nova.virt.libvirt.vif [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.526 351492 DEBUG nova.network.os_vif_util [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.527 351492 DEBUG nova.network.os_vif_util [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.528 351492 DEBUG os_vif [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.533 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.533 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee5c2dfc-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.539 351492 INFO os_vif [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')
Dec 03 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : haproxy version is 2.8.14-c23fe91
Dec 03 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : path to executable is /usr/sbin/haproxy
Dec 03 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [WARNING]  (448408) : Exiting Master process...
Dec 03 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [ALERT]    (448408) : Current worker (448410) exited with code 143 (Terminated)
Dec 03 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [WARNING]  (448408) : All workers exited. Exiting... (0)
Dec 03 02:18:26 compute-0 systemd[1]: libpod-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95.scope: Deactivated successfully.
Dec 03 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00152|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00153|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec 03 02:18:26 compute-0 podman[450709]: 2025-12-03 02:18:26.639025541 +0000 UTC m=+0.083551214 container died df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.671 351492 DEBUG nova.compute.manager [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.671 351492 DEBUG oslo_concurrency.lockutils [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.672 351492 DEBUG oslo_concurrency.lockutils [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.673 351492 DEBUG oslo_concurrency.lockutils [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.673 351492 DEBUG nova.compute.manager [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.674 351492 DEBUG nova.compute.manager [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eaead3289de756df5c362e51f445187494ce76bdc94cf33a7cf5eb23ba12419-merged.mount: Deactivated successfully.
Dec 03 02:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95-userdata-shm.mount: Deactivated successfully.
Dec 03 02:18:26 compute-0 podman[450709]: 2025-12-03 02:18:26.714000332 +0000 UTC m=+0.158525975 container cleanup df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:18:26 compute-0 systemd[1]: libpod-conmon-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95.scope: Deactivated successfully.
Dec 03 02:18:26 compute-0 podman[450751]: 2025-12-03 02:18:26.819039834 +0000 UTC m=+0.070941508 container remove df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.848 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a77aafb8-529d-4704-8c97-e15a7b6c1db1]: (4, ('Wed Dec  3 02:18:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95)\ndf6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95\nWed Dec  3 02:18:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95)\ndf6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.855 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[28a343d5-08c9-4e75-a2a7-af72dd588151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.856 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:26 compute-0 kernel: tap2fdf214a-00: left promiscuous mode
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.861 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.887 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.890 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a6669048-d2fe-43cc-b40b-218a6166143d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.909 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fde918a2-2f2b-47e8-9c24-0039c9989667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.910 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3cdcb5-d809-4610-b3a1-f4efd3833921]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.929 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[30b5db10-7997-42b5-be3f-73a5127a0d25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707937, 'reachable_time': 18647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450765, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.932 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 03 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.932 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[e00848c9-3415-4dee-8621-a8ef70cb15bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fdf214a\x2d0f6e\x2d4e5d\x2db449\x2de1988827937a.mount: Deactivated successfully.
Dec 03 02:18:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 10 KiB/s wr, 87 op/s
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.245 351492 INFO nova.virt.libvirt.driver [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deleting instance files /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f_del
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.246 351492 INFO nova.virt.libvirt.driver [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deletion of /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f_del complete
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.338 351492 INFO nova.compute.manager [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 1.08 seconds to destroy the instance on the hypervisor.
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.339 351492 DEBUG oslo.service.loopingcall [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.339 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.340 351492 DEBUG nova.compute.manager [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.340 351492 DEBUG nova.network.neutron [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:18:28
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 03 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:18:28 compute-0 ceph-mon[192821]: pgmap v1924: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 10 KiB/s wr, 87 op/s
Dec 03 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.668 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.841 351492 DEBUG nova.compute.manager [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.841 351492 DEBUG oslo_concurrency.lockutils [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 DEBUG oslo_concurrency.lockutils [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 DEBUG oslo_concurrency.lockutils [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 DEBUG nova.compute.manager [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 WARNING nova.compute.manager [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state deleting.
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 10 KiB/s wr, 65 op/s
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:18:29 compute-0 ceph-mgr[193109]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1922561230
Dec 03 02:18:29 compute-0 podman[158098]: time="2025-12-03T02:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec 03 02:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec 03 02:18:30 compute-0 ceph-mon[192821]: pgmap v1925: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 10 KiB/s wr, 65 op/s
Dec 03 02:18:30 compute-0 nova_compute[351485]: 2025-12-03 02:18:30.986 351492 DEBUG nova.network.neutron [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.012 351492 INFO nova.compute.manager [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 3.67 seconds to deallocate network for instance.
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.073 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.074 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 208 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 11 KiB/s wr, 91 op/s
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.123 351492 DEBUG nova.compute.manager [req-47e016ad-955b-4293-a522-39a5f4c36865 req-b68ed872-9658-497d-907b-c11b18d04327 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-deleted-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.222 351492 DEBUG oslo_concurrency.processutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.466 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:18:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476211608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.734 351492 DEBUG oslo_concurrency.processutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.748 351492 DEBUG nova.compute.provider_tree [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.782 351492 DEBUG nova.scheduler.client.report [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.827 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.852 351492 INFO nova.scheduler.client.report [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Deleted allocations for instance a48b4084-369d-432a-9f47-9378cdcc011f
Dec 03 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.926 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
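The terminate path above ends with the resource tracker re-reading Ceph capacity: update_usage takes the "compute_resources" lock, shells out to ceph df (the ceph-mon audit lines at 02:18:31 show the dispatch), finds the inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 unchanged, deletes the instance's placement allocations, and releases the per-instance lock after 5.673s. The subprocess lines come from oslo.concurrency's processutils; a minimal sketch, assuming the client.openstack keyring and /etc/ceph/ceph.conf exist on the host:

    from oslo_concurrency import processutils

    # processutils.execute() logs "Running cmd (subprocess): ..." and
    # 'CMD "..." returned: 0 in 0.512s' exactly as in the DEBUG lines above.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')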
Dec 03 02:18:32 compute-0 ceph-mon[192821]: pgmap v1926: 321 pgs: 321 active+clean; 208 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 11 KiB/s wr, 91 op/s
Dec 03 02:18:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2476211608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 609 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec 03 02:18:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:33 compute-0 nova_compute[351485]: 2025-12-03 02:18:33.670 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.475 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728299.4715793, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.475 351492 INFO nova.compute.manager [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Stopped (Lifecycle Event)
Dec 03 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.495 351492 DEBUG nova.compute.manager [None req-dbc4c0f7-0844-45b4-aef4-abf6a3f47e65 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:34 compute-0 ceph-mon[192821]: pgmap v1927: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 609 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec 03 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:18:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Dec 03 02:18:35 compute-0 nova_compute[351485]: 2025-12-03 02:18:35.970 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:36 compute-0 nova_compute[351485]: 2025-12-03 02:18:36.539 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:36 compute-0 ceph-mon[192821]: pgmap v1928: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Dec 03 02:18:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 03 02:18:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:38 compute-0 nova_compute[351485]: 2025-12-03 02:18:38.479 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:38 compute-0 ceph-mon[192821]: pgmap v1929: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 03 02:18:38 compute-0 nova_compute[351485]: 2025-12-03 02:18:38.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011079409023572312 of space, bias 1.0, pg target 0.33238227070716936 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
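The pg_autoscaler figures above are internally consistent: each "pg target" is the pool's share of raw capacity times its bias times a cluster-wide PG budget, and with the default mon_target_pg_per_osd of 100 and the three OSDs this node is deploying, that budget works out to 300. The raw target is then quantized to a power of two and clamped by pool minimums, which is why fractional targets still map to 16 or 32. A worked check under those assumptions:

    # Assumption: budget = mon_target_pg_per_osd (default 100) * 3 OSDs = 300.
    def raw_pg_target(usage_ratio, bias, budget=300):
        return usage_ratio * bias * budget

    print(raw_pg_target(0.0011079409023572312, 1.0))  # 0.33238... ('vms')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.00061047... ('cephfs.cephfs.meta')

Both results reproduce the logged "pg target" values to the printed precision.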
Dec 03 02:18:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:18:39 compute-0 ceph-mon[192821]: pgmap v1930: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:18:40 compute-0 ovn_controller[89134]: 2025-12-03T02:18:40Z|00154|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec 03 02:18:40 compute-0 nova_compute[351485]: 2025-12-03 02:18:40.864 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.503 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728306.5019524, a48b4084-369d-432a-9f47-9378cdcc011f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.505 351492 INFO nova.compute.manager [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Stopped (Lifecycle Event)
Dec 03 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.536 351492 DEBUG nova.compute.manager [None req-4d284a25-a9ed-4fc5-a505-8f0c8034ecb5 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.543 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:41 compute-0 podman[450791]: 2025-12-03 02:18:41.869857437 +0000 UTC m=+0.097557911 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:18:41 compute-0 podman[450790]: 2025-12-03 02:18:41.879218452 +0000 UTC m=+0.117267519 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 02:18:41 compute-0 podman[450789]: 2025-12-03 02:18:41.897718985 +0000 UTC m=+0.142763200 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 03 02:18:42 compute-0 ceph-mon[192821]: pgmap v1931: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:18:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:42.243 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:42.244 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:18:42 compute-0 nova_compute[351485]: 2025-12-03 02:18:42.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:42 compute-0 nova_compute[351485]: 2025-12-03 02:18:42.629 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Dec 03 02:18:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:43 compute-0 nova_compute[351485]: 2025-12-03 02:18:43.675 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:44 compute-0 ceph-mon[192821]: pgmap v1932: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Dec 03 02:18:44 compute-0 ovn_controller[89134]: 2025-12-03T02:18:44Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:c0:50 10.100.0.14
Dec 03 02:18:44 compute-0 ovn_controller[89134]: 2025-12-03T02:18:44Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:c0:50 10.100.0.14
Dec 03 02:18:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 892 KiB/s wr, 18 op/s
Dec 03 02:18:45 compute-0 sudo[450847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:45 compute-0 sudo[450847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:45 compute-0 sudo[450847]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:45 compute-0 sudo[450872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:18:45 compute-0 sudo[450872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:45 compute-0 sudo[450872]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:45 compute-0 sudo[450897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:45 compute-0 sudo[450897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:45 compute-0 sudo[450897]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:45 compute-0 sudo[450922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:18:45 compute-0 sudo[450922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:46 compute-0 ceph-mon[192821]: pgmap v1933: 321 pgs: 321 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 892 KiB/s wr, 18 op/s
Dec 03 02:18:46 compute-0 nova_compute[351485]: 2025-12-03 02:18:46.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:46 compute-0 sudo[450922]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:18:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ccc24c86-1339-4fb4-96ad-f7b29a8ad047 does not exist
Dec 03 02:18:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d6da5496-a1b5-4a25-b9a0-e610eed8d84d does not exist
Dec 03 02:18:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5f55d163-2b2f-4f04-b646-e92deaa75033 does not exist
Dec 03 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:18:46 compute-0 sudo[450977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:46 compute-0 sudo[450977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:46 compute-0 sudo[450977]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:46 compute-0 sudo[451002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:18:46 compute-0 sudo[451002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:46 compute-0 sudo[451002]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3168155945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3168155945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 207 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec 03 02:18:47 compute-0 sudo[451027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:47 compute-0 sudo[451027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:47 compute-0 sudo[451027]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3168155945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:18:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3168155945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:18:47 compute-0 sudo[451052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:18:47 compute-0 sudo[451052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:47 compute-0 podman[451113]: 2025-12-03 02:18:47.861600823 +0000 UTC m=+0.089806462 container create b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:18:47 compute-0 podman[451113]: 2025-12-03 02:18:47.825596364 +0000 UTC m=+0.053802043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:18:47 compute-0 systemd[1]: Started libpod-conmon-b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf.scope.
Dec 03 02:18:48 compute-0 nova_compute[351485]: 2025-12-03 02:18:48.003 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.046678499 +0000 UTC m=+0.274884178 container init b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.066221882 +0000 UTC m=+0.294427521 container start b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.073210749 +0000 UTC m=+0.301416388 container attach b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 02:18:48 compute-0 ecstatic_bohr[451128]: 167 167
Dec 03 02:18:48 compute-0 systemd[1]: libpod-b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf.scope: Deactivated successfully.
Dec 03 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.08242141 +0000 UTC m=+0.310627049 container died b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f337f5ecc1c7475be1201222517795ff10fcc08b4a0f5b892fb3410f27685b56-merged.mount: Deactivated successfully.
Dec 03 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.156751743 +0000 UTC m=+0.384957352 container remove b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:18:48 compute-0 podman[451129]: 2025-12-03 02:18:48.157881095 +0000 UTC m=+0.178680096 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 03 02:18:48 compute-0 systemd[1]: libpod-conmon-b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf.scope: Deactivated successfully.
Dec 03 02:18:48 compute-0 ceph-mon[192821]: pgmap v1934: 321 pgs: 321 active+clean; 207 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec 03 02:18:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.444029531 +0000 UTC m=+0.096217354 container create 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.40547567 +0000 UTC m=+0.057663493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:18:48 compute-0 systemd[1]: Started libpod-conmon-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope.
Dec 03 02:18:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.625153105 +0000 UTC m=+0.277340988 container init 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.659115216 +0000 UTC m=+0.311303029 container start 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.668294106 +0000 UTC m=+0.320481939 container attach 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:18:48 compute-0 nova_compute[351485]: 2025-12-03 02:18:48.684 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 207 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec 03 02:18:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:49.247 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:50 compute-0 elegant_curie[451187]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:18:50 compute-0 elegant_curie[451187]: --> relative data size: 1.0
Dec 03 02:18:50 compute-0 elegant_curie[451187]: --> All data devices are unavailable
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.018 351492 INFO nova.compute.manager [None req-c1491505-ac29-471f-a2da-cce3edf0bc7c abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Get console output
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.035 448603 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
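The INFO line above records a TypeError that nova.privsep.libvirt deliberately swallows while draining console output: a pty read that yields None instead of b'' cannot be concatenated onto a bytes buffer. The quoted text is the stock CPython message, as this standalone reproduction shows:

    buf = b''
    chunk = None  # e.g. a console pty read that returned no data
    try:
        buf += chunk
    except TypeError as exc:
        print(exc)  # can't concat NoneType to bytes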
Dec 03 02:18:50 compute-0 systemd[1]: libpod-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope: Deactivated successfully.
Dec 03 02:18:50 compute-0 systemd[1]: libpod-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope: Consumed 1.313s CPU time.
Dec 03 02:18:50 compute-0 podman[451171]: 2025-12-03 02:18:50.057208112 +0000 UTC m=+1.709395945 container died 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5-merged.mount: Deactivated successfully.
Dec 03 02:18:50 compute-0 podman[451171]: 2025-12-03 02:18:50.155931536 +0000 UTC m=+1.808119349 container remove 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:18:50 compute-0 systemd[1]: libpod-conmon-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope: Deactivated successfully.
Dec 03 02:18:50 compute-0 sudo[451052]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:50 compute-0 ceph-mon[192821]: pgmap v1935: 321 pgs: 321 active+clean; 207 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec 03 02:18:50 compute-0 sudo[451228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:50 compute-0 sudo[451228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:50 compute-0 sudo[451228]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:50 compute-0 sudo[451253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:18:50 compute-0 sudo[451253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:50 compute-0 sudo[451253]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.491 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.492 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.492 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.492 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.493 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.494 351492 INFO nova.compute.manager [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Terminating instance
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.496 351492 DEBUG nova.compute.manager [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
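The "Acquiring lock" / "acquired" / "released" DEBUG triplets above come from oslo.concurrency's lockutils (note the lockutils.py:404/409/423 call sites), which Nova uses to serialize all work on a single instance UUID. A minimal sketch of the same pattern, assuming only the oslo.concurrency library; the function name below is illustrative, not Nova's actual code:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = '1b83725c-0af2-491f-98d9-bdb0ed1a5979'  # UUID taken from the log above

    # Context-manager form: emits the same style of "Acquiring lock" /
    # "acquired" / "released" DEBUG lines seen in the journal.
    with lockutils.lock(INSTANCE_UUID):
        pass  # the terminate work would happen here

    # Decorator form, as used by ComputeManager.terminate_instance's
    # inner do_terminate_instance function named in the lock messages.
    @lockutils.synchronized(INSTANCE_UUID)
    def do_terminate_instance():
        pass  # hypothetical body
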
Dec 03 02:18:50 compute-0 sudo[451278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:50 compute-0 sudo[451278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:50 compute-0 sudo[451278]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:50 compute-0 kernel: tap025b4c8a-b3 (unregistering): left promiscuous mode
Dec 03 02:18:50 compute-0 NetworkManager[48912]: <info>  [1764728330.6132] device (tap025b4c8a-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:18:50 compute-0 ovn_controller[89134]: 2025-12-03T02:18:50Z|00155|binding|INFO|Releasing lport 025b4c8a-b3c9-4114-95f7-f17506286d3e from this chassis (sb_readonly=0)
Dec 03 02:18:50 compute-0 ovn_controller[89134]: 2025-12-03T02:18:50Z|00156|binding|INFO|Setting lport 025b4c8a-b3c9-4114-95f7-f17506286d3e down in Southbound
Dec 03 02:18:50 compute-0 ovn_controller[89134]: 2025-12-03T02:18:50Z|00157|binding|INFO|Removing iface tap025b4c8a-b3 ovn-installed in OVS
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.630 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.633 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.643 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:c0:50 10.100.0.14'], port_security=['fa:16:3e:24:c0:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b83725c-0af2-491f-98d9-bdb0ed1a5979', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0897a5e4-2e8b-4479-bdb4-a75dc9f6f9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=025b4c8a-b3c9-4114-95f7-f17506286d3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.644 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 025b4c8a-b3c9-4114-95f7-f17506286d3e in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.646 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed008f09-da46-4507-9be2-7398a4728121
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.667 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f07421f1-b485-4c76-b750-d513c20c3b91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 03 02:18:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 40.031s CPU time.
Dec 03 02:18:50 compute-0 systemd-machined[138558]: Machine qemu-12-instance-0000000b terminated.
Dec 03 02:18:50 compute-0 sudo[451303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:18:50 compute-0 sudo[451303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
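The sudo COMMAND line above shows cephadm's host-query pattern: a per-cluster copy of the cephadm script stored under the fsid is run as root and wraps `ceph-volume ... lvm list --format json` in the Ceph container. A hedged re-creation of that call plus parsing of its JSON (the same document the modest_beaver container prints further down, keyed by OSD id):

    import json
    import subprocess

    # All paths, the image digest, and the fsid are copied from the log line above.
    FSID = '3765feb2-36f8-5b86-b74c-64e9221f9c4c'
    CEPHADM = (f'/var/lib/ceph/{FSID}/cephadm.'
               '31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d')
    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    cmd = ['sudo', '/bin/python3', CEPHADM, '--image', IMAGE, '--timeout', '895',
           'ceph-volume', '--fsid', FSID, '--', 'lvm', 'list', '--format', 'json']
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Top-level keys are OSD ids ("0", "1"), each mapping to a list of LVs.
    for osd_id, lvs in json.loads(out).items():
        print(osd_id, lvs[0]['lv_path'], lvs[0]['tags']['ceph.osd_fsid'])
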
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.702 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfef608-d268-4680-b611-5e09fcfdceeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.706 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[47dea3c9-81ed-46c4-af1d-3c9eb708b7a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.744 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4d28b4fe-148d-4675-a709-4c323003ca82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.751 351492 INFO nova.virt.libvirt.driver [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance destroyed successfully.
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.752 351492 DEBUG nova.objects.instance [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'resources' on Instance uuid 1b83725c-0af2-491f-98d9-bdb0ed1a5979 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.769 351492 DEBUG nova.virt.libvirt.vif [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-455653039',display_name='tempest-TestNetworkBasicOps-server-455653039',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-455653039',id=11,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGyLxdmoeScEfSkwzcCczvmCyzQ7WX6pYr3KymEzB5Q09G09n6d3TfahDx7L4JUEY5sh67bwZpAZn3mmGdgttDtWP8gJ/ON+rMTVTFtEqftauFytQHqZZbMU6xxCGBZ6yA==',key_name='tempest-TestNetworkBasicOps-378472767',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:18:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-ux5cl6xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:18:08Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=1b83725c-0af2-491f-98d9-bdb0ed1a5979,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.770 351492 DEBUG nova.network.os_vif_util [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.771 351492 DEBUG nova.network.os_vif_util [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.772 351492 DEBUG os_vif [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.775 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.775 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap025b4c8a-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.782 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.786 351492 INFO os_vif [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3')
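The unplug path above is os-vif's ovs plugin driving an OVSDB transaction through ovsdbapp; the logged DelPortCommand(port=tap025b4c8a-b3, bridge=br-int, if_exists=True) maps directly onto ovsdbapp's del_port command. A minimal standalone sketch of the same transaction, assuming a local OVSDB unix socket (the socket path is an assumption, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local OVS socket path

    # Standard ovsdbapp setup: one IDL, one Connection, one API facade.
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the txn logged above; if_exists makes it a no-op
    # when the tap device is already gone.
    ovs.del_port('tap025b4c8a-b3', bridge='br-int', if_exists=True).execute(
        check_error=True)
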
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.788 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6d550af5-7a77-4a58-942b-2b324d3f8775]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451351, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.813 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3d494fdd-3f66-46e4-b9ca-9276aaeae14c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704225, 'tstamp': 704225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451357, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704229, 'tstamp': 704229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451357, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
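The two privsep replies above are netlink dumps (RTM_NEWLINK and RTM_NEWADDR) taken inside the ovnmeta-<network> namespace; they show taped008f09-d1 holding 10.100.0.2/28 plus the 169.254.169.254/32 metadata address. The agent gets these via pyroute2 running under the privsep daemon; a hedged sketch of the same query done directly (namespace and expected values copied from the log):

    from pyroute2 import NetNS  # pyroute2 is the library the privsep replies wrap

    NS = 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121'  # 'target' field above

    with NetNS(NS) as ns:
        # Link dump: the RTM_NEWLINK message seen in the first reply.
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        # Address dump: the two RTM_NEWADDR messages in the second reply;
        # expect 10.100.0.2/28 and 169.254.169.254/32 here.
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
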
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.820 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.824 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped008f09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.824 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.825 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped008f09-d0, col_values=(('external_ids', {'iface-id': '4fe53946-9a81-46d3-946d-3676da417bd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.825 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.828 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 215 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 03 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.260164818 +0000 UTC m=+0.082384262 container create 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.227245236 +0000 UTC m=+0.049464710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:18:51 compute-0 systemd[1]: Started libpod-conmon-719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd.scope.
Dec 03 02:18:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.405032326 +0000 UTC m=+0.227251810 container init 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.416872971 +0000 UTC m=+0.239092375 container start 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.422248414 +0000 UTC m=+0.244467908 container attach 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:18:51 compute-0 jolly_banach[451423]: 167 167
Dec 03 02:18:51 compute-0 systemd[1]: libpod-719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd.scope: Deactivated successfully.
Dec 03 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.429619232 +0000 UTC m=+0.251838656 container died 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 02:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b5efd33fb09da945d54c1a3c280ad3e739088a1ada9324e13b3d2a97b31600-merged.mount: Deactivated successfully.
Dec 03 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.501491876 +0000 UTC m=+0.323711290 container remove 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:18:51 compute-0 systemd[1]: libpod-conmon-719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd.scope: Deactivated successfully.
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.554 351492 INFO nova.virt.libvirt.driver [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deleting instance files /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979_del
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.555 351492 INFO nova.virt.libvirt.driver [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deletion of /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979_del complete
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.630 351492 INFO nova.compute.manager [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 1.13 seconds to destroy the instance on the hypervisor.
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.631 351492 DEBUG oslo.service.loopingcall [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.632 351492 DEBUG nova.compute.manager [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.633 351492 DEBUG nova.network.neutron [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:18:51 compute-0 podman[451446]: 2025-12-03 02:18:51.779910372 +0000 UTC m=+0.093595979 container create eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 02:18:51 compute-0 podman[451446]: 2025-12-03 02:18:51.739477788 +0000 UTC m=+0.053163445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.853 351492 DEBUG nova.compute.manager [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-unplugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.854 351492 DEBUG oslo_concurrency.lockutils [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.854 351492 DEBUG oslo_concurrency.lockutils [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.854 351492 DEBUG oslo_concurrency.lockutils [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.855 351492 DEBUG nova.compute.manager [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] No waiting events found dispatching network-vif-unplugged-025b4c8a-b3c9-4114-95f7-f17506286d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.855 351492 DEBUG nova.compute.manager [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-unplugged-025b4c8a-b3c9-4114-95f7-f17506286d3e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:18:51 compute-0 systemd[1]: Started libpod-conmon-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope.
Dec 03 02:18:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.007164712 +0000 UTC m=+0.320850289 container init eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.025780548 +0000 UTC m=+0.339466115 container start eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.030105171 +0000 UTC m=+0.343790738 container attach eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 02:18:52 compute-0 podman[451488]: 2025-12-03 02:18:52.06649715 +0000 UTC m=+0.094661229 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, name=ubi9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 02:18:52 compute-0 podman[451465]: 2025-12-03 02:18:52.07426967 +0000 UTC m=+0.136549704 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 02:18:52 compute-0 podman[451463]: 2025-12-03 02:18:52.08699372 +0000 UTC m=+0.173767437 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:18:52 compute-0 podman[451472]: 2025-12-03 02:18:52.087221657 +0000 UTC m=+0.144720716 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:18:52 compute-0 podman[451485]: 2025-12-03 02:18:52.158279227 +0000 UTC m=+0.178351947 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
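The five health_status=healthy lines above are podman's periodic healthchecks firing for the edpm-managed containers; each config_data embeds the configured test command (e.g. '/openstack/healthcheck kepler'). The same check can be triggered on demand, sketched here via subprocess with one container name taken from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when healthy, non-zero otherwise.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'kepler'])
    print('healthy' if result.returncode == 0 else 'unhealthy')
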
Dec 03 02:18:52 compute-0 ceph-mon[192821]: pgmap v1936: 321 pgs: 321 active+clean; 215 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.541 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.541 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.586 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.643 351492 DEBUG nova.network.neutron [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.668 351492 INFO nova.compute.manager [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 1.04 seconds to deallocate network for instance.
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.694 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.694 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.705 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.705 351492 INFO nova.compute.claims [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.713 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.802 351492 DEBUG nova.compute.manager [req-cbb8f252-7cfe-4cbc-8613-b5dc10cb0ab3 req-aad5c0ed-b7b5-4e9e-9bd4-2bf4878579fe 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-deleted-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.859 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
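Before claiming storage for the new instance, nova-compute shells out to `ceph df --format=json` (the processutils line above) to read RBD pool capacity. A hedged sketch of the same call and of pulling the totals out of the JSON; the command is copied verbatim from the log, while the result keys are an assumption based on the reef-era `ceph df` schema and are not shown in this log:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)
    # Assumed schema: top-level 'stats' holds cluster totals in bytes.
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
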
Dec 03 02:18:52 compute-0 modest_beaver[451462]: {
Dec 03 02:18:52 compute-0 modest_beaver[451462]:     "0": [
Dec 03 02:18:52 compute-0 modest_beaver[451462]:         {
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "devices": [
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "/dev/loop3"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             ],
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_name": "ceph_lv0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_size": "21470642176",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "name": "ceph_lv0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "tags": {
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cluster_name": "ceph",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.crush_device_class": "",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.encrypted": "0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osd_id": "0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.type": "block",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.vdo": "0"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             },
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "type": "block",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "vg_name": "ceph_vg0"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:         }
Dec 03 02:18:52 compute-0 modest_beaver[451462]:     ],
Dec 03 02:18:52 compute-0 modest_beaver[451462]:     "1": [
Dec 03 02:18:52 compute-0 modest_beaver[451462]:         {
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "devices": [
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "/dev/loop4"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             ],
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_name": "ceph_lv1",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_size": "21470642176",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "name": "ceph_lv1",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "tags": {
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cluster_name": "ceph",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.crush_device_class": "",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.encrypted": "0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osd_id": "1",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.type": "block",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.vdo": "0"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             },
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "type": "block",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "vg_name": "ceph_vg1"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:         }
Dec 03 02:18:52 compute-0 modest_beaver[451462]:     ],
Dec 03 02:18:52 compute-0 modest_beaver[451462]:     "2": [
Dec 03 02:18:52 compute-0 modest_beaver[451462]:         {
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "devices": [
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "/dev/loop5"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             ],
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_name": "ceph_lv2",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_size": "21470642176",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "name": "ceph_lv2",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "tags": {
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.cluster_name": "ceph",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.crush_device_class": "",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.encrypted": "0",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osd_id": "2",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.type": "block",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:                 "ceph.vdo": "0"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             },
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "type": "block",
Dec 03 02:18:52 compute-0 modest_beaver[451462]:             "vg_name": "ceph_vg2"
Dec 03 02:18:52 compute-0 modest_beaver[451462]:         }
Dec 03 02:18:52 compute-0 modest_beaver[451462]:     ]
Dec 03 02:18:52 compute-0 modest_beaver[451462]: }
Dec 03 02:18:52 compute-0 systemd[1]: libpod-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope: Deactivated successfully.
Dec 03 02:18:52 compute-0 conmon[451462]: conmon eaf2a50f378137f8ecb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope/container/memory.events
Dec 03 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.919167765 +0000 UTC m=+1.232853372 container died eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013-merged.mount: Deactivated successfully.
Dec 03 02:18:53 compute-0 podman[451446]: 2025-12-03 02:18:53.020872593 +0000 UTC m=+1.334558160 container remove eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 02:18:53 compute-0 systemd[1]: libpod-conmon-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope: Deactivated successfully.
Dec 03 02:18:53 compute-0 sshd-session[451509]: Invalid user tidb from 154.113.10.113 port 45322
Dec 03 02:18:53 compute-0 sudo[451303]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 191 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 03 02:18:53 compute-0 sudo[451611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:53 compute-0 sudo[451611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:53 compute-0 sudo[451611]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:53 compute-0 sshd-session[451509]: Received disconnect from 154.113.10.113 port 45322:11: Bye Bye [preauth]
Dec 03 02:18:53 compute-0 sshd-session[451509]: Disconnected from invalid user tidb 154.113.10.113 port 45322 [preauth]
Dec 03 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 03 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 03 02:18:53 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 03 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:53 compute-0 sudo[451636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:18:53 compute-0 sudo[451636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:53 compute-0 sudo[451636]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:18:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833856092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:53 compute-0 sudo[451661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:53 compute-0 sudo[451661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:53 compute-0 sudo[451661]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.481 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.489 351492 DEBUG nova.compute.provider_tree [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.522 351492 DEBUG nova.scheduler.client.report [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.542 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.543 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.544 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:53 compute-0 sudo[451688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:18:53 compute-0 sudo[451688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.604 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.605 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.626 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.632 351492 DEBUG oslo_concurrency.processutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.676 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.686 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.773 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.775 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.775 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Creating image(s)
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.819 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.889 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.944 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.956 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.996 351492 DEBUG nova.compute.manager [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG oslo_concurrency.lockutils [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG oslo_concurrency.lockutils [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG oslo_concurrency.lockutils [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG nova.compute.manager [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] No waiting events found dispatching network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.998 351492 WARNING nova.compute.manager [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received unexpected event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e for instance with vm_state deleted and task_state None.
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.030 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.030 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.031 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.031 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.070 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.086 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 48201127-9aa0-4cde-a41d-6790411480a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.112 351492 DEBUG nova.policy [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2de48f7608ea45c8ac558125d72373c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537960280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.165 351492 DEBUG oslo_concurrency.processutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.171617461 +0000 UTC m=+0.068038826 container create 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.176 351492 DEBUG nova.compute.provider_tree [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.198 351492 DEBUG nova.scheduler.client.report [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:18:54 compute-0 systemd[1]: Started libpod-conmon-6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5.scope.
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.232 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.148147057 +0000 UTC m=+0.044568452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:18:54 compute-0 ceph-mon[192821]: pgmap v1937: 321 pgs: 321 active+clean; 191 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 03 02:18:54 compute-0 ceph-mon[192821]: osdmap e136: 3 total, 3 up, 3 in
Dec 03 02:18:54 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2833856092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:54 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/537960280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.273 351492 INFO nova.scheduler.client.report [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Deleted allocations for instance 1b83725c-0af2-491f-98d9-bdb0ed1a5979
Dec 03 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.302784942 +0000 UTC m=+0.199206377 container init 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.318396423 +0000 UTC m=+0.214817808 container start 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:18:54 compute-0 reverent_aryabhata[451876]: 167 167
Dec 03 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.329506268 +0000 UTC m=+0.225927663 container attach 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 02:18:54 compute-0 systemd[1]: libpod-6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5.scope: Deactivated successfully.
Dec 03 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.332299077 +0000 UTC m=+0.228720482 container died 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.348 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e6ab937557593bcf0fd400e047d813bc9be199a329879dc9e782760395df42-merged.mount: Deactivated successfully.
Dec 03 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.4179508 +0000 UTC m=+0.314372165 container remove 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:18:54 compute-0 systemd[1]: libpod-conmon-6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5.scope: Deactivated successfully.
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.511 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 48201127-9aa0-4cde-a41d-6790411480a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.677 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] resizing rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.680608082 +0000 UTC m=+0.088022362 container create db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.639506789 +0000 UTC m=+0.046921099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:18:54 compute-0 systemd[1]: Started libpod-conmon-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope.
Dec 03 02:18:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.830993536 +0000 UTC m=+0.238407896 container init db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.851446995 +0000 UTC m=+0.258861315 container start db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.857718873 +0000 UTC m=+0.265133193 container attach db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.938 351492 DEBUG nova.objects.instance [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 48201127-9aa0-4cde-a41d-6790411480a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.967 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.968 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Ensure instance console log exists: /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.969 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.970 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.970 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 161 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.3 MiB/s wr, 100 op/s
Dec 03 02:18:55 compute-0 nova_compute[351485]: 2025-12-03 02:18:55.778 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]: {
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "osd_id": 2,
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "type": "bluestore"
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:     },
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "osd_id": 1,
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "type": "bluestore"
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:     },
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "osd_id": 0,
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:         "type": "bluestore"
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]:     }
Dec 03 02:18:56 compute-0 eloquent_sinoussi[451973]: }
Dec 03 02:18:56 compute-0 systemd[1]: libpod-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope: Deactivated successfully.
Dec 03 02:18:56 compute-0 podman[451919]: 2025-12-03 02:18:56.09506451 +0000 UTC m=+1.502478790 container died db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:18:56 compute-0 systemd[1]: libpod-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope: Consumed 1.230s CPU time.
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.172 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Successfully created port: 0d927baf-41d2-458f-b4c0-1218ba0eec13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719-merged.mount: Deactivated successfully.
Dec 03 02:18:56 compute-0 podman[451919]: 2025-12-03 02:18:56.222369002 +0000 UTC m=+1.629783272 container remove db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 02:18:56 compute-0 systemd[1]: libpod-conmon-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope: Deactivated successfully.
Dec 03 02:18:56 compute-0 sudo[451688]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 03 02:18:56 compute-0 ceph-mon[192821]: pgmap v1939: 321 pgs: 321 active+clean; 161 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.3 MiB/s wr, 100 op/s
Dec 03 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 03 02:18:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 03 02:18:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:18:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:18:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 592e8a07-ea11-4731-9439-ea8d4cdd9bea does not exist
Dec 03 02:18:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c84b78c6-74fa-45e6-93ec-ed080e427690 does not exist
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.382 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.382 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.383 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.383 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.383 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.384 351492 INFO nova.compute.manager [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Terminating instance
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.386 351492 DEBUG nova.compute.manager [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:18:56 compute-0 sudo[452037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:18:56 compute-0 sudo[452037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:56 compute-0 sudo[452037]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:56 compute-0 kernel: tapae5db7e6-7a (unregistering): left promiscuous mode
Dec 03 02:18:56 compute-0 NetworkManager[48912]: <info>  [1764728336.5130] device (tapae5db7e6-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00158|binding|INFO|Releasing lport ae5db7e6-7a7a-4116-954a-be851ee02864 from this chassis (sb_readonly=0)
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.535 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00159|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 down in Southbound
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00160|binding|INFO|Removing iface tapae5db7e6-7a ovn-installed in OVS
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.540 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.546 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.547 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.549 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed008f09-da46-4507-9be2-7398a4728121, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.551 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[89551710-9ee5-41dc-8639-97b953d73237]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.552 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 namespace which is not needed anymore
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.573 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 sudo[452062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:18:56 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 03 02:18:56 compute-0 sudo[452062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:18:56 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 52.546s CPU time.
Dec 03 02:18:56 compute-0 sudo[452062]: pam_unix(sudo:session): session closed for user root
Dec 03 02:18:56 compute-0 systemd-machined[138558]: Machine qemu-10-instance-0000000a terminated.
Dec 03 02:18:56 compute-0 kernel: tapae5db7e6-7a: entered promiscuous mode
Dec 03 02:18:56 compute-0 kernel: tapae5db7e6-7a (unregistering): left promiscuous mode
Dec 03 02:18:56 compute-0 NetworkManager[48912]: <info>  [1764728336.6213] manager: (tapae5db7e6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00161|binding|INFO|Claiming lport ae5db7e6-7a7a-4116-954a-be851ee02864 for this chassis.
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00162|binding|INFO|ae5db7e6-7a7a-4116-954a-be851ee02864: Claiming fa:16:3e:ed:5c:3e 10.100.0.3
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.642 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.655 351492 INFO nova.virt.libvirt.driver [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance destroyed successfully.
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.656 351492 DEBUG nova.objects.instance [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'resources' on Instance uuid 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00163|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 ovn-installed in OVS
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00164|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 up in Southbound
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00165|binding|INFO|Releasing lport ae5db7e6-7a7a-4116-954a-be851ee02864 from this chassis (sb_readonly=1)
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00166|binding|INFO|Removing iface tapae5db7e6-7a ovn-installed in OVS
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00167|if_status|INFO|Not setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 down as sb is readonly
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.671 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.676 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00168|binding|INFO|Releasing lport ae5db7e6-7a7a-4116-954a-be851ee02864 from this chassis (sb_readonly=0)
Dec 03 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00169|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 down in Southbound
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.679 351492 DEBUG nova.virt.libvirt.vif [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:16:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141861820',display_name='tempest-TestNetworkBasicOps-server-2141861820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141861820',id=10,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDI3XAJe/oWUFcBwASHQKy1+64OXjmmyB8m7y5N7HAPNoYJg/K1iQtuEUIT2NyhA+m3otLmx2JBqvfSdTGVgxCze3o124/xouvwXfOAKv+FU1Zz518hn/q6Xt9p0SK00+w==',key_name='tempest-TestNetworkBasicOps-1925623369',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:16:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-90hgdj1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:16:51Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.679 351492 DEBUG nova.network.os_vif_util [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.680 351492 DEBUG nova.network.os_vif_util [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.681 351492 DEBUG os_vif [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.683 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.684 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae5db7e6-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.685 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.686 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.688 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.695 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.697 351492 INFO os_vif [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a')
Dec 03 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : haproxy version is 2.8.14-c23fe91
Dec 03 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : path to executable is /usr/sbin/haproxy
Dec 03 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [WARNING]  (447659) : Exiting Master process...
Dec 03 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [ALERT]    (447659) : Current worker (447661) exited with code 143 (Terminated)
Dec 03 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [WARNING]  (447659) : All workers exited. Exiting... (0)
Dec 03 02:18:56 compute-0 systemd[1]: libpod-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57.scope: Deactivated successfully.
Dec 03 02:18:56 compute-0 podman[452116]: 2025-12-03 02:18:56.786212205 +0000 UTC m=+0.070715922 container died abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 03 02:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57-userdata-shm.mount: Deactivated successfully.
Dec 03 02:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6310634a2e9b69b7fec86a833550521f2d887dce434572f35b449a118a1fc6ac-merged.mount: Deactivated successfully.
Dec 03 02:18:56 compute-0 podman[452116]: 2025-12-03 02:18:56.84258115 +0000 UTC m=+0.127084857 container cleanup abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:18:56 compute-0 systemd[1]: libpod-conmon-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57.scope: Deactivated successfully.
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.919 351492 DEBUG nova.compute.manager [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.919 351492 DEBUG oslo_concurrency.lockutils [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG oslo_concurrency.lockutils [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG oslo_concurrency.lockutils [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG nova.compute.manager [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG nova.compute.manager [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:18:56 compute-0 podman[452161]: 2025-12-03 02:18:56.958773167 +0000 UTC m=+0.069766435 container remove abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.968 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd9360d-ab90-4753-8593-6569c66ba2a8]: (4, ('Wed Dec  3 02:18:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 (abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57)\nabc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57\nWed Dec  3 02:18:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 (abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57)\nabc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.970 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f83f16fb-7c33-4ba4-94d8-facca821f446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.972 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:18:56 compute-0 kernel: taped008f09-d0: left promiscuous mode
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.979 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.993 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.995 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e253b9f8-b5d2-4bc1-865e-4df78aba807a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.011 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3f645acd-e73e-4af8-9e54-c2e71c65dcf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.012 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e85b816c-df94-41e7-b994-9a47d978bdfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.034 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[35d4a584-d369-487a-9fa0-a280b8b8c9b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704204, 'reachable_time': 32145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452179, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.038 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.038 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[135ceaa4-52d7-4673-9488-201e60bcb061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:57 compute-0 systemd[1]: run-netns-ovnmeta\x2ded008f09\x2dda46\x2d4507\x2d9be2\x2d7398a4728121.mount: Deactivated successfully.
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.040 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.043 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed008f09-da46-4507-9be2-7398a4728121, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.044 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef041e4-60b5-4855-b632-cf5922d7441a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.045 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.047 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed008f09-da46-4507-9be2-7398a4728121, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.048 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[40b828cc-1ffd-42a7-8c8c-9f5cd7cbe296]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:18:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 199 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 5.0 MiB/s wr, 150 op/s
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.285 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Successfully updated port: 0d927baf-41d2-458f-b4c0-1218ba0eec13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:18:57 compute-0 ceph-mon[192821]: osdmap e137: 3 total, 3 up, 3 in
Dec 03 02:18:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:18:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.299 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.299 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquired lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.299 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.486 351492 DEBUG nova.compute.manager [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.486 351492 DEBUG nova.compute.manager [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing instance network info cache due to event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.487 351492 DEBUG oslo_concurrency.lockutils [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.527 351492 INFO nova.virt.libvirt.driver [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deleting instance files /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_del
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.528 351492 INFO nova.virt.libvirt.driver [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deletion of /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_del complete
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.535 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.635 351492 INFO nova.compute.manager [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 1.25 seconds to destroy the instance on the hypervisor.
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.636 351492 DEBUG oslo.service.loopingcall [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.637 351492 DEBUG nova.compute.manager [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.637 351492 DEBUG nova.network.neutron [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:18:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:18:58 compute-0 ceph-mon[192821]: pgmap v1941: 321 pgs: 321 active+clean; 199 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 5.0 MiB/s wr, 150 op/s
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.346114) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338346187, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 727, "num_deletes": 251, "total_data_size": 862929, "memory_usage": 876664, "flush_reason": "Manual Compaction"}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338357293, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 855531, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39387, "largest_seqno": 40113, "table_properties": {"data_size": 851729, "index_size": 1582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 7651, "raw_average_key_size": 17, "raw_value_size": 844093, "raw_average_value_size": 1892, "num_data_blocks": 70, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728284, "oldest_key_time": 1764728284, "file_creation_time": 1764728338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 11256 microseconds, and 6042 cpu microseconds.
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.357380) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 855531 bytes OK
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.357405) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.360170) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.360193) EVENT_LOG_v1 {"time_micros": 1764728338360186, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.360215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 859173, prev total WAL file size 859173, number of live WAL files 2.
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.361350) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(835KB)], [92(6911KB)]
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338361438, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 7932690, "oldest_snapshot_seqno": -1}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5531 keys, 7196774 bytes, temperature: kUnknown
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338418356, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7196774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7162279, "index_size": 19537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 143997, "raw_average_key_size": 26, "raw_value_size": 7064601, "raw_average_value_size": 1277, "num_data_blocks": 776, "num_entries": 5531, "num_filter_entries": 5531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.419229) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7196774 bytes
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.422701) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.8 rd, 125.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.7 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(17.7) write-amplify(8.4) OK, records in: 6049, records dropped: 518 output_compression: NoCompression
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.422739) EVENT_LOG_v1 {"time_micros": 1764728338422721, "job": 54, "event": "compaction_finished", "compaction_time_micros": 57563, "compaction_time_cpu_micros": 35465, "output_level": 6, "num_output_files": 1, "total_output_size": 7196774, "num_input_records": 6049, "num_output_records": 5531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338424167, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338425895, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.361186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:18:58 compute-0 nova_compute[351485]: 2025-12-03 02:18:58.690 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:18:58 compute-0 nova_compute[351485]: 2025-12-03 02:18:58.939 351492 DEBUG nova.network.neutron [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:58 compute-0 nova_compute[351485]: 2025-12-03 02:18:58.959 351492 INFO nova.compute.manager [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 1.32 seconds to deallocate network for instance.
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.016 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.017 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 199 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 4.9 MiB/s wr, 131 op/s
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.112 351492 DEBUG oslo_concurrency.processutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.381 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.407 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Releasing lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.408 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance network_info: |[{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.410 351492 DEBUG oslo_concurrency.lockutils [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.411 351492 DEBUG nova.network.neutron [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.413 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start _get_guest_xml network_info=[{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.418 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.419 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.420 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.421 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.421 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.421 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.424 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.424 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.424 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.428 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.438 351492 WARNING nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.444 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.445 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.457 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.457 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.458 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.458 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.461 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.465 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:18:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:18:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083283326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.626 351492 DEBUG oslo_concurrency.processutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.640 351492 DEBUG nova.compute.provider_tree [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:18:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:59.650 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:18:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:59.654 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:18:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:59.655 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.683 351492 DEBUG nova.scheduler.client.report [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.702 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.731 351492 INFO nova.scheduler.client.report [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Deleted allocations for instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592
Dec 03 02:18:59 compute-0 podman[158098]: time="2025-12-03T02:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8194 "" "Go-http-client/1.1"
Dec 03 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.816 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:18:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:18:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244570002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.024 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.077 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.088 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:00 compute-0 ceph-mon[192821]: pgmap v1942: 321 pgs: 321 active+clean; 199 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 4.9 MiB/s wr, 131 op/s
Dec 03 02:19:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3083283326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:19:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2244570002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:19:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931761169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.620 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.623 351492 DEBUG nova.virt.libvirt.vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:18:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1226962462',display_name='tempest-TestServerBasicOps-server-1226962462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1226962462',id=13,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOrfBag91AFIZ3cgT/3v6DEUVxmWorZPsTvJBCT3v1fcFACxQDoahVOND6soOw4PzOfL8jvcBATzzdMnLLkWJn8sw8+PBGsPmPnV6EhNG8NjAI9UA8OPVUdoPITGd7W+8A==',key_name='tempest-TestServerBasicOps-954582748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38f1a4b24bc74f43a70b0fc06f48b9a2',ramdisk_id='',reservation_id='r-qt8l6h9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1222487710',owner_user_name='tempest-TestServerBasicOps-1222487710-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:18:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2de48f7608ea45c8ac558125d72373c4',uuid=48201127-9aa0-4cde-a41d-6790411480a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.624 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converting VIF {"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.625 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.628 351492 DEBUG nova.objects.instance [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 48201127-9aa0-4cde-a41d-6790411480a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.661 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <uuid>48201127-9aa0-4cde-a41d-6790411480a4</uuid>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <name>instance-0000000d</name>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <nova:name>tempest-TestServerBasicOps-server-1226962462</nova:name>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:18:59</nova:creationTime>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:user uuid="2de48f7608ea45c8ac558125d72373c4">tempest-TestServerBasicOps-1222487710-project-member</nova:user>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:project uuid="38f1a4b24bc74f43a70b0fc06f48b9a2">tempest-TestServerBasicOps-1222487710</nova:project>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <nova:port uuid="0d927baf-41d2-458f-b4c0-1218ba0eec13">
Dec 03 02:19:00 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <system>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <entry name="serial">48201127-9aa0-4cde-a41d-6790411480a4</entry>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <entry name="uuid">48201127-9aa0-4cde-a41d-6790411480a4</entry>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </system>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <os>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   </os>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <features>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   </features>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/48201127-9aa0-4cde-a41d-6790411480a4_disk">
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       </source>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/48201127-9aa0-4cde-a41d-6790411480a4_disk.config">
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       </source>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:19:00 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:55:61:16"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <target dev="tap0d927baf-41"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/console.log" append="off"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <video>
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </video>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:19:00 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:19:00 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:19:00 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:19:00 compute-0 nova_compute[351485]: </domain>
Dec 03 02:19:00 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.662 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Preparing to wait for external event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.663 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.663 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.664 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.665 351492 DEBUG nova.virt.libvirt.vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:18:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1226962462',display_name='tempest-TestServerBasicOps-server-1226962462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1226962462',id=13,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOrfBag91AFIZ3cgT/3v6DEUVxmWorZPsTvJBCT3v1fcFACxQDoahVOND6soOw4PzOfL8jvcBATzzdMnLLkWJn8sw8+PBGsPmPnV6EhNG8NjAI9UA8OPVUdoPITGd7W+8A==',key_name='tempest-TestServerBasicOps-954582748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38f1a4b24bc74f43a70b0fc06f48b9a2',ramdisk_id='',reservation_id='r-qt8l6h9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1222487710',owner_user_name='tempest-TestServerBasicOps-1222487710-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:18:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2de48f7608ea45c8ac558125d72373c4',uuid=48201127-9aa0-4cde-a41d-6790411480a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.665 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converting VIF {"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.666 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.667 351492 DEBUG os_vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.668 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.669 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.669 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.675 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.675 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d927baf-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.676 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d927baf-41, col_values=(('external_ids', {'iface-id': '0d927baf-41d2-458f-b4c0-1218ba0eec13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:61:16', 'vm-uuid': '48201127-9aa0-4cde-a41d-6790411480a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.679 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:00 compute-0 NetworkManager[48912]: <info>  [1764728340.6820] manager: (tap0d927baf-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.683 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.693 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.694 351492 INFO os_vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41')
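
The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are how os-vif plugs an OVS VIF: ensure br-int exists, add the tap port, then set the external_ids keys that let OVN bind the logical port to this interface. A rough equivalent driven through ovsdbapp directly; the values come from the log, while the OVSDB socket path is an assumption for illustration:

    # Rough equivalent of the transaction logged above (the db.sock path
    # is an assumption, not taken from this log).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap0d927baf-41', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap0d927baf-41',
            ('external_ids', {
                'iface-id': '0d927baf-41d2-458f-b4c0-1218ba0eec13',
                'attached-mac': 'fa:16:3e:55:61:16',
                'vm-uuid': '48201127-9aa0-4cde-a41d-6790411480a4'})))

When the commands match the current database state, ovsdbapp logs "Transaction caused no change", as seen for the AddBridgeCommand above: br-int already existed.
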
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.782 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.783 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.783 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] No VIF found with MAC fa:16:3e:55:61:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.783 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Using config drive
Dec 03 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.819 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
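
Before importing the config drive, Nova's rbd_utils probes Ceph for the target image; the "does not exist" DEBUG above is the expected miss on first boot. A minimal sketch of the same probe with the rados/rbd Python bindings; the pool, client id, and conf path are the ones shown on the rbd import command further below:

    # Existence probe against the 'vms' pool, mirroring the rbd_utils check.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '48201127-9aa0-4cde-a41d-6790411480a4_disk.config'):
                print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')  # the case logged here
        finally:
            ioctx.close()
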
Dec 03 02:19:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 150 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 MiB/s wr, 163 op/s
Dec 03 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.305 351492 DEBUG nova.compute.manager [req-b4673748-32ec-4525-90e4-65789f68cb0f req-77a7407d-9013-46af-bb9d-5fb4c6477ed6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-deleted-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:19:01 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1931761169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.509 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Creating config drive at /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config
Dec 03 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.516 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcazr1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.665 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcazr1d" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.724 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.737 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config 48201127-9aa0-4cde-a41d-6790411480a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.065 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config 48201127-9aa0-4cde-a41d-6790411480a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
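
Both external commands in this sequence, mkisofs to master the config-drive ISO and rbd import to push it into the vms pool, go through oslo.concurrency's processutils, which produces the paired "Running cmd (subprocess)" / "CMD ... returned: 0" DEBUG lines. A hedged sketch of the same helper call, with arguments copied from the log:

    # processutils.execute() runs the command, returns (stdout, stderr),
    # and raises ProcessExecutionError on a non-zero exit code.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config',
        '48201127-9aa0-4cde-a41d-6790411480a4_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
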
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.066 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deleting local config drive /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config because it was imported into RBD.
Dec 03 02:19:02 compute-0 kernel: tap0d927baf-41: entered promiscuous mode
Dec 03 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.1810] manager: (tap0d927baf-41): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Dec 03 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00170|binding|INFO|Claiming lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 for this chassis.
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.182 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00171|binding|INFO|0d927baf-41d2-458f-b4c0-1218ba0eec13: Claiming fa:16:3e:55:61:16 10.100.0.9
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.195 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:61:16 10.100.0.9'], port_security=['fa:16:3e:55:61:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '48201127-9aa0-4cde-a41d-6790411480a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b46a3397-654d-4ceb-be75-a322ea7e5091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ad947c5-c226-4f50-af5d-711cff08343d b2c98479-d787-4d5e-b71b-1dd64682dc39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2444ad0-b9d4-4c2c-9115-6ef22db7fd9a, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=0d927baf-41d2-458f-b4c0-1218ba0eec13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.197 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 0d927baf-41d2-458f-b4c0-1218ba0eec13 in datapath b46a3397-654d-4ceb-be75-a322ea7e5091 bound to our chassis
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.202 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b46a3397-654d-4ceb-be75-a322ea7e5091
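
The metadata agent learns about the binding from the OVN southbound database: ovsdbapp's event machinery matches the Port_Binding row update (the chassis column going from empty to this host) and the agent then provisions metadata for the network. A skeleton of how such a row event is typically declared; this is a simplification, not Neutron's exact class:

    # Skeleton of an ovsdbapp row event for Port_Binding updates. match_fn
    # is the hook the Neutron OVN agent overrides; older ovsdbapp-based code
    # overrides matches() directly instead.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, handler):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.handler = handler

        def match_fn(self, event, row, old):
            # Fire only when the port has just been bound to a chassis.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            self.handler(row.logical_port, row.datapath)
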
Dec 03 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00172|binding|INFO|Setting lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 ovn-installed in OVS
Dec 03 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00173|binding|INFO|Setting lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 up in Southbound
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.220 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.221 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d2609792-7a37-4e92-9ebf-0a2e6806c61e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.222 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb46a3397-61 in ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.228 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb46a3397-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.229 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a26db8b7-fdbb-4e1e-a533-7cc077b73f88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.230 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.230 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[406f72ff-a3e9-4993-9ea7-3b76e137630b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
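
Provisioning creates a veth pair for the network: tapb46a3397-60 stays on the host (plugged into br-int a few lines below) while tapb46a3397-61 moves into the ovnmeta- namespace; the privsep replies above are those netlink operations crossing the privilege boundary. A hedged pyroute2 sketch of the same plumbing, with interface and namespace names from the log; it requires root and an existing namespace under /var/run/netns:

    # Create the veth pair and move one end into the ovnmeta namespace,
    # roughly what neutron's privileged ip_lib helpers do via privsep.
    from pyroute2 import IPRoute

    ns = 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091'
    ipr = IPRoute()
    ipr.link('add', ifname='tapb46a3397-60', kind='veth', peer='tapb46a3397-61')
    peer = ipr.link_lookup(ifname='tapb46a3397-61')[0]
    ipr.link('set', index=peer, net_ns_fd=ns)    # push the peer into the netns
    host = ipr.link_lookup(ifname='tapb46a3397-60')[0]
    ipr.link('set', index=host, state='up')
    ipr.close()
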
Dec 03 02:19:02 compute-0 systemd-udevd[452341]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.254 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[db02e0be-6683-437d-ae59-2b1ab9a402f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 systemd-machined[138558]: New machine qemu-14-instance-0000000d.
Dec 03 02:19:02 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec 03 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.2802] device (tap0d927baf-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.2860] device (tap0d927baf-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.285 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a14e12-08a7-4220-9ca1-b233ba052055]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.330 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[201ba08e-9948-4daf-8f98-e0b07823eb82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.3399] manager: (tapb46a3397-60): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Dec 03 02:19:02 compute-0 systemd-udevd[452344]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.339 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a83c9d42-1fc8-4fae-bbe8-ec61f37f585f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.387 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[a84bb635-2c61-4331-9546-95ab08866aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.392 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[58607d3c-f5f1-477d-8478-61208b210359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ceph-mon[192821]: pgmap v1943: 321 pgs: 321 active+clean; 150 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 MiB/s wr, 163 op/s
Dec 03 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.4276] device (tapb46a3397-60): carrier: link connected
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.438 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[a43fdf3e-8cf1-4ce4-b443-119d2469ea39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.468 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[944c0c59-b480-4861-9657-7d9b3be3e83e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb46a3397-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:fe:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718190, 'reachable_time': 22237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452372, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.498 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[75678aea-4b70-40e2-9d2b-a27b1a8f3b37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:fe57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718190, 'tstamp': 718190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452373, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.531 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5554972d-600b-4cf7-bf18-cb52dfcb858b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb46a3397-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:fe:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718190, 'reachable_time': 22237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 452374, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.588 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[49f763b2-16e7-4da1-b2e9-74f3e9063e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.640 351492 DEBUG nova.network.neutron [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updated VIF entry in instance network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.641 351492 DEBUG nova.network.neutron [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.661 351492 DEBUG oslo_concurrency.lockutils [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.703 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ae20217e-dd25-4204-9a6e-93f481bb0dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.705 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb46a3397-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.705 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.706 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb46a3397-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:02 compute-0 kernel: tapb46a3397-60: entered promiscuous mode
Dec 03 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.7113] manager: (tapb46a3397-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.717 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb46a3397-60, col_values=(('external_ids', {'iface-id': 'b45ed026-f02f-47d3-980a-9a8302853040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.719 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00174|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.730 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.731 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b46a3397-654d-4ceb-be75-a322ea7e5091.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b46a3397-654d-4ceb-be75-a322ea7e5091.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.737 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d9046675-17d6-4161-a7a2-6844456e70ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.738 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-b46a3397-654d-4ceb-be75-a322ea7e5091
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/b46a3397-654d-4ceb-be75-a322ea7e5091.pid.haproxy
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID b46a3397-654d-4ceb-be75-a322ea7e5091
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.739 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'env', 'PROCESS_TAG=haproxy-b46a3397-654d-4ceb-be75-a322ea7e5091', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b46a3397-654d-4ceb-be75-a322ea7e5091.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
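
The config rendered above makes haproxy, running inside the ovnmeta- namespace, listen on the link-local metadata address 169.254.169.254:80 and forward each request to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding an X-OVN-Network-ID header so the agent can resolve which instance is asking. The launch goes through neutron-rootwrap as logged above; a simplified equivalent follows (in this deployment the haproxy entry point is evidently a wrapper that starts the podman container seen a few lines below):

    # Simplified version of the rootwrap command logged above; needs root.
    import subprocess

    net = 'b46a3397-654d-4ceb-be75-a322ea7e5091'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{net}', 'haproxy', '-f',
         f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'],
        check=True)
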
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.765 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.992 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.994 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.017 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:19:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 4.3 MiB/s wr, 146 op/s
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.122 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.123 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.139 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.140 351492 INFO nova.compute.claims [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.175 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728343.1748834, 48201127-9aa0-4cde-a41d-6790411480a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.176 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Started (Lifecycle Event)
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.204 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.213 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728343.1750774, 48201127-9aa0-4cde-a41d-6790411480a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.213 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Paused (Lifecycle Event)
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.227 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.233 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.252 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] During sync_power_state the instance has a pending task (spawning). Skip.
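
The sync compares the DB value against what libvirt reports: DB power_state 0 is NOSTATE (the instance has never been synced), while VM power_state 3 is PAUSED, matching the "VM Paused" lifecycle event; the Resumed event below syncs against power_state 1, RUNNING. The relevant constants, as defined in nova/compute/power_state.py:

    # Nova power-state constants referenced by the sync messages above.
    NOSTATE = 0x00     # DB value before the first successful sync
    RUNNING = 0x01     # seen with the Resumed lifecycle event below
    PAUSED = 0x03      # seen here, with the Paused lifecycle event
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07
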
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.266 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.313168101 +0000 UTC m=+0.086348764 container create 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 03 02:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.271296966 +0000 UTC m=+0.044477639 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:19:03 compute-0 systemd[1]: Started libpod-conmon-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb.scope.
Dec 03 02:19:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3ca008127e0843a16153cba25a8cdfe9386b435396ea086db82b591e22278b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.485084065 +0000 UTC m=+0.258264798 container init 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 03 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.506673096 +0000 UTC m=+0.279853799 container start 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.548 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.550 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.551 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:03 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : New worker (452488) forked
Dec 03 02:19:03 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : Loading success.
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.553 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.554 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Processing event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.555 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.556 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.557 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.558 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.559 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] No waiting events found dispatching network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.560 351492 WARNING nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received unexpected event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 for instance with vm_state building and task_state spawning.
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.562 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
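
This block is Nova's external-event handshake. The spawning thread registered a waiter for network-vif-plugged before plugging the VIF (the _create_or_get_event lock at the top of this excerpt); Neutron then delivered the event twice through the API (the two "Received event" lines above). The first delivery popped and satisfied the waiter; the second found no waiter left, hence the "Received unexpected event" WARNING, after which the spawning thread's wait completed in 0 seconds. A toy sketch of the register/pop pattern; plain threading is used for clarity, whereas Nova's real implementation is eventlet-based and per-instance locked:

    # Toy version of prepare_for_instance_event / pop_instance_event.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}        # (instance_uuid, event_name) -> Event
            self._lock = threading.Lock()

        def prepare(self, uuid, name):
            # Called by the spawning thread before triggering the action.
            with self._lock:
                return self._events.setdefault((uuid, name), threading.Event())

        def pop(self, uuid, name):
            # Called when the external event arrives from Neutron.
            with self._lock:
                waiter = self._events.pop((uuid, name), None)
            if waiter is None:
                print('No waiting events found dispatching', name)  # WARNING case
            else:
                waiter.set()
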
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.573 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728343.5719786, 48201127-9aa0-4cde-a41d-6790411480a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.574 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Resumed (Lifecycle Event)
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.578 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.587 351492 INFO nova.virt.libvirt.driver [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance spawned successfully.
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.588 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.597 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.607 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.625 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.626 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.626 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.627 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.628 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.628 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.637 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.693 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.703 351492 INFO nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 9.93 seconds to spawn the instance on the hypervisor.
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.703 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.775 351492 INFO nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 11.11 seconds to build instance.
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.796 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259948269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.845 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.854 351492 DEBUG nova.compute.provider_tree [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.871 351492 DEBUG nova.scheduler.client.report [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.897 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.897 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.954 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.955 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.979 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.001 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.093 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.095 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.095 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Creating image(s)
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.137 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.177 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.232 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.241 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.242 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.270 351492 DEBUG nova.policy [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63f39ac2863946b8b817457e689ff933', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:19:04 compute-0 ceph-mon[192821]: pgmap v1944: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 4.3 MiB/s wr, 146 op/s
Dec 03 02:19:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/259948269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:19:05 compute-0 ovn_controller[89134]: 2025-12-03T02:19:05Z|00175|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 3.4 MiB/s wr, 118 op/s
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.107 351492 DEBUG nova.virt.libvirt.imagebackend [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/8876482c-db67-48c0-9203-60685152fc9d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/8876482c-db67-48c0-9203-60685152fc9d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 03 02:19:05 compute-0 ovn_controller[89134]: 2025-12-03T02:19:05Z|00176|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.347 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.679 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.748 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728330.7424986, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.748 351492 INFO nova.compute.manager [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Stopped (Lifecycle Event)
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.771 351492 DEBUG nova.compute.manager [None req-8f9b20d3-6de9-4f77-8230-439e09794c86 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.867 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Successfully created port: f36a9f58-d7c9-4f05-942d-5a2c4cce705a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:19:06 compute-0 ceph-mon[192821]: pgmap v1945: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 3.4 MiB/s wr, 118 op/s
Dec 03 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.728 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.828 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.829 351492 DEBUG nova.virt.images [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] 8876482c-db67-48c0-9203-60685152fc9d was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 03 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.830 351492 DEBUG nova.privsep.utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 03 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.831 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.084 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.093 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 124 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 915 KiB/s wr, 132 op/s
Dec 03 02:19:07 compute-0 NetworkManager[48912]: <info>  [1764728347.1573] manager: (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.156 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:07 compute-0 NetworkManager[48912]: <info>  [1764728347.1590] manager: (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.186 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.187 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.241 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.253 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.398 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:07 compute-0 ovn_controller[89134]: 2025-12-03T02:19:07Z|00177|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.438 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.539 351492 DEBUG nova.compute.manager [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.539 351492 DEBUG nova.compute.manager [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing instance network info cache due to event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.540 351492 DEBUG oslo_concurrency.lockutils [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.542 351492 DEBUG oslo_concurrency.lockutils [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.544 351492 DEBUG nova.network.neutron [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.674 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Successfully updated port: f36a9f58-d7c9-4f05-942d-5a2c4cce705a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.692 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.692 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.692 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.709 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.830 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] resizing rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.892 351492 DEBUG nova.compute.manager [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-changed-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.893 351492 DEBUG nova.compute.manager [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Refreshing instance network info cache due to event network-changed-f36a9f58-d7c9-4f05-942d-5a2c4cce705a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.895 351492 DEBUG oslo_concurrency.lockutils [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.050 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.070 351492 DEBUG nova.objects.instance [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'migration_context' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.086 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.087 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Ensure instance console log exists: /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.088 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.088 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.089 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 03 02:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 03 02:19:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 03 02:19:08 compute-0 ceph-mon[192821]: pgmap v1946: 321 pgs: 321 active+clean; 124 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 915 KiB/s wr, 132 op/s
Dec 03 02:19:08 compute-0 ceph-mon[192821]: osdmap e138: 3 total, 3 up, 3 in
Dec 03 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.695 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 124 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 837 KiB/s rd, 288 KiB/s wr, 106 op/s
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.405 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.430 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.430 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance network_info: |[{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.430 351492 DEBUG oslo_concurrency.lockutils [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.431 351492 DEBUG nova.network.neutron [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Refreshing network info cache for port f36a9f58-d7c9-4f05-942d-5a2c4cce705a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.434 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start _get_guest_xml network_info=[{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8876482c-db67-48c0-9203-60685152fc9d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.444 351492 WARNING nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.468 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.471 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.482 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.484 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.485 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.486 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.489 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.490 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.491 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.492 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.493 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.496 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.498 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.499 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.500 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.501 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.519 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.594 351492 DEBUG nova.network.neutron [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updated VIF entry in instance network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.596 351492 DEBUG nova.network.neutron [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.634 351492 DEBUG oslo_concurrency.lockutils [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008078241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.984 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.036 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.045 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:10 compute-0 ceph-mon[192821]: pgmap v1948: 321 pgs: 321 active+clean; 124 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 837 KiB/s rd, 288 KiB/s wr, 106 op/s
Dec 03 02:19:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4008078241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:19:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857908507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.541 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.546 351492 DEBUG nova.virt.libvirt.vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',id=14,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-czfymphz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:19:04Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=2890ee5c-21c1-4e9d-9421-1a2df0f67f76,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.548 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.549 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.551 351492 DEBUG nova.objects.instance [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.594 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <uuid>2890ee5c-21c1-4e9d-9421-1a2df0f67f76</uuid>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <name>instance-0000000e</name>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <nova:name>te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr</nova:name>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:19:09</nova:creationTime>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:user uuid="8f61f44789494541b7c101b0fdab52f0">tempest-PrometheusGabbiTest-1008659157-project-member</nova:user>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:project uuid="63f39ac2863946b8b817457e689ff933">tempest-PrometheusGabbiTest-1008659157</nova:project>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="8876482c-db67-48c0-9203-60685152fc9d"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <nova:port uuid="f36a9f58-d7c9-4f05-942d-5a2c4cce705a">
Dec 03 02:19:10 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.0.239" ipVersion="4"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <system>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <entry name="serial">2890ee5c-21c1-4e9d-9421-1a2df0f67f76</entry>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <entry name="uuid">2890ee5c-21c1-4e9d-9421-1a2df0f67f76</entry>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </system>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <os>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   </os>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <features>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   </features>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk">
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       </source>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config">
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       </source>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:19:10 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:dd:ed:eb"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <target dev="tapf36a9f58-d7"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/console.log" append="off"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <video>
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </video>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:19:10 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:19:10 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:19:10 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:19:10 compute-0 nova_compute[351485]: </domain>
Dec 03 02:19:10 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
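The XML dump ending here is the guest definition nova hands to libvirt for instance-0000000e. Once the domain exists (systemd-machined registers it a couple of seconds later in this log), the same document can be read back with the libvirt Python bindings; a short sketch, assuming a local qemu:///system socket, with the UUID taken from the log:

    # Sketch: fetch the live domain XML that _get_guest_xml produced above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("2890ee5c-21c1-4e9d-9421-1a2df0f67f76")
        print(dom.XMLDesc(0))  # same <domain type="kvm"> document as logged
    finally:
        conn.close()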
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.597 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Preparing to wait for external event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.598 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.599 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.599 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.601 351492 DEBUG nova.virt.libvirt.vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',id=14,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-czfymphz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:19:04Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=2890ee5c-21c1-4e9d-9421-1a2df0f67f76,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.602 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.603 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.604 351492 DEBUG os_vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.605 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.607 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.608 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.615 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf36a9f58-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.616 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf36a9f58-d7, col_values=(('external_ids', {'iface-id': 'f36a9f58-d7c9-4f05-942d-5a2c4cce705a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:ed:eb', 'vm-uuid': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
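The two ovsdbapp transaction commands above (AddPortCommand followed by DbSetCommand on the Interface row) attach the tap device to br-int and stamp it with the Neutron port ID that ovn-controller later matches on. A rough equivalent using ovsdbapp's Open_vSwitch API; the db.sock path is an assumption (it is not shown in the log), the values come from the logged commands:

    # Rough sketch of the AddPortCommand + DbSetCommand pair logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapf36a9f58-d7", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapf36a9f58-d7",
            ("external_ids", {
                "iface-id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:dd:ed:eb",
                "vm-uuid": "2890ee5c-21c1-4e9d-9421-1a2df0f67f76"})))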
Dec 03 02:19:10 compute-0 NetworkManager[48912]: <info>  [1764728350.6203] manager: (tapf36a9f58-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.619 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.633 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.635 351492 INFO os_vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7')
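"Successfully plugged vif" is emitted by os-vif once its ovs plugin finishes the bridge and port work traced above. Driving the same plug directly looks roughly like this; the VIF fields are copied from the VIFOpenVSwitch repr in the log, while the InstanceInfo construction is an assumption from context:

    # Minimal os-vif sketch mirroring the plug() that logged the INFO line above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    v = vif.VIFOpenVSwitch(
        id="f36a9f58-d7c9-4f05-942d-5a2c4cce705a",
        address="fa:16:3e:dd:ed:eb",
        vif_name="tapf36a9f58-d7",
        bridge_name="br-int",
        network=network.Network(id="a7615b73-b987-4b91-b12c-2d7488085657",
                                bridge="br-int"))
    inst = instance_info.InstanceInfo(
        uuid="2890ee5c-21c1-4e9d-9421-1a2df0f67f76",
        name="instance-0000000e")  # assumed fields, taken from the log context
    os_vif.plug(v, inst)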
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.715 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.716 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.716 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No VIF found with MAC fa:16:3e:dd:ed:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.717 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Using config drive
Dec 03 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.781 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 151 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 135 op/s
Dec 03 02:19:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1857908507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.650 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728336.6488826, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.651 351492 INFO nova.compute.manager [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Stopped (Lifecycle Event)
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.681 351492 DEBUG nova.compute.manager [None req-79d477dc-078f-48b1-b44e-3204d13626d6 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.722 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Creating config drive at /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.736 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgezeca7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.896 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgezeca7b" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
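The config drive is a plain ISO 9660 image: nova stages the metadata files in a temp directory and runs mkisofs over it through oslo.concurrency. A sketch of the exact command logged above; /tmp/tmpgezeca7b was nova's transient staging directory, so any prepared directory stands in for it:

    # Sketch of the mkisofs call logged above, via oslo.concurrency.
    from oslo_concurrency import processutils

    processutils.execute(
        "/usr/bin/mkisofs", "-o",
        "/var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpgezeca7b")  # transient staging dir from the log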
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.962 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.974 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.294 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.296 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deleting local config drive /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config because it was imported into RBD.
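With this Ceph-backed layout the ISO does not stay on local disk: it is imported into the vms pool as <uuid>_disk.config (matching the cdrom <source> in the domain XML above) and the local copy is then removed. The same two steps, sketched with values from the logged command:

    # Sketch of the rbd import + local cleanup pair logged above.
    import os
    from oslo_concurrency import processutils

    src = "/var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config"
    processutils.execute(
        "rbd", "import", "--pool", "vms", src,
        "2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    os.unlink(src)  # "Deleting local config drive ... imported into RBD."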
Dec 03 02:19:12 compute-0 kernel: tapf36a9f58-d7: entered promiscuous mode
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.393 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.401 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00178|binding|INFO|Claiming lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a for this chassis.
Dec 03 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00179|binding|INFO|f36a9f58-d7c9-4f05-942d-5a2c4cce705a: Claiming fa:16:3e:dd:ed:eb 10.100.0.239
Dec 03 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.4081] manager: (tapf36a9f58-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.411 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.425 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ed:eb 10.100.0.239'], port_security=['fa:16:3e:dd:ed:eb 10.100.0.239'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.239/16', 'neutron:device_id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=f36a9f58-d7c9-4f05-942d-5a2c4cce705a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.428 288528 INFO neutron.agent.ovn.metadata.agent [-] Port f36a9f58-d7c9-4f05-942d-5a2c4cce705a in datapath a7615b73-b987-4b91-b12c-2d7488085657 bound to our chassis
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.431 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7615b73-b987-4b91-b12c-2d7488085657
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.452 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c85551f3-6fdc-4b09-9adf-23e969867029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.453 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7615b73-b1 in ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
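The metadata agent provisions one ovnmeta- namespace per datapath and wires it to br-int with a veth pair (tapa7615b73-b0 outside, -b1 inside); the privsep replies around this line are those netlink operations going through neutron's privileged ip_lib, which is built on pyroute2. A hedged pyroute2 sketch of just the veth-and-namespace move, with names taken from the log:

    # Hedged sketch of the veth/namespace step behind the privsep calls above.
    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657"
    netns.create(ns)  # assumes the namespace does not already exist
    ipr = IPRoute()
    ipr.link("add", ifname="tapa7615b73-b0", kind="veth",
             peer={"ifname": "tapa7615b73-b1"})
    idx = ipr.link_lookup(ifname="tapa7615b73-b1")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)  # move inner end into the namespace
    ipr.close()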
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.455 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7615b73-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.455 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[44797c80-fbe7-4fc4-8e4c-f256b333f0fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.457 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[92303282-6b59-407f-9f34-18794c762635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00180|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec 03 02:19:12 compute-0 ceph-mon[192821]: pgmap v1949: 321 pgs: 321 active+clean; 151 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 135 op/s
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.473 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[4576038f-2623-4717-abcf-82a21c936621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00181|binding|INFO|Setting lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a ovn-installed in OVS
Dec 03 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00182|binding|INFO|Setting lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a up in Southbound
Dec 03 02:19:12 compute-0 systemd-machined[138558]: New machine qemu-15-instance-0000000e.
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.481 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:12 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.491 351492 DEBUG nova.network.neutron [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated VIF entry in instance network info cache for port f36a9f58-d7c9-4f05-942d-5a2c4cce705a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.492 351492 DEBUG nova.network.neutron [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:19:12 compute-0 systemd-udevd[452838]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.503 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[dadb1425-7e09-4317-8864-76adbfc43502]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.509 351492 DEBUG oslo_concurrency.lockutils [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.5203] device (tapf36a9f58-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.5217] device (tapf36a9f58-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.546 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[553a6255-f8ef-4ff2-927c-e239d3d13727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.5634] manager: (tapa7615b73-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.563 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[792a15a1-d268-4048-b8a4-dbdf08b55ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:12 compute-0 podman[452809]: 2025-12-03 02:19:12.58544921 +0000 UTC m=+0.142081581 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:19:12 compute-0 podman[452811]: 2025-12-03 02:19:12.586281224 +0000 UTC m=+0.137216024 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.603 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[410cdfa3-0deb-4a5a-8c3b-bd85707eab68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.606 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[23c14c9b-534a-4a9f-ba10-22e4b89dcc4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 podman[452810]: 2025-12-03 02:19:12.612567657 +0000 UTC m=+0.172452880 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.6265] device (tapa7615b73-b0): carrier: link connected
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.632 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c52b6fc1-044b-4c21-ad87-ecca54a3abbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.649 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[49da2eb2-1910-429d-af65-4da3f04bb7c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 34894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452899, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.671 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1d03151c-b192-48a6-aacb-779141a3d0b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:3ef5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719210, 'tstamp': 719210}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452900, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.694 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbbfcff-7d69-460c-b334-e25624300383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 34894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 452901, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
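
The two privsep replies above are netlink dumps (RTM_NEWADDR, then RTM_NEWLINK) taken inside the ovnmeta- namespace; the IFA_CACHEINFO lifetimes of 4294967295 (2**32 - 1) mean the link-local address never expires. A minimal sketch of the same query, assuming pyroute2 (the library neutron's privsep daemon drives) is available; the namespace name is copied from the log's 'target' field:

    from pyroute2 import NetNS

    NS = 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657'  # from the log

    with NetNS(NS) as ns:
        for link in ns.get_links():
            # each message carries an 'attrs' list like the RTM_NEWLINK above
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr(family=10):  # AF_INET6, as in the reply
            print(addr.get_attr('IFA_ADDRESS'))
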
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.713 351492 DEBUG nova.compute.manager [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.714 351492 DEBUG oslo_concurrency.lockutils [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.715 351492 DEBUG oslo_concurrency.lockutils [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.715 351492 DEBUG oslo_concurrency.lockutils [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.716 351492 DEBUG nova.compute.manager [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Processing event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
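
The Acquiring/acquired/released triplet above is oslo.concurrency's standard named-lock pattern around the per-instance event queue. A hedged sketch of that pattern (lockutils.synchronized is real oslo_concurrency API; the lock name is copied from the log; the body is a stand-in):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events')
    def _pop_event():
        # critical section: pop the waiter for the network-vif-plugged event;
        # running under the named lock produces exactly the three DEBUG lines
        # (acquiring / acquired / released) seen above
        pass
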
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.743 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[84ba0024-eb72-4c4d-8aea-1bb849048796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.877 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2f54d88f-d309-4735-87b9-1e2ff1aefbcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.879 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.881 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.882 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7615b73-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:19:12 compute-0 kernel: tapa7615b73-b0: entered promiscuous mode
Dec 03 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.8866] manager: (tapa7615b73-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.885 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.892 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7615b73-b0, col_values=(('external_ids', {'iface-id': '50c454e1-4a4b-4aad-b47b-dafc7b079018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
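
The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) move the metadata tap off br-ex, attach it to br-int, and bind it to the OVN port by setting external_ids:iface-id. A sketch of the same sequence through ovsdbapp's public API, assuming a local OVSDB socket (the connection string is an assumption; bridge, port, and iface-id values come from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapa7615b73-b0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapa7615b73-b0', may_exist=True))
        txn.add(api.db_set('Interface', 'tapa7615b73-b0',
                           ('external_ids',
                            {'iface-id': '50c454e1-4a4b-4aad-b47b-dafc7b079018'})))

In the log each command ran as its own transaction (txn n=1), and the DelPortCommand was a no-op ("Transaction caused no change") because the port was not on br-ex to begin with.
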
Dec 03 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00183|binding|INFO|Releasing lport 50c454e1-4a4b-4aad-b47b-dafc7b079018 from this chassis (sb_readonly=0)
Dec 03 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.923 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.924 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7615b73-b987-4b91-b12c-2d7488085657.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7615b73-b987-4b91-b12c-2d7488085657.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
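
The ENOENT above is expected on first spawn: no haproxy pid file means no metadata proxy exists yet for this network, so the agent falls through to creating one. A sketch of a tolerant read in the style of neutron's get_value_from_file (the name and signature match the logged module; the body is an assumption):

    def get_value_from_file(filename, converter=None):
        # a missing file or unparsable contents yields None, never an exception
        try:
            with open(filename) as f:
                data = f.read().strip()
            return converter(data) if converter else data
        except (OSError, ValueError):
            return None
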
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.926 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3db6b7-9e3d-4043-8d3e-363a24d92e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.927 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: global
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     log         /dev/log local0 debug
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     log-tag     haproxy-metadata-proxy-a7615b73-b987-4b91-b12c-2d7488085657
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     user        root
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     group       root
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     maxconn     1024
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     pidfile     /var/lib/neutron/external/pids/a7615b73-b987-4b91-b12c-2d7488085657.pid.haproxy
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     daemon
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: defaults
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     log global
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     mode http
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     option httplog
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     option dontlognull
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     option http-server-close
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     option forwardfor
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     retries                 3
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     timeout http-request    30s
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     timeout connect         30s
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     timeout client          32s
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     timeout server          32s
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     timeout http-keep-alive 30s
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: listen listener
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     bind 169.254.169.254:80
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     server metadata /var/lib/neutron/metadata_proxy
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:     http-request add-header X-OVN-Network-ID a7615b73-b987-4b91-b12c-2d7488085657
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 03 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.928 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'env', 'PROCESS_TAG=haproxy-a7615b73-b987-4b91-b12c-2d7488085657', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7615b73-b987-4b91-b12c-2d7488085657.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
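
The generated configuration is then handed to haproxy inside the network namespace via rootwrap, exactly as the "Running command" line shows. A minimal Python equivalent of that spawn (the argument list is copied from the log; error handling elided):

    import subprocess

    uuid = 'a7615b73-b987-4b91-b12c-2d7488085657'
    subprocess.run(
        ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
         'ip', 'netns', 'exec', 'ovnmeta-%s' % uuid,
         'env', 'PROCESS_TAG=haproxy-%s' % uuid,
         'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % uuid],
        check=True)
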
Dec 03 02:19:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.208 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728353.207108, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.208 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Started (Lifecycle Event)
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.212 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.219 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.227 351492 INFO nova.virt.libvirt.driver [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance spawned successfully.
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.228 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.243 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.258 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.272 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.273 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.275 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.280 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.287 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.287 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
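
Collected together, the defaults registered above (all values taken from the "Found default" lines) pin the guest's virtual device buses so a later rebuild or migration keeps the same hardware layout:

    image_property_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }
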
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.292 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.293 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728353.207289, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.293 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Paused (Lifecycle Event)
Dec 03 02:19:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.344 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.353 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728353.21828, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.354 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Resumed (Lifecycle Event)
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.380 351492 INFO nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 9.29 seconds to spawn the instance on the hypervisor.
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.381 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.385 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.407 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.455 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] During sync_power_state the instance has a pending task (spawning). Skip.
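
The Started/Paused/Resumed burst is normal libvirt lifecycle chatter during spawn, and sync_power_state skips each event because task_state is still "spawning". A sketch of that guard, using the power-state values nova logs (DB power_state 0 = NOSTATE, VM power_state 1 = RUNNING; the function body is an assumption about the logged behavior):

    NOSTATE, RUNNING = 0, 1  # values match nova.compute.power_state

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:          # e.g. 'spawning' -> "Skip." above
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'              # bring the DB in line with libvirt
        return 'in-sync'

    assert sync_power_state(0, 1, 'spawning') == 'skip'
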
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.483 351492 INFO nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 10.40 seconds to build instance.
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.510 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.517475409 +0000 UTC m=+0.125054899 container create c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 03 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.456828613 +0000 UTC m=+0.064408183 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 03 02:19:13 compute-0 systemd[1]: Started libpod-conmon-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6.scope.
Dec 03 02:19:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88013123d4a753ad03452e7c5ee2f44c7a3cff6bfcbc4c86988a478219f1d093/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 03 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.664924151 +0000 UTC m=+0.272503671 container init c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.673831153 +0000 UTC m=+0.281410643 container start c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.698 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:13 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : New worker (452995) forked
Dec 03 02:19:13 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : Loading success.
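
The proxy itself runs containerized: podman creates, inits, and starts a neutron-haproxy-ovnmeta-<network> container, after which haproxy's master process forks one worker and reports "Loading success." An illustrative CLI equivalent driven from Python (image and container name are from the log; the flags are assumptions, since the agent drives podman through its own wrapper):

    import subprocess

    subprocess.run(
        ['podman', 'run', '--detach', '--net', 'host',
         '--name',
         'neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657',
         'quay.io/podified-antelope-centos9/'
         'openstack-neutron-metadata-agent-ovn:current-podified'],
        check=True)
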
Dec 03 02:19:14 compute-0 ceph-mon[192821]: pgmap v1950: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 03 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.989 351492 DEBUG nova.compute.manager [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.989 351492 DEBUG oslo_concurrency.lockutils [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.990 351492 DEBUG oslo_concurrency.lockutils [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.990 351492 DEBUG oslo_concurrency.lockutils [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.991 351492 DEBUG nova.compute.manager [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] No waiting events found dispatching network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.991 351492 WARNING nova.compute.manager [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received unexpected event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a for instance with vm_state active and task_state None.
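
This second network-vif-plugged for the same VIF arrives after the build completed, so no waiter is registered and nova just logs the warning; it is harmless. A sketch of that dispatch decision (the structure is an assumption inferred from the logged pop_instance_event / "No waiting events found" / WARNING sequence):

    _waiters = {}  # instance_uuid -> {event_name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        waiter = _waiters.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            # -> "No waiting events found dispatching ..." plus the WARNING above
            print('WARNING: unexpected event %s for %s'
                  % (event_name, instance_uuid))
        return waiter
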
Dec 03 02:19:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 03 02:19:15 compute-0 nova_compute[351485]: 2025-12-03 02:19:15.621 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:16 compute-0 ceph-mon[192821]: pgmap v1951: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 03 02:19:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 03 02:19:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:18 compute-0 ceph-mon[192821]: pgmap v1952: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 03 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.615 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.616 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.618 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.705 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:18 compute-0 podman[453005]: 2025-12-03 02:19:18.8901526 +0000 UTC m=+0.139066835 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec 03 02:19:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:19:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131944634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.104 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
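
The resource audit shells out to ceph df (command copied verbatim from the log) because the ephemeral disks live in RBD, and the mon audit lines show the request being dispatched cluster-side. A minimal sketch of that call and the fields such a consumer would read (field names are standard ceph df JSON output):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / 1024 ** 3
    print('free: %.2f GiB' % free_gb)
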
Dec 03 02:19:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 103 op/s
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.221 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.222 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.228 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.229 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.512 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.513 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1131944634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
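
All of those pollsters are registered onto a single ThreadPoolExecutor with one worker ("with [1] threads" above), which is exactly why the manager first warns that the cycle may run long: the tasks queue behind one another. A toy reproduction of that dispatch shape (pollster names are illustrative, not from the log):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # stand-in for a pollster's get_samples() call
        return '%s: polled' % name

    pollsters = ['memory.usage', 'cpu', 'disk.device.read.bytes']
    with ThreadPoolExecutor(max_workers=1) as pool:  # one worker, as logged
        for result in pool.map(poll, pollsters):    # runs strictly in series
            print(result)
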
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.535 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 48201127-9aa0-4cde-a41d-6790411480a4 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.544 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/48201127-9aa0-4cde-a41d-6790411480a4 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
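
That curl-style line is keystoneauth's request log for ceilometer's per-instance metadata lookup. A hedged sketch of the same call through python-novaclient (the endpoint, microversion 2.1, and server UUID come from the log; every auth parameter below is a placeholder):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(
        auth_url='https://keystone.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='***',              # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    nova = nova_client.Client('2.1', session=session.Session(auth=auth))
    server = nova.servers.get('48201127-9aa0-4cde-a41d-6790411480a4')
    print(server.name, server.status)
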
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.759 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.761 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3669MB free_disk=59.94643783569336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.761 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.762 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.889 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 48201127-9aa0-4cde-a41d-6790411480a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.890 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.891 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.892 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
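
The final view is simple arithmetic over the two instances the tracker listed just above (each 1 VCPU / 128 MB / 1 GB) plus the 512 MB host reservation:

    instances = [{'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}] * 2
    used_ram = 512 + sum(i['MEMORY_MB'] for i in instances)   # 768 MB
    used_disk = sum(i['DISK_GB'] for i in instances)          # 2 GB
    free_vcpus = 8 - sum(i['VCPU'] for i in instances)        # 6
    assert (used_ram, used_disk, free_vcpus) == (768, 2, 6)
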
Dec 03 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.949 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:19:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:19:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307495154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.456 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.472 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.493 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
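
Schedulable capacity follows from that inventory as (total - reserved) * allocation_ratio per resource class, using the numbers in the line above:

    vcpu_capacity = (8 - 0) * 4.0        # 32 schedulable VCPUs
    ram_capacity = (7679 - 512) * 1.0    # 7167 MB
    disk_capacity = (59 - 1) * 0.9       # 52.2 GB
    assert vcpu_capacity == 32.0
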
Dec 03 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.530 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.531 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:20 compute-0 ceph-mon[192821]: pgmap v1953: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 103 op/s
Dec 03 02:19:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1307495154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.625 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.923 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2084 Content-Type: application/json Date: Wed, 03 Dec 2025 02:19:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-89d201f8-3452-4af1-80a2-7836e7d8b368 x-openstack-request-id: req-89d201f8-3452-4af1-80a2-7836e7d8b368 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.923 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "48201127-9aa0-4cde-a41d-6790411480a4", "name": "tempest-TestServerBasicOps-server-1226962462", "status": "ACTIVE", "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "user_id": "2de48f7608ea45c8ac558125d72373c4", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "b7a9ecca22a84e47db0dcb720867459e13c9ede783cdac92160bd565", "image": {"id": "ef773cba-72f0-486f-b5e5-792ff26bb688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ef773cba-72f0-486f-b5e5-792ff26bb688"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:18:51Z", "updated": "2025-12-03T02:19:03Z", "addresses": {"tempest-TestServerBasicOps-1788173895-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:55:61:16"}, {"version": 4, "addr": "192.168.122.211", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:55:61:16"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/48201127-9aa0-4cde-a41d-6790411480a4"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/48201127-9aa0-4cde-a41d-6790411480a4"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-954582748", "OS-SRV-USG:launched_at": "2025-12-03T02:19:03.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1036119230"}, {"name": "tempest-secgroup-smoke-1084002553"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.923 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/48201127-9aa0-4cde-a41d-6790411480a4 used request id req-89d201f8-3452-4af1-80a2-7836e7d8b368 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 03 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.926 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '48201127-9aa0-4cde-a41d-6790411480a4', 'name': 'tempest-TestServerBasicOps-server-1226962462', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'user_id': '2de48f7608ea45c8ac558125d72373c4', 'hostId': 'b7a9ecca22a84e47db0dcb720867459e13c9ede783cdac92160bd565', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.930 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.931 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
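The REQ line is novaclient's curl-style trace of a keystoneauth1 session request (the token is logged only as a SHA256 hash). An equivalent GET can be issued directly with keystoneauth1; the auth_url and credentials below are placeholders, not values taken from this log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder credentials and auth endpoint -- assumptions for illustration.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="secret", project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    resp = sess.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "2890ee5c-21c1-4e9d-9421-1a2df0f67f76",
        headers={"X-OpenStack-Nova-API-Version": "2.1"},
    )
    print(resp.json()["server"]["status"])  # "ACTIVE" per the response body below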
Dec 03 02:19:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.063 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Wed, 03 Dec 2025 02:19:20 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-95792df7-aa63-4950-bb2f-ba5778b76d04 x-openstack-request-id: req-95792df7-aa63-4950-bb2f-ba5778b76d04 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.064 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2890ee5c-21c1-4e9d-9421-1a2df0f67f76", "name": "te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr", "status": "ACTIVE", "tenant_id": "63f39ac2863946b8b817457e689ff933", "user_id": "8f61f44789494541b7c101b0fdab52f0", "metadata": {"metering.server_group": "38bfb145-4971-41b6-9bc3-faf3c3931019"}, "hostId": "b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2", "image": {"id": "8876482c-db67-48c0-9203-60685152fc9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8876482c-db67-48c0-9203-60685152fc9d"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:19:01Z", "updated": "2025-12-03T02:19:13Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.239", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:dd:ed:eb"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:19:13.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.064 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76 used request id req-95792df7-aa63-4950-bb2f-ba5778b76d04 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.066 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.066 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:19:22.067449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.099 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.099 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 48201127-9aa0-4cde-a41d-6790411480a4: ceilometer.compute.pollsters.NoVolumeException
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76: ceilometer.compute.pollsters.NoVolumeException
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
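memory.usage comes from libvirt's balloon statistics, and it is Unavailable here most likely because the guests do not yet expose the counters the inspector derives usage from. The raw counters can be examined with libvirt-python (a sketch, assuming read access to the local libvirt socket; values are in KiB):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000d")  # libvirt name from the log

    # Balloon/memory counters; usage-style keys need an active balloon driver
    # (and, for some of them, the qemu guest agent) inside the guest.
    print(dom.memoryStats())  # e.g. {'actual': 131072, 'rss': ...}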
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:19:22.146248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.150 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 48201127-9aa0-4cde-a41d-6790411480a4 / tap0d927baf-41 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.151 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.155 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 / tapf36a9f58-d7 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.155 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
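The per-vNIC counters are read straight from libvirt's interfaceStats on the tap devices named in the inspect_vnics lines; "No delta meter predecessor" apparently means there is no earlier sample to diff against, so the delta is reported as 0. The raw 8-tuple can be read the same way (sketch):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000d")

    # Tap device name taken from the inspect_vnics log line above.
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap0d927baf-41")
    print(tx_packets, tx_errs, tx_drop)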
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.157 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.157 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.157 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:19:22.156963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:19:22.159083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:19:22.160999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:19:22.162593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:19:22.164415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.182 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.182 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.201 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.201 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
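Each guest reports two block devices: a 1073741824-byte root disk (exactly 1 GiB, matching the m1.nano flavor's 1 GB disk) and a 509952-byte image consistent with the config drive ("config_drive": "True" in the server bodies above). libvirt exposes the same capacity figure via blockInfo (sketch; the device name is an assumption):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000d")

    # blockInfo returns [capacity, allocation, physical] in bytes;
    # "vda" as the root-disk device name is an assumption.
    capacity, allocation, physical = dom.blockInfo("vda")
    assert capacity == 1 * 1024**3  # the 1073741824 seen in the log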
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>]
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:19:22.202731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
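The *.rate meters expect the inspector to hand back a ready-made rate, which the libvirt inspector cannot do (it only sees cumulative counters), so the pollster raises PollsterPermanentError and is blacklisted for this source instead of being retried every cycle. A rate has to be derived downstream from two cumulative samples, roughly like this (illustrative sketch, not ceilometer code):

    import time

    def sample_rate(read_counter, interval=10.0):
        """Per-second rate from two readings of a cumulative counter."""
        first = read_counter()
        time.sleep(interval)
        second = read_counter()
        return (second - first) / interval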
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.204 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:19:22.204078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.259 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.260 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.316 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.316 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
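The read counters map onto libvirt's blockStats tuple; the 760-request / 23775232-byte pair is the root disk and the 1-request / 2048-byte pair the config drive. Reading the same tuple directly (sketch; "vda" is again an assumed device name):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000e")

    # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print(rd_req, rd_bytes)  # compare with the 760 / 23775232 above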
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.319 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:19:22.318697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.319 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:19:22.321724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.322 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.latency volume: 2114496694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.322 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.latency volume: 2875731 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.323 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2182451717 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.323 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2630415 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.326 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.326 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.327 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.327 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:19:22.325505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.330 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:19:22.329430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.330 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
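power.state samples the libvirt domain state: volume 1 is VIR_DOMAIN_RUNNING, consistent with "OS-EXT-STS:power_state": 1 in the Nova responses earlier. Checked directly (sketch):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000d")

    state, reason = dom.state()
    assert state == libvirt.VIR_DOMAIN_RUNNING  # == 1, the volume reported above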
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:19:22.332186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:19:22.334337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.337 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.337 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:19:22.336421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.339 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.339 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.339 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:19:22.338608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:19:22.342106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/cpu volume: 17700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:19:22.344790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.345 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 8580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:19:22.346720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:19:22.348758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.349 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:19:22.350737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.351 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.351 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:19:22.353078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:19:22.354603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:19:22.355951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>]
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:19:22.357468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
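The ERROR at 02:19:22.357 is ceilometer's permanent-failure path: LibvirtInspector provides no data for the rate-based meter, the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError carrying the affected servers, and the manager blacklists those resources for this pollster and source so they are never retried. A hedged sketch of that bookkeeping (the exception name is real; the surrounding logic is paraphrased, not ceilometer's code):

    # Paraphrased blacklisting behind "Prevent pollster ... anymore!".
    class PollsterPermanentError(Exception):
        def __init__(self, fail_res_list):
            super().__init__(fail_res_list)
            self.fail_res_list = fail_res_list

    blacklist = []  # resources this pollster must skip on this source from now on

    def poll_once(pollster, resources):
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(todo))
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)  # logged as the ERROR above
            return []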
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
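With every pollster now reporting "Finished processing", the polling task is complete. Because each meter brackets its work with matching INFO lines, per-pollster latency can be recovered straight from this journal; a small helper (not part of any shipped tool) that pairs them up:

    import re
    from datetime import datetime

    START = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO \S+ \[-\] Polling pollster (\S+)")
    END = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO \S+ \[-\] Finished polling pollster (\S+)")

    def pollster_durations(lines):
        started = {}
        for line in lines:
            if m := START.search(line):
                started[m.group(2)] = datetime.fromisoformat(m.group(1))
            elif (m := END.search(line)) and m.group(2) in started:
                delta = datetime.fromisoformat(m.group(1)) - started.pop(m.group(2))
                yield m.group(2), delta.total_seconds()

    # e.g. disk.device.write.bytes above: 02:19:22.334 -> 02:19:22.335, ~1 ms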
Dec 03 02:19:22 compute-0 ceph-mon[192821]: pgmap v1954: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 03 02:19:22 compute-0 podman[453072]: 2025-12-03 02:19:22.896920483 +0000 UTC m=+0.117191846 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Dec 03 02:19:22 compute-0 podman[453070]: 2025-12-03 02:19:22.901160183 +0000 UTC m=+0.140179817 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:19:22 compute-0 podman[453071]: 2025-12-03 02:19:22.916516038 +0000 UTC m=+0.152029913 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:19:22 compute-0 podman[453069]: 2025-12-03 02:19:22.925932794 +0000 UTC m=+0.174495338 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:19:22 compute-0 podman[453082]: 2025-12-03 02:19:22.926613793 +0000 UTC m=+0.134818395 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
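The five podman events above are periodic healthcheck results: kepler, openstack_network_exporter, node_exporter, ovn_controller and multipathd all report health_status=healthy with a failing streak of 0. The same status can be read back on the host; a sketch assuming the podman CLI on PATH and these container names:

    import json
    import subprocess

    def health(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        return (json.loads(out) or {}).get("Status", "unknown")

    for name in ["kepler", "openstack_network_exporter", "node_exporter",
                 "ovn_controller", "multipathd"]:
        print(name, health(name))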
Dec 03 02:19:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 755 KiB/s wr, 79 op/s
Dec 03 02:19:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.533 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.533 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.597 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.600 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
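nova-compute's task runner here is oslo_service.periodic_task: each ComputeManager method named in the "Running periodic task" lines is registered with the real @periodic_task.periodic_task decorator and invoked by run_periodic_tasks, as the logged source path shows. An illustrative registration (the manager class and spacing below are made up; the decorator and base class are the real oslo API):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_info_cache(self, context):
            # Invoked roughly every 60 s once the service loop calls
            # run_periodic_tasks(context), producing lines like those above.
            pass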
Dec 03 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.706 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:24 compute-0 ceph-mon[192821]: pgmap v1955: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 755 KiB/s wr, 79 op/s
Dec 03 02:19:24 compute-0 nova_compute[351485]: 2025-12-03 02:19:24.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 03 02:19:25 compute-0 ceph-mon[192821]: pgmap v1956: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 03 02:19:25 compute-0 nova_compute[351485]: 2025-12-03 02:19:25.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:26 compute-0 nova_compute[351485]: 2025-12-03 02:19:26.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 65 op/s
Dec 03 02:19:28 compute-0 ceph-mon[192821]: pgmap v1957: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 65 op/s
Dec 03 02:19:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:19:28
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Dec 03 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
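This is one idle balancer pass: mode upmap, max misplaced ratio 0.05, eleven pools examined, and 0 of 10 allowed changes prepared because the PGs are already evenly mapped. The same state is exposed over the standard ceph CLI; a sketch assuming a usable client keyring on the host:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status["mode"], status["active"])  # e.g. "upmap", True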
Dec 03 02:19:28 compute-0 nova_compute[351485]: 2025-12-03 02:19:28.708 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 op/s
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:19:29 compute-0 nova_compute[351485]: 2025-12-03 02:19:29.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:29 compute-0 podman[158098]: time="2025-12-03T02:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec 03 02:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9129 "" "Go-http-client/1.1"
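These two access-log lines are a collector scraping the podman service API over its UNIX socket (list all containers, then one-shot stats). The Python stdlib is enough to issue the same request; a sketch assuming the default socket path /run/podman/podman.sock and the v4.9.3 endpoint seen above:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.socket_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
    conn.close()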
Dec 03 02:19:30 compute-0 ceph-mon[192821]: pgmap v1958: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 op/s
Dec 03 02:19:30 compute-0 nova_compute[351485]: 2025-12-03 02:19:30.636 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 op/s
Dec 03 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:19:32 compute-0 ceph-mon[192821]: pgmap v1959: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 op/s
Dec 03 02:19:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 567 KiB/s rd, 18 op/s
Dec 03 02:19:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:33 compute-0 nova_compute[351485]: 2025-12-03 02:19:33.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:34 compute-0 ceph-mon[192821]: pgmap v1960: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 567 KiB/s rd, 18 op/s
Dec 03 02:19:34 compute-0 nova_compute[351485]: 2025-12-03 02:19:34.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:19:34 compute-0 nova_compute[351485]: 2025-12-03 02:19:34.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:19:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:19:35 compute-0 nova_compute[351485]: 2025-12-03 02:19:35.644 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:36 compute-0 ceph-mon[192821]: pgmap v1961: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:19:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:19:38 compute-0 ceph-mon[192821]: pgmap v1962: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:19:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:38 compute-0 nova_compute[351485]: 2025-12-03 02:19:38.712 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006975264740834798 of space, bias 1.0, pg target 0.20925794222504393 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:19:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1 op/s
Dec 03 02:19:40 compute-0 ceph-mon[192821]: pgmap v1963: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1 op/s
Dec 03 02:19:40 compute-0 nova_compute[351485]: 2025-12-03 02:19:40.648 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 170 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 409 KiB/s wr, 8 op/s
Dec 03 02:19:41 compute-0 ovn_controller[89134]: 2025-12-03T02:19:41Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:61:16 10.100.0.9
Dec 03 02:19:41 compute-0 ovn_controller[89134]: 2025-12-03T02:19:41Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:61:16 10.100.0.9
Dec 03 02:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9581 writes, 36K keys, 9581 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9581 writes, 2507 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2123 writes, 7531 keys, 2123 commit groups, 1.0 writes per commit group, ingest: 7.42 MB, 0.01 MB/s
                                            Interval WAL: 2123 writes, 874 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:19:42 compute-0 ceph-mon[192821]: pgmap v1964: 321 pgs: 321 active+clean; 170 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 409 KiB/s wr, 8 op/s
Dec 03 02:19:42 compute-0 podman[453170]: 2025-12-03 02:19:42.863868884 +0000 UTC m=+0.105173937 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:19:42 compute-0 podman[453169]: 2025-12-03 02:19:42.891859946 +0000 UTC m=+0.138214472 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 03 02:19:42 compute-0 podman[453171]: 2025-12-03 02:19:42.900908782 +0000 UTC m=+0.133167649 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:19:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 184 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 1.6 MiB/s wr, 38 op/s
Dec 03 02:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:43 compute-0 nova_compute[351485]: 2025-12-03 02:19:43.715 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:44 compute-0 ceph-mon[192821]: pgmap v1965: 321 pgs: 321 active+clean; 184 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 1.6 MiB/s wr, 38 op/s
Dec 03 02:19:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 200 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 03 02:19:45 compute-0 nova_compute[351485]: 2025-12-03 02:19:45.654 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:46 compute-0 ceph-mon[192821]: pgmap v1966: 321 pgs: 321 active+clean; 200 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 03 02:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328678732' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328678732' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:19:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 03 02:19:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/328678732' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:19:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/328678732' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:19:48 compute-0 ceph-mon[192821]: pgmap v1967: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 03 02:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2973 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2088 writes, 7681 keys, 2088 commit groups, 1.0 writes per commit group, ingest: 7.32 MB, 0.01 MB/s
                                            Interval WAL: 2088 writes, 866 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:19:48 compute-0 nova_compute[351485]: 2025-12-03 02:19:48.718 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 208 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Dec 03 02:19:49 compute-0 podman[453226]: 2025-12-03 02:19:49.913011694 +0000 UTC m=+0.161749118 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:19:50 compute-0 ceph-mon[192821]: pgmap v1968: 321 pgs: 321 active+clean; 208 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Dec 03 02:19:50 compute-0 nova_compute[351485]: 2025-12-03 02:19:50.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:50 compute-0 ovn_controller[89134]: 2025-12-03T02:19:50Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:ed:eb 10.100.0.239
Dec 03 02:19:50 compute-0 ovn_controller[89134]: 2025-12-03T02:19:50Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:ed:eb 10.100.0.239
Dec 03 02:19:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 224 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 486 KiB/s rd, 4.0 MiB/s wr, 87 op/s
Dec 03 02:19:52 compute-0 ceph-mon[192821]: pgmap v1969: 321 pgs: 321 active+clean; 224 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 486 KiB/s rd, 4.0 MiB/s wr, 87 op/s
Dec 03 02:19:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 224 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 568 KiB/s rd, 3.8 MiB/s wr, 98 op/s
Dec 03 02:19:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:53 compute-0 nova_compute[351485]: 2025-12-03 02:19:53.722 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:53 compute-0 podman[453247]: 2025-12-03 02:19:53.879398557 +0000 UTC m=+0.102148071 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:19:53 compute-0 podman[453248]: 2025-12-03 02:19:53.89507159 +0000 UTC m=+0.111811664 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler)
Dec 03 02:19:53 compute-0 podman[453246]: 2025-12-03 02:19:53.898544238 +0000 UTC m=+0.139081836 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Dec 03 02:19:53 compute-0 podman[453254]: 2025-12-03 02:19:53.902980134 +0000 UTC m=+0.120850810 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:19:53 compute-0 podman[453245]: 2025-12-03 02:19:53.91626948 +0000 UTC m=+0.163536668 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:19:54 compute-0 ceph-mon[192821]: pgmap v1970: 321 pgs: 321 active+clean; 224 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 568 KiB/s rd, 3.8 MiB/s wr, 98 op/s
Dec 03 02:19:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 235 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 84 op/s
Dec 03 02:19:55 compute-0 nova_compute[351485]: 2025-12-03 02:19:55.664 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 8914 writes, 35K keys, 8914 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8914 writes, 2261 syncs, 3.94 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1912 writes, 7094 keys, 1912 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s
                                            Interval WAL: 1912 writes, 777 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:19:56 compute-0 ceph-mon[192821]: pgmap v1971: 321 pgs: 321 active+clean; 235 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 84 op/s
Dec 03 02:19:56 compute-0 sudo[453348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:19:56 compute-0 sudo[453348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:56 compute-0 sudo[453348]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:56 compute-0 sudo[453373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:19:56 compute-0 sudo[453373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:56 compute-0 sudo[453373]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:57 compute-0 sudo[453398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:19:57 compute-0 sudo[453398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:57 compute-0 sudo[453398]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 02:19:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Dec 03 02:19:57 compute-0 sudo[453423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:19:57 compute-0 sudo[453423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:57 compute-0 sudo[453423]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:58 compute-0 sudo[453480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:19:58 compute-0 sudo[453480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:58 compute-0 sudo[453480]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:58 compute-0 sudo[453505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:19:58 compute-0 sudo[453505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:58 compute-0 sudo[453505]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:58 compute-0 sudo[453530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:19:58 compute-0 sudo[453530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:58 compute-0 sudo[453530]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:19:58 compute-0 ceph-mon[192821]: pgmap v1972: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Dec 03 02:19:58 compute-0 sudo[453555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:19:58 compute-0 sudo[453555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:19:58 compute-0 nova_compute[351485]: 2025-12-03 02:19:58.725 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:19:58 compute-0 sudo[453555]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 659c4e1d-aeea-49ed-88a9-3509e2ec2b39 does not exist
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6c7ad595-f5d8-4219-a11a-8768aa72e4c9 does not exist
Dec 03 02:19:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f6ad4e01-867b-4c76-9b43-607aada68802 does not exist
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:19:58 compute-0 sudo[453600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:19:58 compute-0 sudo[453600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:58 compute-0 sudo[453600]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:59 compute-0 sudo[453625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:19:59 compute-0 sudo[453625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:59 compute-0 sudo[453625]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:19:59 compute-0 sudo[453650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:19:59 compute-0 sudo[453650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:59 compute-0 sudo[453650]: pam_unix(sudo:session): session closed for user root
Dec 03 02:19:59 compute-0 sudo[453675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:19:59 compute-0 sudo[453675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:19:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:59.652 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:19:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:59.653 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:19:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:59.655 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:19:59 compute-0 podman[158098]: time="2025-12-03T02:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:19:59 compute-0 ceph-mon[192821]: pgmap v1973: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9122 "" "Go-http-client/1.1"
Dec 03 02:19:59 compute-0 podman[453737]: 2025-12-03 02:19:59.930320814 +0000 UTC m=+0.072928324 container create 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:19:59 compute-0 systemd[1]: Started libpod-conmon-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope.
Dec 03 02:19:59 compute-0 podman[453737]: 2025-12-03 02:19:59.900798359 +0000 UTC m=+0.043405859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:20:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.052332276 +0000 UTC m=+0.194939786 container init 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.071018965 +0000 UTC m=+0.213626445 container start 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.075885142 +0000 UTC m=+0.218492702 container attach 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:20:00 compute-0 nostalgic_shtern[453753]: 167 167
Dec 03 02:20:00 compute-0 systemd[1]: libpod-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope: Deactivated successfully.
Dec 03 02:20:00 compute-0 conmon[453753]: conmon 58d896b86d57f696eb27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope/container/memory.events
Dec 03 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.090710302 +0000 UTC m=+0.233317832 container died 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-42b4b2e25f2573798ee34d031944ffc35f9eda56ce371d603b07131dafcd6ce9-merged.mount: Deactivated successfully.
Dec 03 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.169497871 +0000 UTC m=+0.312105391 container remove 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:20:00 compute-0 systemd[1]: libpod-conmon-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope: Deactivated successfully.
Dec 03 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.440061736 +0000 UTC m=+0.074977332 container create 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.412963779 +0000 UTC m=+0.047879455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:20:00 compute-0 systemd[1]: Started libpod-conmon-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope.
Dec 03 02:20:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.633853739 +0000 UTC m=+0.268769405 container init 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.650965303 +0000 UTC m=+0.285880929 container start 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.659260208 +0000 UTC m=+0.294175854 container attach 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:20:00 compute-0 nova_compute[351485]: 2025-12-03 02:20:00.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Dec 03 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:20:01 compute-0 blissful_thompson[453791]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:20:01 compute-0 blissful_thompson[453791]: --> relative data size: 1.0
Dec 03 02:20:01 compute-0 blissful_thompson[453791]: --> All data devices are unavailable
Dec 03 02:20:01 compute-0 systemd[1]: libpod-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope: Deactivated successfully.
Dec 03 02:20:01 compute-0 systemd[1]: libpod-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope: Consumed 1.254s CPU time.
Dec 03 02:20:01 compute-0 podman[453775]: 2025-12-03 02:20:01.983897266 +0000 UTC m=+1.618812892 container died 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747-merged.mount: Deactivated successfully.
Dec 03 02:20:02 compute-0 podman[453775]: 2025-12-03 02:20:02.114916493 +0000 UTC m=+1.749832089 container remove 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:20:02 compute-0 systemd[1]: libpod-conmon-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope: Deactivated successfully.
Dec 03 02:20:02 compute-0 sudo[453675]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:02 compute-0 ceph-mon[192821]: pgmap v1974: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Dec 03 02:20:02 compute-0 sudo[453830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:20:02 compute-0 sudo[453830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:02 compute-0 sudo[453830]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:02 compute-0 sudo[453855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:20:02 compute-0 sudo[453855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:02 compute-0 sudo[453855]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:02 compute-0 sudo[453880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:20:02 compute-0 sudo[453880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:02 compute-0 sudo[453880]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:02 compute-0 sudo[453905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:20:02 compute-0 sudo[453905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 274 KiB/s wr, 36 op/s
Dec 03 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.253559708 +0000 UTC m=+0.095814321 container create 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.218587439 +0000 UTC m=+0.060842142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:20:03 compute-0 systemd[1]: Started libpod-conmon-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope.
Dec 03 02:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.390838712 +0000 UTC m=+0.233093415 container init 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.404674443 +0000 UTC m=+0.246929046 container start 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.408959264 +0000 UTC m=+0.251213927 container attach 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:20:03 compute-0 thirsty_boyd[453983]: 167 167
Dec 03 02:20:03 compute-0 systemd[1]: libpod-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope: Deactivated successfully.
Dec 03 02:20:03 compute-0 conmon[453983]: conmon 1e790ac29c0f961bda29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope/container/memory.events
Dec 03 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.417126085 +0000 UTC m=+0.259380708 container died 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b29fb5cb5e7b26022e193c1842f4c39aee2662250ba4a057685de95c70a388-merged.mount: Deactivated successfully.
Dec 03 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.470836655 +0000 UTC m=+0.313091278 container remove 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:20:03 compute-0 systemd[1]: libpod-conmon-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope: Deactivated successfully.
Dec 03 02:20:03 compute-0 nova_compute[351485]: 2025-12-03 02:20:03.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.786948109 +0000 UTC m=+0.105607139 container create ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.748841061 +0000 UTC m=+0.067500121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:20:03 compute-0 systemd[1]: Started libpod-conmon-ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e.scope.
Dec 03 02:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.964668517 +0000 UTC m=+0.283327647 container init ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.975610017 +0000 UTC m=+0.294269047 container start ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.981080311 +0000 UTC m=+0.299739441 container attach ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 03 02:20:04 compute-0 ceph-mon[192821]: pgmap v1975: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 274 KiB/s wr, 36 op/s
Dec 03 02:20:04 compute-0 infallible_curran[454020]: {
Dec 03 02:20:04 compute-0 infallible_curran[454020]:     "0": [
Dec 03 02:20:04 compute-0 infallible_curran[454020]:         {
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "devices": [
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "/dev/loop3"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             ],
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_name": "ceph_lv0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_size": "21470642176",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "name": "ceph_lv0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "tags": {
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cluster_name": "ceph",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.crush_device_class": "",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.encrypted": "0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osd_id": "0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.type": "block",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.vdo": "0"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             },
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "type": "block",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "vg_name": "ceph_vg0"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:         }
Dec 03 02:20:04 compute-0 infallible_curran[454020]:     ],
Dec 03 02:20:04 compute-0 infallible_curran[454020]:     "1": [
Dec 03 02:20:04 compute-0 infallible_curran[454020]:         {
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "devices": [
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "/dev/loop4"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             ],
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_name": "ceph_lv1",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_size": "21470642176",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "name": "ceph_lv1",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "tags": {
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cluster_name": "ceph",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.crush_device_class": "",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.encrypted": "0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osd_id": "1",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.type": "block",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.vdo": "0"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             },
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "type": "block",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "vg_name": "ceph_vg1"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:         }
Dec 03 02:20:04 compute-0 infallible_curran[454020]:     ],
Dec 03 02:20:04 compute-0 infallible_curran[454020]:     "2": [
Dec 03 02:20:04 compute-0 infallible_curran[454020]:         {
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "devices": [
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "/dev/loop5"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             ],
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_name": "ceph_lv2",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_size": "21470642176",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "name": "ceph_lv2",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "tags": {
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.cluster_name": "ceph",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.crush_device_class": "",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.encrypted": "0",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osd_id": "2",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.type": "block",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:                 "ceph.vdo": "0"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             },
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "type": "block",
Dec 03 02:20:04 compute-0 infallible_curran[454020]:             "vg_name": "ceph_vg2"
Dec 03 02:20:04 compute-0 infallible_curran[454020]:         }
Dec 03 02:20:04 compute-0 infallible_curran[454020]:     ]
Dec 03 02:20:04 compute-0 infallible_curran[454020]: }
Dec 03 02:20:04 compute-0 systemd[1]: libpod-ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e.scope: Deactivated successfully.
Dec 03 02:20:04 compute-0 podman[454006]: 2025-12-03 02:20:04.808021358 +0000 UTC m=+1.126680388 container died ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512-merged.mount: Deactivated successfully.
Dec 03 02:20:04 compute-0 podman[454006]: 2025-12-03 02:20:04.902168642 +0000 UTC m=+1.220827702 container remove ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 02:20:04 compute-0 systemd[1]: libpod-conmon-ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e.scope: Deactivated successfully.
Dec 03 02:20:04 compute-0 sudo[453905]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:05 compute-0 sudo[454041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:20:05 compute-0 sudo[454041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:05 compute-0 sudo[454041]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 96 KiB/s wr, 19 op/s
Dec 03 02:20:05 compute-0 sudo[454066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:20:05 compute-0 sudo[454066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:05 compute-0 sudo[454066]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:05 compute-0 sudo[454091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:20:05 compute-0 sudo[454091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:05 compute-0 sudo[454091]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:05 compute-0 ovn_controller[89134]: 2025-12-03T02:20:05Z|00184|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 03 02:20:05 compute-0 sudo[454116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:20:05 compute-0 sudo[454116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:05 compute-0 nova_compute[351485]: 2025-12-03 02:20:05.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.001814714 +0000 UTC m=+0.083006979 container create 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:05.969127369 +0000 UTC m=+0.050319744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:20:06 compute-0 systemd[1]: Started libpod-conmon-92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639.scope.
Dec 03 02:20:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.127172401 +0000 UTC m=+0.208364786 container init 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.145845569 +0000 UTC m=+0.227037814 container start 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.151106608 +0000 UTC m=+0.232298953 container attach 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:20:06 compute-0 silly_satoshi[454196]: 167 167
Dec 03 02:20:06 compute-0 systemd[1]: libpod-92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639.scope: Deactivated successfully.
Dec 03 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.168007576 +0000 UTC m=+0.249199831 container died 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-71ffda040e6946eb7611142ba5e9d2444253d455603a09eb18cdbda7c17bc62c-merged.mount: Deactivated successfully.
Dec 03 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.224102743 +0000 UTC m=+0.305294998 container remove 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:20:06 compute-0 ceph-mon[192821]: pgmap v1976: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 96 KiB/s wr, 19 op/s
Dec 03 02:20:06 compute-0 systemd[1]: libpod-conmon-92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639.scope: Deactivated successfully.
Dec 03 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.46714414 +0000 UTC m=+0.069633841 container create 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.43393274 +0000 UTC m=+0.036422481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:20:06 compute-0 systemd[1]: Started libpod-conmon-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope.
Dec 03 02:20:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.616385882 +0000 UTC m=+0.218875673 container init 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.640779732 +0000 UTC m=+0.243269463 container start 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.654504251 +0000 UTC m=+0.256994052 container attach 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:20:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 73 KiB/s wr, 2 op/s
Dec 03 02:20:07 compute-0 sad_johnson[454237]: {
Dec 03 02:20:07 compute-0 sad_johnson[454237]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "osd_id": 2,
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "type": "bluestore"
Dec 03 02:20:07 compute-0 sad_johnson[454237]:     },
Dec 03 02:20:07 compute-0 sad_johnson[454237]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "osd_id": 1,
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "type": "bluestore"
Dec 03 02:20:07 compute-0 sad_johnson[454237]:     },
Dec 03 02:20:07 compute-0 sad_johnson[454237]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "osd_id": 0,
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:20:07 compute-0 sad_johnson[454237]:         "type": "bluestore"
Dec 03 02:20:07 compute-0 sad_johnson[454237]:     }
Dec 03 02:20:07 compute-0 sad_johnson[454237]: }
Dec 03 02:20:07 compute-0 podman[454220]: 2025-12-03 02:20:07.838877919 +0000 UTC m=+1.441367620 container died 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:20:07 compute-0 systemd[1]: libpod-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope: Deactivated successfully.
Dec 03 02:20:07 compute-0 systemd[1]: libpod-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope: Consumed 1.200s CPU time.
Dec 03 02:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce-merged.mount: Deactivated successfully.
Dec 03 02:20:07 compute-0 podman[454220]: 2025-12-03 02:20:07.938297362 +0000 UTC m=+1.540787053 container remove 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:20:07 compute-0 systemd[1]: libpod-conmon-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope: Deactivated successfully.
Dec 03 02:20:07 compute-0 sudo[454116]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:20:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:20:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:20:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:20:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 64b3947e-8e14-4330-ab68-57dbee2f0abc does not exist
Dec 03 02:20:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 648fa0db-cadd-46f9-a277-3c56bc509ad7 does not exist
Dec 03 02:20:08 compute-0 sudo[454281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:20:08 compute-0 sudo[454281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:08 compute-0 sudo[454281]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:08 compute-0 ceph-mon[192821]: pgmap v1977: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 73 KiB/s wr, 2 op/s
Dec 03 02:20:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:20:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:20:08 compute-0 sudo[454306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:20:08 compute-0 sudo[454306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:20:08 compute-0 sudo[454306]: pam_unix(sudo:session): session closed for user root
Dec 03 02:20:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:08 compute-0 nova_compute[351485]: 2025-12-03 02:20:08.732 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 7.4 KiB/s wr, 0 op/s
Dec 03 02:20:10 compute-0 ceph-mon[192821]: pgmap v1978: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 7.4 KiB/s wr, 0 op/s
Dec 03 02:20:10 compute-0 nova_compute[351485]: 2025-12-03 02:20:10.676 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.5 KiB/s wr, 0 op/s
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:12.183 288634 DEBUG eventlet.wsgi.server [-] (288634) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:12.184 288634 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: Accept: */*
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: Connection: close
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: Content-Type: text/plain
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: Host: 169.254.169.254
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: User-Agent: curl/7.84.0
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: X-Forwarded-For: 10.100.0.9
Dec 03 02:20:12 compute-0 ovn_metadata_agent[288523]: X-Ovn-Network-Id: b46a3397-654d-4ceb-be75-a322ea7e5091 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 03 02:20:12 compute-0 ceph-mon[192821]: pgmap v1979: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.5 KiB/s wr, 0 op/s
Dec 03 02:20:12 compute-0 nova_compute[351485]: 2025-12-03 02:20:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:20:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:13 compute-0 nova_compute[351485]: 2025-12-03 02:20:13.735 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.736 288634 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 03 02:20:13 compute-0 haproxy-metadata-proxy-b46a3397-654d-4ceb-be75-a322ea7e5091[452488]: 10.100.0.9:57126 [03/Dec/2025:02:20:12.180] listener listener/metadata 0/0/0/1556/1556 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.737 288634 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5523539
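The exchange above shows the OVN metadata path end to end: the guest curls the link-local metadata address, haproxy in the ovnmeta namespace forwards the request to the agent, and the agent proxies it to nova's metadata API. A minimal guest-side reproduction, using only the URL from the logged request:

    import urllib.request

    # From inside the guest, the EC2-style endpoint answers on the
    # link-local address; this mirrors the curl/7.84.0 request above.
    URL = "http://169.254.169.254/latest/meta-data/public-ipv4"
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print(resp.read().decode())  # the floating IP, 192.168.122.211 here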
Dec 03 02:20:13 compute-0 podman[454333]: 2025-12-03 02:20:13.871760837 +0000 UTC m=+0.092715734 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:20:13 compute-0 podman[454332]: 2025-12-03 02:20:13.879157417 +0000 UTC m=+0.111548878 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 02:20:13 compute-0 podman[454331]: 2025-12-03 02:20:13.88071063 +0000 UTC m=+0.106924266 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
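Each health_status event above is podman running the configured healthcheck test ('/openstack/healthcheck ...') inside the container. A sketch of querying the same state from the host, assuming only the container name shown in the log:

    import json
    import subprocess

    # "podman inspect" exposes the last healthcheck result under
    # .State.Health; "podman_exporter" is the container named above.
    out = subprocess.check_output(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "podman_exporter"]
    )
    print(json.loads(out)["Status"])  # "healthy" at the time of this log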
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.889 288634 DEBUG eventlet.wsgi.server [-] (288634) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.890 288634 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: Accept: */*
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: Connection: close
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: Content-Length: 100
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: Content-Type: application/x-www-form-urlencoded
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: Host: 169.254.169.254
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: User-Agent: curl/7.84.0
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: X-Forwarded-For: 10.100.0.9
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: X-Ovn-Network-Id: b46a3397-654d-4ceb-be75-a322ea7e5091
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: 
Dec 03 02:20:13 compute-0 ovn_metadata_agent[288523]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 03 02:20:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:14.151 288634 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 03 02:20:14 compute-0 haproxy-metadata-proxy-b46a3397-654d-4ceb-be75-a322ea7e5091[452488]: 10.100.0.9:57142 [03/Dec/2025:02:20:13.888] listener listener/metadata 0/0/0/264/264 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec 03 02:20:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:14.152 288634 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2619309
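The POST above is the guest pushing its password blob to the OpenStack metadata password endpoint; the same 100-byte "test..." payload later surfaces as password_0 in the instance's system_metadata (see the Instance dump further down). A guest-side sketch matching the logged request, with headers and length taken from the log:

    import urllib.request

    # 25 * "test" = 100 bytes, matching the Content-Length: 100 above.
    data = b"test" * 25
    req = urllib.request.Request(
        "http://169.254.169.254/openstack/2013-10-17/password",
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    urllib.request.urlopen(req, timeout=10)  # returns 200, as logged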
Dec 03 02:20:14 compute-0 ceph-mon[192821]: pgmap v1980: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:20:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 681 B/s rd, 1.9 KiB/s wr, 0 op/s
Dec 03 02:20:15 compute-0 nova_compute[351485]: 2025-12-03 02:20:15.681 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:16 compute-0 ceph-mon[192821]: pgmap v1981: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 681 B/s rd, 1.9 KiB/s wr, 0 op/s
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.601 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.601 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.602 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.602 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.602 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.604 351492 INFO nova.compute.manager [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Terminating instance
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.605 351492 DEBUG nova.compute.manager [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
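The Acquiring/acquired/released triplets above come from oslo.concurrency: nova wraps the whole termination in a lock named after the instance UUID so concurrent operations on the same instance serialize. A minimal sketch of the pattern; the decorator is the real oslo API, the function body is illustrative:

    from oslo_concurrency import lockutils

    # Entering/leaving this function emits exactly the "Acquiring lock",
    # "acquired", and "released" DEBUG lines seen above.
    @lockutils.synchronized("48201127-9aa0-4cde-a41d-6790411480a4")
    def do_terminate_instance():
        ...  # critical section: power off, unplug VIFs, delete disks

    do_terminate_instance()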
Dec 03 02:20:16 compute-0 kernel: tap0d927baf-41 (unregistering): left promiscuous mode
Dec 03 02:20:16 compute-0 NetworkManager[48912]: <info>  [1764728416.7454] device (tap0d927baf-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.754 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:16 compute-0 ovn_controller[89134]: 2025-12-03T02:20:16Z|00185|binding|INFO|Releasing lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 from this chassis (sb_readonly=0)
Dec 03 02:20:16 compute-0 ovn_controller[89134]: 2025-12-03T02:20:16Z|00186|binding|INFO|Setting lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 down in Southbound
Dec 03 02:20:16 compute-0 ovn_controller[89134]: 2025-12-03T02:20:16Z|00187|binding|INFO|Removing iface tap0d927baf-41 ovn-installed in OVS
Dec 03 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.763 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:61:16 10.100.0.9'], port_security=['fa:16:3e:55:61:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '48201127-9aa0-4cde-a41d-6790411480a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b46a3397-654d-4ceb-be75-a322ea7e5091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ad947c5-c226-4f50-af5d-711cff08343d b2c98479-d787-4d5e-b71b-1dd64682dc39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2444ad0-b9d4-4c2c-9115-6ef22db7fd9a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=0d927baf-41d2-458f-b4c0-1218ba0eec13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.765 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 0d927baf-41d2-458f-b4c0-1218ba0eec13 in datapath b46a3397-654d-4ceb-be75-a322ea7e5091 unbound from our chassis
Dec 03 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.767 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b46a3397-654d-4ceb-be75-a322ea7e5091, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.768 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ba424624-fc0d-445c-9938-a32562aa0b69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.769 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 namespace which is not needed anymore
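The unbind sequence works off the OVN southbound Port_Binding table: ovn-controller clears the chassis column, the agent's PortBindingUpdatedEvent fires, and with no VIFs left on the datapath the ovnmeta- namespace is torn down. A sketch of inspecting the same row from the chassis, assuming ovn-sbctl can reach the southbound DB; the logical_port UUID is from the log:

    import subprocess

    # "ovn-sbctl find" queries the southbound table the agent watches;
    # an empty "chassis" column is what "unbound from our chassis" means.
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=0d927baf-41d2-458f-b4c0-1218ba0eec13"],
        check=True,
    )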
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.787 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:16 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 03 02:20:16 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 46.842s CPU time.
Dec 03 02:20:16 compute-0 systemd-machined[138558]: Machine qemu-14-instance-0000000d terminated.
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.837 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.847 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.854 351492 INFO nova.virt.libvirt.driver [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance destroyed successfully.
Dec 03 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.855 351492 DEBUG nova.objects.instance [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lazy-loading 'resources' on Instance uuid 48201127-9aa0-4cde-a41d-6790411480a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : haproxy version is 2.8.14-c23fe91
Dec 03 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : path to executable is /usr/sbin/haproxy
Dec 03 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [WARNING]  (452486) : Exiting Master process...
Dec 03 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [ALERT]    (452486) : Current worker (452488) exited with code 143 (Terminated)
Dec 03 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [WARNING]  (452486) : All workers exited. Exiting... (0)
Dec 03 02:20:16 compute-0 systemd[1]: libpod-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb.scope: Deactivated successfully.
Dec 03 02:20:16 compute-0 podman[454425]: 2025-12-03 02:20:16.996586147 +0000 UTC m=+0.071010990 container died 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.003 351492 DEBUG nova.virt.libvirt.vif [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:18:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1226962462',display_name='tempest-TestServerBasicOps-server-1226962462',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1226962462',id=13,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOrfBag91AFIZ3cgT/3v6DEUVxmWorZPsTvJBCT3v1fcFACxQDoahVOND6soOw4PzOfL8jvcBATzzdMnLLkWJn8sw8+PBGsPmPnV6EhNG8NjAI9UA8OPVUdoPITGd7W+8A==',key_name='tempest-TestServerBasicOps-954582748',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:19:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='38f1a4b24bc74f43a70b0fc06f48b9a2',ramdisk_id='',reservation_id='r-qt8l6h9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1222487710',owner_user_name='tempest-TestServerBasicOps-1222487710-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:20:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2de48f7608ea45c8ac558125d72373c4',uuid=48201127-9aa0-4cde-a41d-6790411480a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.005 351492 DEBUG nova.network.os_vif_util [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converting VIF {"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.007 351492 DEBUG nova.network.os_vif_util [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.008 351492 DEBUG os_vif [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.012 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.013 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d927baf-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.017 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.022 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.027 351492 INFO os_vif [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41')
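The unplug itself is a single OVSDB transaction (the DelPortCommand logged above) that removes the tap interface from br-int. A host-side equivalent using the stock OVS CLI; --if-exists mirrors the if_exists=True flag in the logged command:

    import subprocess

    # Same effect as DelPortCommand(port=tap0d927baf-41, bridge=br-int,
    # if_exists=True): drop the port, tolerating its absence.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap0d927baf-41"],
        check=True,
    )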
Dec 03 02:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb-userdata-shm.mount: Deactivated successfully.
Dec 03 02:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e3ca008127e0843a16153cba25a8cdfe9386b435396ea086db82b591e22278b-merged.mount: Deactivated successfully.
Dec 03 02:20:17 compute-0 podman[454425]: 2025-12-03 02:20:17.09246719 +0000 UTC m=+0.166891963 container cleanup 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:20:17 compute-0 systemd[1]: libpod-conmon-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb.scope: Deactivated successfully.
Dec 03 02:20:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 7.2 KiB/s wr, 2 op/s
Dec 03 02:20:17 compute-0 podman[454468]: 2025-12-03 02:20:17.214909814 +0000 UTC m=+0.082662930 container remove 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.235 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b160dc28-9630-48ad-a7f3-1d35b5ca817a]: (4, ('Wed Dec  3 02:20:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 (57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb)\n57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb\nWed Dec  3 02:20:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 (57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb)\n57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.238 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[acefaad2-a278-4985-aef8-af5953adedae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.239 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb46a3397-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:17 compute-0 kernel: tapb46a3397-60: left promiscuous mode
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.263 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.271 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9e1799-351d-4606-bef4-1dbbf6cb1ae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.289 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[de573608-ce00-4163-8a87-a6644b080c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.290 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1e074a76-ee02-41d9-bc3c-8552d58ed06b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.319 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c44181-be02-49b2-ac46-d1b6c4eb9555]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718180, 'reachable_time': 33575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 454484, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:17 compute-0 systemd[1]: run-netns-ovnmeta\x2db46a3397\x2d654d\x2d4ceb\x2dbe75\x2da322ea7e5091.mount: Deactivated successfully.
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.324 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 03 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.325 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[008f8fcd-015c-4747-9b38-a6671e5b5847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.939 351492 INFO nova.virt.libvirt.driver [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deleting instance files /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4_del
Dec 03 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.941 351492 INFO nova.virt.libvirt.driver [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deletion of /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4_del complete
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.018 351492 INFO nova.compute.manager [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 1.41 seconds to destroy the instance on the hypervisor.
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.019 351492 DEBUG oslo.service.loopingcall [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.019 351492 DEBUG nova.compute.manager [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.020 351492 DEBUG nova.network.neutron [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.055 351492 DEBUG nova.compute.manager [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-unplugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.055 351492 DEBUG oslo_concurrency.lockutils [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.056 351492 DEBUG oslo_concurrency.lockutils [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.056 351492 DEBUG oslo_concurrency.lockutils [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.056 351492 DEBUG nova.compute.manager [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] No waiting events found dispatching network-vif-unplugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.057 351492 DEBUG nova.compute.manager [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-unplugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
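The "Received event network-vif-unplugged-..." lines are neutron calling back into nova through the os-server-external-events API. A hedged sketch of that notification; the endpoint and token are hypothetical placeholders, while the event name and UUIDs come from the log:

    import json
    import urllib.request

    NOVA = "http://nova-api.example:8774/v2.1"  # hypothetical endpoint
    TOKEN = "gAAAA..."                          # hypothetical keystone token
    body = {"events": [{
        "name": "network-vif-unplugged",
        "server_uuid": "48201127-9aa0-4cde-a41d-6790411480a4",
        "tag": "0d927baf-41d2-458f-b4c0-1218ba0eec13",
    }]}
    req = urllib.request.Request(
        NOVA + "/os-server-external-events",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN},
    )
    urllib.request.urlopen(req)  # nova replies 200 and dispatches the event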
Dec 03 02:20:18 compute-0 ceph-mon[192821]: pgmap v1982: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 7.2 KiB/s wr, 2 op/s
Dec 03 02:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.609 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.609 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.610 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.739 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.835 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:18.836 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:20:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:18.843 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:20:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283653761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:20:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 216 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 7.6 KiB/s wr, 7 op/s
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.180 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
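The resource audit shells out to the exact ceph df command shown above to size the RBD-backed disk pool. A sketch of running and parsing it the same way; the command line is copied from the log, and the JSON fields are standard ceph df output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    # total_avail_bytes feeds the free_disk figure in the audit below.
    print(round(stats["total_avail_bytes"] / 1024**3, 1), "GiB free")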
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.300 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.301 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:20:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4283653761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.504 351492 DEBUG nova.network.neutron [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.539 351492 INFO nova.compute.manager [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 1.52 seconds to deallocate network for instance.
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.608 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.610 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.654 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.692 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.694 351492 DEBUG nova.compute.provider_tree [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
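The inventory pushed to ProviderTree determines schedulable capacity per resource class as (total - reserved) * allocation_ratio. A worked check against the numbers in the log above:

    # Capacity formula used by placement, applied to the logged inventory.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2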
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.729 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.763 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.842 351492 DEBUG oslo_concurrency.processutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.982 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.985 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3778MB free_disk=59.8972053527832GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.986 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.153 351492 DEBUG nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.155 351492 DEBUG oslo_concurrency.lockutils [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.156 351492 DEBUG oslo_concurrency.lockutils [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.157 351492 DEBUG oslo_concurrency.lockutils [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.158 351492 DEBUG nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] No waiting events found dispatching network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.160 351492 WARNING nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received unexpected event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 for instance with vm_state deleted and task_state None.
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.161 351492 DEBUG nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-deleted-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205015450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:20:20 compute-0 ceph-mon[192821]: pgmap v1983: 321 pgs: 321 active+clean; 216 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 7.6 KiB/s wr, 7 op/s
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.373 351492 DEBUG oslo_concurrency.processutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.383 351492 DEBUG nova.compute.provider_tree [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.397 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.417 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.421 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.470 351492 INFO nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Deleted allocations for instance 48201127-9aa0-4cde-a41d-6790411480a4
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.523 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.525 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.526 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.559 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.586 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:20:20 compute-0 podman[454534]: 2025-12-03 02:20:20.887945825 +0000 UTC m=+0.144163240 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:20:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:20:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3420898867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.070 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.082 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.102 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.132 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.133 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 8.4 KiB/s wr, 31 op/s
Dec 03 02:20:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2205015450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:20:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3420898867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:20:22 compute-0 nova_compute[351485]: 2025-12-03 02:20:22.019 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:22 compute-0 ceph-mon[192821]: pgmap v1984: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 8.4 KiB/s wr, 31 op/s
Dec 03 02:20:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.3 KiB/s wr, 31 op/s
Dec 03 02:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:23 compute-0 nova_compute[351485]: 2025-12-03 02:20:23.742 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:24 compute-0 ceph-mon[192821]: pgmap v1985: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.3 KiB/s wr, 31 op/s
Dec 03 02:20:24 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:24.848 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:20:24 compute-0 podman[454574]: 2025-12-03 02:20:24.871588412 +0000 UTC m=+0.110732244 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec 03 02:20:24 compute-0 podman[454575]: 2025-12-03 02:20:24.877944152 +0000 UTC m=+0.102865781 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:20:24 compute-0 podman[454578]: 2025-12-03 02:20:24.905276005 +0000 UTC m=+0.122398124 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:20:24 compute-0 podman[454576]: 2025-12-03 02:20:24.908764914 +0000 UTC m=+0.131735838 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec 03 02:20:24 compute-0 podman[454573]: 2025-12-03 02:20:24.932482545 +0000 UTC m=+0.176125184 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.136 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.136 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.137 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:20:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.7 KiB/s wr, 31 op/s
Dec 03 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.707 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.708 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.709 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.710 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:20:26 compute-0 ceph-mon[192821]: pgmap v1986: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.7 KiB/s wr, 31 op/s
Dec 03 02:20:27 compute-0 nova_compute[351485]: 2025-12-03 02:20:27.025 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 7.2 KiB/s wr, 30 op/s
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.079 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.107 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.108 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.109 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.110 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.110 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:28 compute-0 ceph-mon[192821]: pgmap v1987: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 7.2 KiB/s wr, 30 op/s
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:20:28
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'images', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 03 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.746 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:28 compute-0 ovn_controller[89134]: 2025-12-03T02:20:28Z|00188|binding|INFO|Releasing lport 50c454e1-4a4b-4aad-b47b-dafc7b079018 from this chassis (sb_readonly=0)
Dec 03 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:20:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Dec 03 02:20:29 compute-0 ovn_controller[89134]: 2025-12-03T02:20:29Z|00189|binding|INFO|Releasing lport 50c454e1-4a4b-4aad-b47b-dafc7b079018 from this chassis (sb_readonly=0)
Dec 03 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.250 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.545 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.546 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:29 compute-0 podman[158098]: time="2025-12-03T02:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
Dec 03 02:20:30 compute-0 ceph-mon[192821]: pgmap v1988: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Dec 03 02:20:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 KiB/s wr, 24 op/s
Dec 03 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:20:31 compute-0 nova_compute[351485]: 2025-12-03 02:20:31.849 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728416.846394, 48201127-9aa0-4cde-a41d-6790411480a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:20:31 compute-0 nova_compute[351485]: 2025-12-03 02:20:31.851 351492 INFO nova.compute.manager [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Stopped (Lifecycle Event)
Dec 03 02:20:31 compute-0 nova_compute[351485]: 2025-12-03 02:20:31.878 351492 DEBUG nova.compute.manager [None req-89ec3af4-db0a-4a58-8dbc-67cb64d9a8f3 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:20:32 compute-0 nova_compute[351485]: 2025-12-03 02:20:32.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:32 compute-0 ceph-mon[192821]: pgmap v1989: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 KiB/s wr, 24 op/s
Dec 03 02:20:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 1022 B/s wr, 0 op/s
Dec 03 02:20:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:33 compute-0 nova_compute[351485]: 2025-12-03 02:20:33.749 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:34 compute-0 ceph-mon[192821]: pgmap v1990: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 1022 B/s wr, 0 op/s
Dec 03 02:20:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 1022 B/s wr, 0 op/s
Dec 03 02:20:35 compute-0 nova_compute[351485]: 2025-12-03 02:20:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:20:35 compute-0 nova_compute[351485]: 2025-12-03 02:20:35.580 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:20:36 compute-0 sshd-session[454672]: Invalid user userroot from 154.113.10.113 port 37616
Dec 03 02:20:36 compute-0 sshd-session[454672]: Received disconnect from 154.113.10.113 port 37616:11: Bye Bye [preauth]
Dec 03 02:20:36 compute-0 sshd-session[454672]: Disconnected from invalid user userroot 154.113.10.113 port 37616 [preauth]
Dec 03 02:20:36 compute-0 ceph-mon[192821]: pgmap v1991: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 1022 B/s wr, 0 op/s
Dec 03 02:20:37 compute-0 nova_compute[351485]: 2025-12-03 02:20:37.035 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:20:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:38 compute-0 ceph-mon[192821]: pgmap v1992: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:20:38 compute-0 nova_compute[351485]: 2025-12-03 02:20:38.750 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007578104650973498 of space, bias 1.0, pg target 0.22734313952920493 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:20:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:20:40 compute-0 ceph-mon[192821]: pgmap v1993: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:20:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:20:42 compute-0 nova_compute[351485]: 2025-12-03 02:20:42.039 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:42 compute-0 ceph-mon[192821]: pgmap v1994: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:20:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:43 compute-0 nova_compute[351485]: 2025-12-03 02:20:43.754 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:44 compute-0 ceph-mon[192821]: pgmap v1995: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:44 compute-0 podman[454674]: 2025-12-03 02:20:44.803102351 +0000 UTC m=+0.094289474 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:20:44 compute-0 podman[454675]: 2025-12-03 02:20:44.832493551 +0000 UTC m=+0.112175799 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 03 02:20:44 compute-0 podman[454676]: 2025-12-03 02:20:44.838268014 +0000 UTC m=+0.091649829 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:20:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:46 compute-0 ceph-mon[192821]: pgmap v1996: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:47 compute-0 nova_compute[351485]: 2025-12-03 02:20:47.042 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:20:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3041175653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:20:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:20:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3041175653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:20:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3041175653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:20:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3041175653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:48 compute-0 ceph-mon[192821]: pgmap v1997: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:48 compute-0 nova_compute[351485]: 2025-12-03 02:20:48.759 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:50 compute-0 ceph-mon[192821]: pgmap v1998: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:51 compute-0 ceph-mon[192821]: pgmap v1999: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:51 compute-0 podman[454730]: 2025-12-03 02:20:51.897848609 +0000 UTC m=+0.144893983 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec 03 02:20:52 compute-0 nova_compute[351485]: 2025-12-03 02:20:52.045 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:53 compute-0 nova_compute[351485]: 2025-12-03 02:20:53.763 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:54 compute-0 ceph-mon[192821]: pgmap v2000: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:55 compute-0 podman[454750]: 2025-12-03 02:20:55.863631699 +0000 UTC m=+0.103861054 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 02:20:55 compute-0 podman[454752]: 2025-12-03 02:20:55.88559814 +0000 UTC m=+0.111657215 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64)
Dec 03 02:20:55 compute-0 podman[454755]: 2025-12-03 02:20:55.896438996 +0000 UTC m=+0.128476200 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 03 02:20:55 compute-0 podman[454751]: 2025-12-03 02:20:55.901983192 +0000 UTC m=+0.131675809 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:20:55 compute-0 podman[454749]: 2025-12-03 02:20:55.914829465 +0000 UTC m=+0.157963492 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 02:20:56 compute-0 ceph-mon[192821]: pgmap v2001: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:20:57 compute-0 nova_compute[351485]: 2025-12-03 02:20:57.048 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:20:58 compute-0 ceph-mon[192821]: pgmap v2002: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:20:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:20:58 compute-0 nova_compute[351485]: 2025-12-03 02:20:58.767 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:20:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:20:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:59.653 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:20:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:59.654 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:20:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:59.654 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:20:59 compute-0 podman[158098]: time="2025-12-03T02:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8661 "" "Go-http-client/1.1"
Dec 03 02:21:00 compute-0 ceph-mon[192821]: pgmap v2003: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:21:02 compute-0 nova_compute[351485]: 2025-12-03 02:21:02.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:02 compute-0 ceph-mon[192821]: pgmap v2004: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:03 compute-0 nova_compute[351485]: 2025-12-03 02:21:03.770 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:04 compute-0 ceph-mon[192821]: pgmap v2005: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:06 compute-0 ceph-mon[192821]: pgmap v2006: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:06 compute-0 ovn_controller[89134]: 2025-12-03T02:21:06Z|00190|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec 03 02:21:07 compute-0 nova_compute[351485]: 2025-12-03 02:21:07.056 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:08 compute-0 ceph-mon[192821]: pgmap v2007: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 03 02:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:08 compute-0 sudo[454850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:08 compute-0 sudo[454850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:08 compute-0 sudo[454850]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:08 compute-0 sudo[454875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:21:08 compute-0 sudo[454875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:08 compute-0 sudo[454875]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:08 compute-0 sudo[454900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:08 compute-0 sudo[454900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:08 compute-0 sudo[454900]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:08 compute-0 nova_compute[351485]: 2025-12-03 02:21:08.772 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:08 compute-0 sudo[454925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:21:08 compute-0 sudo[454925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:09 compute-0 sudo[454925]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:21:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4a0ee2e1-c667-4b50-a81f-3a6d184270d6 does not exist
Dec 03 02:21:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d5bbfe0e-7364-4114-9ce7-2848f397aeb9 does not exist
Dec 03 02:21:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a9bf96b-90e8-4d64-9b5a-d05a6728e568 does not exist
Dec 03 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:21:09 compute-0 sudo[454980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:09 compute-0 sudo[454980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:09 compute-0 sudo[454980]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:09 compute-0 sudo[455005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:21:09 compute-0 sudo[455005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:09 compute-0 sudo[455005]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:09 compute-0 sudo[455030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:09 compute-0 sudo[455030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:09 compute-0 sudo[455030]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:10 compute-0 sudo[455055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:21:10 compute-0 sudo[455055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:10 compute-0 ceph-mon[192821]: pgmap v2008: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.761580512 +0000 UTC m=+0.084979201 container create 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.727647524 +0000 UTC m=+0.051046243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:21:10 compute-0 systemd[1]: Started libpod-conmon-2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898.scope.
Dec 03 02:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.920007336 +0000 UTC m=+0.243406065 container init 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.937804009 +0000 UTC m=+0.261202688 container start 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.944942151 +0000 UTC m=+0.268340860 container attach 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 02:21:10 compute-0 crazy_hertz[455135]: 167 167
Dec 03 02:21:10 compute-0 systemd[1]: libpod-2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898.scope: Deactivated successfully.
Dec 03 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.956331762 +0000 UTC m=+0.279730441 container died 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-df070375a94010c9c5a20c5eef1573668a3163d290669d2a8bd35ab8ebef2997-merged.mount: Deactivated successfully.
Dec 03 02:21:11 compute-0 podman[455119]: 2025-12-03 02:21:11.035399545 +0000 UTC m=+0.358798204 container remove 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:21:11 compute-0 systemd[1]: libpod-conmon-2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898.scope: Deactivated successfully.
Dec 03 02:21:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.309970969 +0000 UTC m=+0.077320774 container create 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.283866141 +0000 UTC m=+0.051215956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:21:11 compute-0 systemd[1]: Started libpod-conmon-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope.
Dec 03 02:21:11 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.481188194 +0000 UTC m=+0.248538049 container init 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.526194685 +0000 UTC m=+0.293544500 container start 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.533060359 +0000 UTC m=+0.300410174 container attach 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:21:12 compute-0 nova_compute[351485]: 2025-12-03 02:21:12.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:12 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 02:21:12 compute-0 ceph-mon[192821]: pgmap v2009: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:12 compute-0 hopeful_poincare[455174]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:21:12 compute-0 hopeful_poincare[455174]: --> relative data size: 1.0
Dec 03 02:21:12 compute-0 hopeful_poincare[455174]: --> All data devices are unavailable
Dec 03 02:21:12 compute-0 systemd[1]: libpod-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope: Deactivated successfully.
Dec 03 02:21:12 compute-0 systemd[1]: libpod-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope: Consumed 1.221s CPU time.
Dec 03 02:21:12 compute-0 podman[455204]: 2025-12-03 02:21:12.890884987 +0000 UTC m=+0.054898332 container died 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a-merged.mount: Deactivated successfully.
Dec 03 02:21:12 compute-0 podman[455204]: 2025-12-03 02:21:12.965268238 +0000 UTC m=+0.129281593 container remove 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 02:21:12 compute-0 systemd[1]: libpod-conmon-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope: Deactivated successfully.
Dec 03 02:21:13 compute-0 sudo[455055]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:13 compute-0 sudo[455216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:13 compute-0 sudo[455216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:13 compute-0 sudo[455216]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:13 compute-0 sudo[455241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:21:13 compute-0 sudo[455241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:13 compute-0 sudo[455241]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:13 compute-0 sudo[455266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:13 compute-0 sudo[455266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:13 compute-0 sudo[455266]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:13 compute-0 sudo[455291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:21:13 compute-0 sudo[455291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:13 compute-0 nova_compute[351485]: 2025-12-03 02:21:13.774 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.037243482 +0000 UTC m=+0.057846184 container create a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 02:21:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 03 02:21:14 compute-0 systemd[1]: Started libpod-conmon-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope.
Dec 03 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.01522769 +0000 UTC m=+0.035830482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:21:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.14798567 +0000 UTC m=+0.168588422 container init a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.163475337 +0000 UTC m=+0.184078049 container start a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.169984511 +0000 UTC m=+0.190587263 container attach a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 02:21:14 compute-0 vigilant_booth[455366]: 167 167
Dec 03 02:21:14 compute-0 systemd[1]: libpod-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope: Deactivated successfully.
Dec 03 02:21:14 compute-0 conmon[455366]: conmon a4e5ca782f10c851d5e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope/container/memory.events
Dec 03 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.176213897 +0000 UTC m=+0.196816609 container died a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c27f156ffd3591e36728d5c3be233615f7c7a6012e042b5de41c0f7c80f05812-merged.mount: Deactivated successfully.
Dec 03 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.234764361 +0000 UTC m=+0.255367053 container remove a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 02:21:14 compute-0 systemd[1]: libpod-conmon-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope: Deactivated successfully.
Dec 03 02:21:14 compute-0 ceph-mon[192821]: pgmap v2010: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.550526878 +0000 UTC m=+0.096990050 container create db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:21:14 compute-0 nova_compute[351485]: 2025-12-03 02:21:14.581 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.515496989 +0000 UTC m=+0.061960211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:21:14 compute-0 systemd[1]: Started libpod-conmon-db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536.scope.
Dec 03 02:21:14 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.773351201 +0000 UTC m=+0.319814463 container init db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.796296999 +0000 UTC m=+0.342760151 container start db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.80234093 +0000 UTC m=+0.348804182 container attach db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:21:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:15 compute-0 gallant_gates[455404]: {
Dec 03 02:21:15 compute-0 gallant_gates[455404]:     "0": [
Dec 03 02:21:15 compute-0 gallant_gates[455404]:         {
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "devices": [
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "/dev/loop3"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             ],
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_name": "ceph_lv0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_size": "21470642176",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "name": "ceph_lv0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "tags": {
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cluster_name": "ceph",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.crush_device_class": "",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.encrypted": "0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osd_id": "0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.type": "block",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.vdo": "0"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             },
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "type": "block",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "vg_name": "ceph_vg0"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:         }
Dec 03 02:21:15 compute-0 gallant_gates[455404]:     ],
Dec 03 02:21:15 compute-0 gallant_gates[455404]:     "1": [
Dec 03 02:21:15 compute-0 gallant_gates[455404]:         {
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "devices": [
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "/dev/loop4"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             ],
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_name": "ceph_lv1",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_size": "21470642176",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "name": "ceph_lv1",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "tags": {
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cluster_name": "ceph",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.crush_device_class": "",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.encrypted": "0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osd_id": "1",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.type": "block",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.vdo": "0"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             },
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "type": "block",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "vg_name": "ceph_vg1"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:         }
Dec 03 02:21:15 compute-0 gallant_gates[455404]:     ],
Dec 03 02:21:15 compute-0 gallant_gates[455404]:     "2": [
Dec 03 02:21:15 compute-0 gallant_gates[455404]:         {
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "devices": [
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "/dev/loop5"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             ],
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_name": "ceph_lv2",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_size": "21470642176",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "name": "ceph_lv2",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "tags": {
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.cluster_name": "ceph",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.crush_device_class": "",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.encrypted": "0",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osd_id": "2",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.type": "block",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:                 "ceph.vdo": "0"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             },
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "type": "block",
Dec 03 02:21:15 compute-0 gallant_gates[455404]:             "vg_name": "ceph_vg2"
Dec 03 02:21:15 compute-0 gallant_gates[455404]:         }
Dec 03 02:21:15 compute-0 gallant_gates[455404]:     ]
Dec 03 02:21:15 compute-0 gallant_gates[455404]: }
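The JSON block above is the output of `ceph-volume lvm list --format json`, run by cephadm inside the short-lived gallant_gates container: one key per OSD id, each entry carrying the LVM tags that bind the logical volume to the cluster fsid and OSD fsid. A minimal sketch of consuming that output follows; the `osd_block_devices` helper is hypothetical, not part of cephadm or ceph-volume:

```python
# Minimal sketch (hypothetical helper): map OSD ids to their LVM paths and
# backing devices from `ceph-volume lvm list --format json` output.
import json
import subprocess

def osd_block_devices(raw_json: str) -> dict:
    """Return {osd_id: (lv_path, devices)} from the lvm list JSON."""
    listing = json.loads(raw_json)
    result = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            if lv.get("type") == "block":  # only the block LV, as in the log
                result[int(osd_id)] = (lv["lv_path"], lv["devices"])
    return result

if __name__ == "__main__":
    # Assumes ceph-volume is runnable on the host (cephadm runs it in a container).
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, (lv_path, devices) in sorted(osd_block_devices(out).items()):
        print(f"osd.{osd_id}: {lv_path} on {', '.join(devices)}")
```

Against the listing logged above this would print three lines, one per OSD, e.g. `osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3`.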
Dec 03 02:21:15 compute-0 systemd[1]: libpod-db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536.scope: Deactivated successfully.
Dec 03 02:21:15 compute-0 podman[455388]: 2025-12-03 02:21:15.659436525 +0000 UTC m=+1.205899677 container died db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05-merged.mount: Deactivated successfully.
Dec 03 02:21:15 compute-0 podman[455388]: 2025-12-03 02:21:15.753918804 +0000 UTC m=+1.300381966 container remove db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:21:15 compute-0 systemd[1]: libpod-conmon-db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536.scope: Deactivated successfully.
Dec 03 02:21:15 compute-0 sudo[455291]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:15 compute-0 podman[455424]: 2025-12-03 02:21:15.816677286 +0000 UTC m=+0.094429028 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:21:15 compute-0 podman[455422]: 2025-12-03 02:21:15.814831294 +0000 UTC m=+0.097778933 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 02:21:15 compute-0 podman[455414]: 2025-12-03 02:21:15.839743377 +0000 UTC m=+0.136154436 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 03 02:21:15 compute-0 sudo[455474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:15 compute-0 sudo[455474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:15 compute-0 sudo[455474]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:16 compute-0 sudo[455505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:21:16 compute-0 sudo[455505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:16 compute-0 sudo[455505]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:16 compute-0 sudo[455530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:16 compute-0 sudo[455530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:16 compute-0 sudo[455530]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:16 compute-0 sudo[455555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:21:16 compute-0 sudo[455555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:16 compute-0 ceph-mon[192821]: pgmap v2011: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:16 compute-0 podman[455618]: 2025-12-03 02:21:16.887065346 +0000 UTC m=+0.090432985 container create 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 02:21:16 compute-0 podman[455618]: 2025-12-03 02:21:16.847866229 +0000 UTC m=+0.051233918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:21:16 compute-0 systemd[1]: Started libpod-conmon-8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970.scope.
Dec 03 02:21:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.046258312 +0000 UTC m=+0.249625951 container init 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.06070309 +0000 UTC m=+0.264070699 container start 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 02:21:17 compute-0 nova_compute[351485]: 2025-12-03 02:21:17.061 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.066088572 +0000 UTC m=+0.269456181 container attach 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 02:21:17 compute-0 sweet_elgamal[455634]: 167 167
Dec 03 02:21:17 compute-0 systemd[1]: libpod-8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970.scope: Deactivated successfully.
Dec 03 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.077426312 +0000 UTC m=+0.280793951 container died 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec 03 02:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e352f0acd372fd2763ff8eefdc5c70419cf3cb2543b1f644a85956bc8e16e7b4-merged.mount: Deactivated successfully.
Dec 03 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.155941049 +0000 UTC m=+0.359308668 container remove 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:21:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:17 compute-0 systemd[1]: libpod-conmon-8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970.scope: Deactivated successfully.
Dec 03 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.46863271 +0000 UTC m=+0.098994706 container create 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.427958612 +0000 UTC m=+0.058320668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:21:17 compute-0 systemd[1]: Started libpod-conmon-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope.
Dec 03 02:21:17 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.650048324 +0000 UTC m=+0.280410350 container init 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.673030303 +0000 UTC m=+0.303392329 container start 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.679778374 +0000 UTC m=+0.310140470 container attach 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 02:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:18 compute-0 ceph-mon[192821]: pgmap v2012: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:18 compute-0 nova_compute[351485]: 2025-12-03 02:21:18.778 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:18 compute-0 frosty_kilby[455676]: {
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "osd_id": 2,
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "type": "bluestore"
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:     },
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "osd_id": 1,
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "type": "bluestore"
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:     },
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "osd_id": 0,
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:         "type": "bluestore"
Dec 03 02:21:18 compute-0 frosty_kilby[455676]:     }
Dec 03 02:21:18 compute-0 frosty_kilby[455676]: }
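The frosty_kilby output above is `ceph-volume raw list --format json` for the same three OSDs, keyed by osd_uuid rather than OSD id. A small sketch, assuming the two JSON documents exactly as logged, that cross-checks the raw listing against the earlier lvm listing; `check_osd_consistency` is a hypothetical helper:

```python
# Minimal sketch (hypothetical helper): verify that `ceph-volume raw list`
# and `ceph-volume lvm list` agree on every OSD's fsid and id.
import json

def check_osd_consistency(lvm_json: str, raw_json: str) -> list:
    lvm = json.loads(lvm_json)   # keyed by OSD id ("0", "1", ...)
    raw = json.loads(raw_json)   # keyed by OSD fsid (uuid)
    problems = []
    for osd_id, lvs in lvm.items():
        tags = lvs[0]["tags"]
        entry = raw.get(tags["ceph.osd_fsid"])
        if entry is None:
            problems.append(f"osd.{osd_id}: osd_fsid missing from raw list")
        elif str(entry["osd_id"]) != osd_id:
            problems.append(
                f"osd.{osd_id}: raw list reports osd_id {entry['osd_id']}")
    return problems  # empty list means the two listings agree
```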
Dec 03 02:21:18 compute-0 systemd[1]: libpod-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope: Deactivated successfully.
Dec 03 02:21:18 compute-0 systemd[1]: libpod-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope: Consumed 1.242s CPU time.
Dec 03 02:21:18 compute-0 podman[455709]: 2025-12-03 02:21:18.993182626 +0000 UTC m=+0.051166156 container died 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36-merged.mount: Deactivated successfully.
Dec 03 02:21:19 compute-0 podman[455709]: 2025-12-03 02:21:19.16825106 +0000 UTC m=+0.226234450 container remove 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec 03 02:21:19 compute-0 systemd[1]: libpod-conmon-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope: Deactivated successfully.
Dec 03 02:21:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:19 compute-0 sudo[455555]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:21:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:21:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:21:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:21:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e7e81c02-08ad-41d5-9dbd-c4521bb1db0d does not exist
Dec 03 02:21:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a38a251f-2b48-4871-9505-9ad7a3514fed does not exist
Dec 03 02:21:19 compute-0 sudo[455723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:21:19 compute-0 sudo[455723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:19 compute-0 sudo[455723]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.512 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.513 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.524 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.525 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:21:19.525951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.564 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 43.4296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:21:19.566663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.572 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.575 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 1172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:21:19.574736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:21:19 compute-0 sudo[455748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.583 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:21:19.583300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.588 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 sudo[455748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.591 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:21:19.587853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:21:19.591408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:21:19.595736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 sudo[455748]: pam_unix(sudo:session): session closed for user root
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.615 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.616 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.617 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:21:19.619707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.638 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.642 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.643 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.686 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 30342144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.687 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.689 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:21:19.688499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:21:19.690614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2892253301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 193523124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:21:19.692630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.695 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:21:19.694785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:21:19.696811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.697 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:21:19.699088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.700 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:21:19.701184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 9924409915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:21:19.703224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.707 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:21:19.706186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.710 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.710 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 122560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.713 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.713 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:21:19.709883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:21:19.711706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:21:19.712992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:21:19.714854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:21:19.716570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:21:19.717975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:21:19.719284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:21:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:21:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220651878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.121 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
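nova's libvirt driver sizes the RBD-backed disk pool by shelling out to `ceph df`; `oslo_concurrency.processutils.execute` is the wrapper that logs the command, return code, and elapsed time seen above. A sketch using the exact command line from the log (the JSON keys are ceph's standard `df --format=json` output):

```python
import json

from oslo_concurrency import processutils

# The same command the resource-audit log shows nova running.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)

# Cluster-wide totals live under "stats" in ceph df's JSON output.
avail_gib = stats['stats']['total_avail_bytes'] / (1 << 30)
print(f'{avail_gib:.2f} GiB available')
```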
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.234 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.235 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:21:20 compute-0 ceph-mon[192821]: pgmap v2013: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:21:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:21:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2220651878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.738 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.740 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3747MB free_disk=59.94282150268555GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.741 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.741 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.825 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.826 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.826 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.866 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:21:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 03 02:21:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:21:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786476869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.470 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.489 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.516 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
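Placement derives schedulable capacity per resource class as `(total - reserved) * allocation_ratio`, so the unchanged inventory above amounts to 32 schedulable vCPUs, 7167 MB of RAM, and about 52.2 GB of disk:

```python
# Capacity placement will schedule against, from the inventory above:
#   capacity = (total - reserved) * allocation_ratio
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
```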
Dec 03 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.522 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.523 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
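The acquire/release bracket around `_update_available_resource` (held 0.782s) is the standard `oslo_concurrency.lockutils` pattern; a sketch of its two common forms, with a placeholder body:

```python
from oslo_concurrency import lockutils


def update_available_resource():
    """Placeholder for the critical section."""


# Context-manager form:
with lockutils.lock('compute_resources'):
    update_available_resource()


# Decorator form, equivalent to the wrapper the resource tracker puts
# around its update methods:
@lockutils.synchronized('compute_resources')
def refresh():
    update_available_resource()
```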
Dec 03 02:21:22 compute-0 nova_compute[351485]: 2025-12-03 02:21:22.067 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:22 compute-0 ceph-mon[192821]: pgmap v2014: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 03 02:21:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1786476869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:21:22 compute-0 podman[455821]: 2025-12-03 02:21:22.891978204 +0000 UTC m=+0.128765917 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 02:21:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 03 02:21:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:23 compute-0 nova_compute[351485]: 2025-12-03 02:21:23.780 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:24 compute-0 ceph-mon[192821]: pgmap v2015: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 03 02:21:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec 03 02:21:26 compute-0 ceph-mon[192821]: pgmap v2016: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec 03 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.524 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.525 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.526 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.710 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.711 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.711 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.712 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:21:26 compute-0 podman[455843]: 2025-12-03 02:21:26.869726803 +0000 UTC m=+0.119626010 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350)
Dec 03 02:21:26 compute-0 podman[455845]: 2025-12-03 02:21:26.87918555 +0000 UTC m=+0.113217269 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, vcs-type=git, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler)
Dec 03 02:21:26 compute-0 podman[455842]: 2025-12-03 02:21:26.887755752 +0000 UTC m=+0.139923063 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:21:26 compute-0 podman[455844]: 2025-12-03 02:21:26.891854888 +0000 UTC m=+0.128333946 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:21:26 compute-0 podman[455846]: 2025-12-03 02:21:26.90150003 +0000 UTC m=+0.134042017 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
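The burst of `health_status ... healthy` events above is podman's periodic healthcheck run for each managed container, executing the `healthcheck.test` command from the container config. The same status can be read back with the standard CLI; a sketch:

```python
import json
import subprocess


def container_health(name: str) -> str:
    """Return a container's last healthcheck status via podman inspect."""
    out = subprocess.check_output(['podman', 'inspect', name])
    state = json.loads(out)[0].get('State', {})
    return state.get('Health', {}).get('Status', 'no healthcheck')


for name in ('ceilometer_agent_ipmi', 'ovn_controller', 'node_exporter'):
    print(name, container_health(name))  # e.g. "healthy"
```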
Dec 03 02:21:27 compute-0 nova_compute[351485]: 2025-12-03 02:21:27.070 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Dec 03 02:21:28 compute-0 ceph-mon[192821]: pgmap v2017: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Dec 03 02:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:21:28
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root']
Dec 03 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
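`prepared 0/10 changes` means the upmap optimizer examined the pools listed and found no PG remapping worth making this round (10 is the per-round change budget); with all 321 PGs active+clean on a small cluster that is the expected steady state. The balancer's state can be checked with the standard CLI, sketched here:

```python
import json
import subprocess

# Standard ceph CLI; shows whether the balancer is active and in which
# mode (the log above shows mode "upmap").
out = subprocess.check_output(
    ['ceph', 'balancer', 'status', '--format', 'json'])
status = json.loads(out)
print(status['active'], status['mode'])
```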
Dec 03 02:21:28 compute-0 nova_compute[351485]: 2025-12-03 02:21:28.783 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:21:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 03 02:21:29 compute-0 podman[158098]: time="2025-12-03T02:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8655 "" "Go-http-client/1.1"
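These two GET requests are a client on the libpod REST API (the podman system service) listing containers and then pulling one-shot stats. Anything that speaks HTTP over the unix socket can issue the same calls; a stdlib-only sketch, assuming the default rootful socket path `/run/podman/podman.sock`:

```python
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket."""

    def __init__(self, socket_path):
        super().__init__('localhost')
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print(len(containers), 'containers')
```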
Dec 03 02:21:30 compute-0 ceph-mon[192821]: pgmap v2018: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 03 02:21:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 03 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.797 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.817 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.818 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
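The `network_info` blob written back to the cache is a list of VIF dicts; the nesting visible in the log (vif, then network, then subnets, then ips) is what consumers walk to recover addresses. A sketch over a minimal slice of the structure above:

```python
# A minimal slice of the network_info structure from the log line above.
network_info = [{
    "id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/16",
        "ips": [{"address": "10.100.0.239", "type": "fixed"}],
    }]},
}]


def fixed_ips(network_info):
    """Collect all fixed IPs from a nova network_info cache entry."""
    return [ip["address"]
            for vif in network_info
            for subnet in vif["network"]["subnets"]
            for ip in subnet["ips"]
            if ip["type"] == "fixed"]


print(fixed_ips(network_info))  # ['10.100.0.239']
```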
Dec 03 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.820 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.820 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.821 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.822 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:32 compute-0 nova_compute[351485]: 2025-12-03 02:21:32.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:32 compute-0 ceph-mon[192821]: pgmap v2019: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 03 02:21:32 compute-0 nova_compute[351485]: 2025-12-03 02:21:32.868 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Dec 03 02:21:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:33 compute-0 nova_compute[351485]: 2025-12-03 02:21:33.786 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:34 compute-0 ceph-mon[192821]: pgmap v2020: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Dec 03 02:21:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Dec 03 02:21:35 compute-0 nova_compute[351485]: 2025-12-03 02:21:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:35 compute-0 nova_compute[351485]: 2025-12-03 02:21:35.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
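Each `Running periodic task ComputeManager._...` line is the oslo_service periodic-task runner walking its decorated methods; `_reclaim_queued_deletes` then no-ops because `reclaim_instance_interval` is unset. A sketch of the decorator pattern (the option registration here is illustrative; nova defines its own):

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])


class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Same guard the log shows: a non-positive interval disables
        # the task, so it returns immediately after being scheduled.
        if CONF.reclaim_instance_interval <= 0:
            return
        # ... reclaim soft-deleted instances here ...
```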
Dec 03 02:21:36 compute-0 ceph-mon[192821]: pgmap v2021: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Dec 03 02:21:37 compute-0 nova_compute[351485]: 2025-12-03 02:21:37.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Dec 03 02:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:38 compute-0 ceph-mon[192821]: pgmap v2022: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Dec 03 02:21:38 compute-0 nova_compute[351485]: 2025-12-03 02:21:38.789 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007578104650973498 of space, bias 1.0, pg target 0.22734313952920493 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
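The pg_autoscaler targets above are reproducible from the logged inputs: each pool's raw target is `capacity_ratio * bias * T`, and back-solving gives T = 300 for every pool, consistent with the default `mon_target_pg_per_osd = 100` on a 3-OSD cluster (the OSD count is an inference from these numbers, not stated in the log). The raw target is then quantized to a power of two, subject to per-pool minimums, hence the many `quantized to 32` results even for near-zero targets:

```python
# Reproducing the pg_autoscaler arithmetic from the logged inputs.
# T = 300 is inferred by back-solving, e.g. 0.2273.../0.0007578... = 300,
# consistent with mon_target_pg_per_osd=100 on an assumed 3-OSD cluster.
T = 300

pools = {  # name: (capacity_ratio, bias) as logged above
    '.mgr':               (7.185749983720779e-06, 1.0),
    'vms':                (0.0007578104650973498, 1.0),
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
}
for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * T)
# .mgr 0.002155..., vms 0.227343..., cephfs.cephfs.meta 0.000610...,
# matching the "pg target" values in the log before quantization.
```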
Dec 03 02:21:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:21:40 compute-0 ceph-mon[192821]: pgmap v2023: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Dec 03 02:21:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:42 compute-0 nova_compute[351485]: 2025-12-03 02:21:42.079 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:42 compute-0 ceph-mon[192821]: pgmap v2024: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:43 compute-0 nova_compute[351485]: 2025-12-03 02:21:43.793 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:44 compute-0 ceph-mon[192821]: pgmap v2025: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.576 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.577 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.578 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.578 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.578 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.579 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.595 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.604 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.604 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Image id 8876482c-db67-48c0-9203-60685152fc9d yields fingerprint 3a2172ba33277b1fb4d8f3381bb190374609d10e _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.605 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] image 8876482c-db67-48c0-9203-60685152fc9d at (/var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e): checking
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.605 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] image 8876482c-db67-48c0-9203-60685152fc9d at (/var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.608 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.609 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.609 351492 WARNING nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.609 351492 WARNING nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.610 351492 WARNING nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.610 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Active base files: /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.610 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Removable base files: /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4 /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.611 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.611 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.611 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 03 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec 03 02:21:46 compute-0 ceph-mon[192821]: pgmap v2026: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:46 compute-0 podman[455943]: 2025-12-03 02:21:46.890949826 +0000 UTC m=+0.122620044 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Dec 03 02:21:46 compute-0 podman[455942]: 2025-12-03 02:21:46.908962194 +0000 UTC m=+0.146511158 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:21:46 compute-0 podman[455944]: 2025-12-03 02:21:46.921384175 +0000 UTC m=+0.156186512 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:21:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2028091222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:21:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2028091222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:21:47 compute-0 nova_compute[351485]: 2025-12-03 02:21:47.083 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2028091222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:21:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2028091222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.488701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507488842, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1591, "num_deletes": 251, "total_data_size": 2575246, "memory_usage": 2615792, "flush_reason": "Manual Compaction"}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507510857, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2528507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40114, "largest_seqno": 41704, "table_properties": {"data_size": 2521096, "index_size": 4418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15353, "raw_average_key_size": 20, "raw_value_size": 2506205, "raw_average_value_size": 3276, "num_data_blocks": 197, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728339, "oldest_key_time": 1764728339, "file_creation_time": 1764728507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 22243 microseconds, and 12752 cpu microseconds.
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.510956) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2528507 bytes OK
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.510983) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.514377) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.514402) EVENT_LOG_v1 {"time_micros": 1764728507514395, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.514425) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2568334, prev total WAL file size 2568334, number of live WAL files 2.
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.516014) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2469KB)], [95(7028KB)]
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507516060, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9725281, "oldest_snapshot_seqno": -1}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5777 keys, 7972853 bytes, temperature: kUnknown
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507599460, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7972853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7936013, "index_size": 21306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149768, "raw_average_key_size": 25, "raw_value_size": 7833272, "raw_average_value_size": 1355, "num_data_blocks": 848, "num_entries": 5777, "num_filter_entries": 5777, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.600676) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7972853 bytes
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.604052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.3 rd, 94.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 6296, records dropped: 519 output_compression: NoCompression
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.604084) EVENT_LOG_v1 {"time_micros": 1764728507604069, "job": 56, "event": "compaction_finished", "compaction_time_micros": 84345, "compaction_time_cpu_micros": 36834, "output_level": 6, "num_output_files": 1, "total_output_size": 7972853, "num_input_records": 6296, "num_output_records": 5777, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507607111, "job": 56, "event": "table_file_deletion", "file_number": 97}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507610788, "job": 56, "event": "table_file_deletion", "file_number": 95}
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.515824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:48 compute-0 ceph-mon[192821]: pgmap v2027: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:48 compute-0 nova_compute[351485]: 2025-12-03 02:21:48.795 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:50 compute-0 ceph-mon[192821]: pgmap v2028: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:52 compute-0 nova_compute[351485]: 2025-12-03 02:21:52.086 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:52 compute-0 ceph-mon[192821]: pgmap v2029: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:53 compute-0 nova_compute[351485]: 2025-12-03 02:21:53.800 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:53 compute-0 podman[456001]: 2025-12-03 02:21:53.885238326 +0000 UTC m=+0.135039735 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:21:54 compute-0 ceph-mon[192821]: pgmap v2030: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:56 compute-0 ceph-mon[192821]: pgmap v2031: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:57 compute-0 nova_compute[351485]: 2025-12-03 02:21:57.090 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:57 compute-0 podman[456022]: 2025-12-03 02:21:57.876467404 +0000 UTC m=+0.105195991 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:21:57 compute-0 podman[456021]: 2025-12-03 02:21:57.887207837 +0000 UTC m=+0.123684753 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:21:57 compute-0 podman[456023]: 2025-12-03 02:21:57.895854362 +0000 UTC m=+0.117133939 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc.)
Dec 03 02:21:57 compute-0 podman[456020]: 2025-12-03 02:21:57.897829387 +0000 UTC m=+0.143152213 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:21:57 compute-0 podman[456028]: 2025-12-03 02:21:57.910192197 +0000 UTC m=+0.127244515 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:21:58 compute-0 ceph-mon[192821]: pgmap v2032: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:58 compute-0 nova_compute[351485]: 2025-12-03 02:21:58.802 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:21:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:21:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:21:59.655 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:21:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:21:59.656 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:21:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:21:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:21:59 compute-0 podman[158098]: time="2025-12-03T02:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec 03 02:22:00 compute-0 ceph-mon[192821]: pgmap v2033: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:22:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:22:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:22:02 compute-0 nova_compute[351485]: 2025-12-03 02:22:02.095 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:02 compute-0 ceph-mon[192821]: pgmap v2034: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:03 compute-0 nova_compute[351485]: 2025-12-03 02:22:03.808 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:04 compute-0 ceph-mon[192821]: pgmap v2035: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:06 compute-0 ceph-mon[192821]: pgmap v2036: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:07 compute-0 nova_compute[351485]: 2025-12-03 02:22:07.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:07 compute-0 ceph-mon[192821]: pgmap v2037: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:08 compute-0 nova_compute[351485]: 2025-12-03 02:22:08.815 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:10 compute-0 ceph-mon[192821]: pgmap v2038: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.102 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:12 compute-0 ceph-mon[192821]: pgmap v2039: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.386 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.387 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.417 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 03 02:22:12 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:22:12 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.532 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.534 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.550 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.551 351492 INFO nova.compute.claims [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Claim successful on node compute-0.ctlplane.example.com
Dec 03 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.729 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:22:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490501508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:22:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.263 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.276 351492 DEBUG nova.compute.provider_tree [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.298 351492 DEBUG nova.scheduler.client.report [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.323 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.324 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 03 02:22:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1490501508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.381 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.382 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 03 02:22:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.409 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.433 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.556 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.558 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.559 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Creating image(s)
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.609 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.665 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.725 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.735 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.768 351492 DEBUG nova.policy [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63f39ac2863946b8b817457e689ff933', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.819 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.832 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.833 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.834 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.834 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.874 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.883 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.340 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:22:14 compute-0 ceph-mon[192821]: pgmap v2040: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.525 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] resizing rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.621 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.792 351492 DEBUG nova.objects.instance [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'migration_context' on Instance uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.810 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.811 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Ensure instance console log exists: /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.811 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.812 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.812 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 164 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 114 KiB/s wr, 9 op/s
Dec 03 02:22:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:15.782 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:22:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:15.783 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:22:15 compute-0 nova_compute[351485]: 2025-12-03 02:22:15.787 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:15 compute-0 nova_compute[351485]: 2025-12-03 02:22:15.922 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Successfully created port: 94fdb5b9-66bf-4e81-b411-064b08e4c71c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 03 02:22:16 compute-0 ceph-mon[192821]: pgmap v2041: 321 pgs: 321 active+clean; 164 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 114 KiB/s wr, 9 op/s
Dec 03 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.106 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:17 compute-0 sshd-session[456305]: Received disconnect from 154.113.10.113 port 33208:11: Bye Bye [preauth]
Dec 03 02:22:17 compute-0 sshd-session[456305]: Disconnected from authenticating user root 154.113.10.113 port 33208 [preauth]
Dec 03 02:22:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 191 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 961 KiB/s wr, 24 op/s
Dec 03 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.804 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Successfully updated port: 94fdb5b9-66bf-4e81-b411-064b08e4c71c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 03 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.822 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.822 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.822 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 03 02:22:17 compute-0 podman[456308]: 2025-12-03 02:22:17.876331065 +0000 UTC m=+0.114355151 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 02:22:17 compute-0 podman[456307]: 2025-12-03 02:22:17.88928441 +0000 UTC m=+0.135760155 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:22:17 compute-0 podman[456309]: 2025-12-03 02:22:17.892383058 +0000 UTC m=+0.117553681 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.015 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 03 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.132 351492 DEBUG nova.compute.manager [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-changed-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.133 351492 DEBUG nova.compute.manager [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Refreshing instance network info cache due to event network-changed-94fdb5b9-66bf-4e81-b411-064b08e4c71c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 03 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.134 351492 DEBUG oslo_concurrency.lockutils [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:18 compute-0 ceph-mon[192821]: pgmap v2042: 321 pgs: 321 active+clean; 191 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 961 KiB/s wr, 24 op/s
Dec 03 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.824 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Dec 03 02:22:19 compute-0 sudo[456366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:19 compute-0 sudo[456366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:19 compute-0 sudo[456366]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:19 compute-0 sudo[456391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:22:19 compute-0 sudo[456391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:19 compute-0 sudo[456391]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:20 compute-0 sudo[456416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:20 compute-0 sudo[456416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:20 compute-0 sudo[456416]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.117 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.144 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.146 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance network_info: |[{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.148 351492 DEBUG oslo_concurrency.lockutils [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.149 351492 DEBUG nova.network.neutron [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Refreshing network info cache for port 94fdb5b9-66bf-4e81-b411-064b08e4c71c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.156 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start _get_guest_xml network_info=[{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8876482c-db67-48c0-9203-60685152fc9d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.178 351492 WARNING nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:22:20 compute-0 sudo[456441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:22:20 compute-0 sudo[456441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.190 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.191 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.208 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.210 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.211 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.212 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.213 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.214 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.215 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.216 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.216 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.217 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.218 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.218 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.219 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.221 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.226 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:20 compute-0 ceph-mon[192821]: pgmap v2043: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Dec 03 02:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:22:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698277242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.797 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.850 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.864 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:20 compute-0 sudo[456441]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:21 compute-0 sudo[456537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:21 compute-0 sudo[456537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:21 compute-0 sudo[456537]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:21 compute-0 sudo[456581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:22:21 compute-0 sudo[456581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:21 compute-0 sudo[456581]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 03 02:22:21 compute-0 sudo[456606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:21 compute-0 sudo[456606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:21 compute-0 sudo[456606]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 03 02:22:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292561624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.400 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.403 351492 DEBUG nova.virt.libvirt.vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',id=15,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-xvixyek3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:22:13Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=4fb8fc07-d7b7-4be8-94da-155b040faf32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.404 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.405 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.406 351492 DEBUG nova.objects.instance [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:22:21 compute-0 sudo[456631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- inventory --format=json-pretty --filter-for-batch
Dec 03 02:22:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/698277242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:22:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4292561624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 03 02:22:21 compute-0 sudo[456631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.424 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] End _get_guest_xml xml=<domain type="kvm">
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <uuid>4fb8fc07-d7b7-4be8-94da-155b040faf32</uuid>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <name>instance-0000000f</name>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <memory>131072</memory>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <vcpu>1</vcpu>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <metadata>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <nova:name>te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q</nova:name>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <nova:creationTime>2025-12-03 02:22:20</nova:creationTime>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <nova:flavor name="m1.nano">
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:memory>128</nova:memory>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:disk>1</nova:disk>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:swap>0</nova:swap>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:ephemeral>0</nova:ephemeral>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:vcpus>1</nova:vcpus>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       </nova:flavor>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <nova:owner>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:user uuid="8f61f44789494541b7c101b0fdab52f0">tempest-PrometheusGabbiTest-1008659157-project-member</nova:user>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:project uuid="63f39ac2863946b8b817457e689ff933">tempest-PrometheusGabbiTest-1008659157</nova:project>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       </nova:owner>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <nova:root type="image" uuid="8876482c-db67-48c0-9203-60685152fc9d"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <nova:ports>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <nova:port uuid="94fdb5b9-66bf-4e81-b411-064b08e4c71c">
Dec 03 02:22:21 compute-0 nova_compute[351485]:           <nova:ip type="fixed" address="10.100.1.46" ipVersion="4"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:         </nova:port>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       </nova:ports>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </nova:instance>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   </metadata>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <sysinfo type="smbios">
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <system>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <entry name="manufacturer">RDO</entry>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <entry name="product">OpenStack Compute</entry>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <entry name="serial">4fb8fc07-d7b7-4be8-94da-155b040faf32</entry>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <entry name="uuid">4fb8fc07-d7b7-4be8-94da-155b040faf32</entry>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <entry name="family">Virtual Machine</entry>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </system>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   </sysinfo>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <os>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <boot dev="hd"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <smbios mode="sysinfo"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   </os>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <features>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <acpi/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <apic/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <vmcoreinfo/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   </features>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <clock offset="utc">
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <timer name="pit" tickpolicy="delay"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <timer name="hpet" present="no"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   </clock>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <cpu mode="host-model" match="exact">
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <topology sockets="1" cores="1" threads="1"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   </cpu>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   <devices>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <disk type="network" device="disk">
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/4fb8fc07-d7b7-4be8-94da-155b040faf32_disk">
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       </source>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <target dev="vda" bus="virtio"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <disk type="network" device="cdrom">
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <driver type="raw" cache="none"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <source protocol="rbd" name="vms/4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config">
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <host name="192.168.122.100" port="6789"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       </source>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <auth username="openstack">
Dec 03 02:22:21 compute-0 nova_compute[351485]:         <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       </auth>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <target dev="sda" bus="sata"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </disk>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <interface type="ethernet">
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <mac address="fa:16:3e:3f:0c:ae"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <driver name="vhost" rx_queue_size="512"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <mtu size="1442"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <target dev="tap94fdb5b9-66"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </interface>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <serial type="pty">
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <log file="/var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/console.log" append="off"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </serial>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <video>
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <model type="virtio"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </video>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <input type="tablet" bus="usb"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <rng model="virtio">
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <backend model="random">/dev/urandom</backend>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </rng>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="pci" model="pcie-root-port"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <controller type="usb" index="0"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     <memballoon model="virtio">
Dec 03 02:22:21 compute-0 nova_compute[351485]:       <stats period="10"/>
Dec 03 02:22:21 compute-0 nova_compute[351485]:     </memballoon>
Dec 03 02:22:21 compute-0 nova_compute[351485]:   </devices>
Dec 03 02:22:21 compute-0 nova_compute[351485]: </domain>
Dec 03 02:22:21 compute-0 nova_compute[351485]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
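[editor's note] The XML dump above is the guest definition Nova's libvirt driver logs from _get_guest_xml before defining the domain. A minimal sketch of inspecting such a dump with the standard library, assuming the XML has been saved to guest.xml (a hypothetical path):

```python
# Sketch: inspect a Nova-generated libvirt domain XML dump.
# Assumes the XML above was saved to "guest.xml" (hypothetical path).
import xml.etree.ElementTree as ET

root = ET.parse("guest.xml").getroot()

# List every device element with its type and model attributes, e.g.
# the virtio interface and the 24 pcie-root-port controllers above.
for dev in root.find("devices"):
    model = dev.find("model")
    print(dev.tag,
          dev.get("type") or "",
          model.get("type") if model is not None else "")
```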
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.426 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Preparing to wait for external event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.426 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.427 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.427 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
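[editor's note] The three lockutils lines above are the acquire/wait/release cycle around the per-instance "<uuid>-events" lock. A minimal sketch of the same oslo.concurrency primitive, with the lock name copied from the log and a trivial body standing in for Nova's event bookkeeping:

```python
# Sketch of the oslo.concurrency lock pattern logged above; lockutils
# emits the same "Acquiring"/"acquired"/"released" debug lines.
from oslo_concurrency import lockutils

@lockutils.synchronized("4fb8fc07-d7b7-4be8-94da-155b040faf32-events")
def _create_or_get_event():
    # Runs with the named in-process lock held, like Nova's
    # InstanceEvents.prepare_for_instance_event helper.
    return {}

_create_or_get_event()
```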
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.428 351492 DEBUG nova.virt.libvirt.vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',id=15,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-xvixyek3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:22:13Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=4fb8fc07-d7b7-4be8-94da-155b040faf32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.428 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.429 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.430 351492 DEBUG os_vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.431 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.432 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.432 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.436 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.436 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94fdb5b9-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.437 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94fdb5b9-66, col_values=(('external_ids', {'iface-id': '94fdb5b9-66bf-4e81-b411-064b08e4c71c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:0c:ae', 'vm-uuid': '4fb8fc07-d7b7-4be8-94da-155b040faf32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.440 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
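[editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand entries are ovsdbapp transactions against the local OVSDB. A sketch of issuing the same commands through ovsdbapp's Open vSwitch API; the socket path is an assumption, and the external_ids here are a subset of those logged:

```python
# Sketch: the ovsdbapp transaction mirrored from the log above.
# The OVSDB socket path is an assumption; adjust for your host.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tap94fdb5b9-66", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap94fdb5b9-66",
        ("external_ids",
         {"iface-id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c",
          "attached-mac": "fa:16:3e:3f:0c:ae"})))
```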
Dec 03 02:22:21 compute-0 NetworkManager[48912]: <info>  [1764728541.4411] manager: (tap94fdb5b9-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.443 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.450 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.451 351492 INFO os_vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66')
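[editor's note] "Successfully plugged vif" is the end of the os_vif plug path, whose public entry points are initialize() and plug(). A sketch under the assumption that hand-built objects suffice; field values are copied from the logged VIFOpenVSwitch, but Nova derives these from the Neutron port rather than constructing them like this:

```python
# Sketch of the os_vif call chain behind "Successfully plugged vif".
# Hand-building the objects is illustrative only.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the 'ovs' plugin via stevedore

net = network.Network(id="a7615b73-b987-4b91-b12c-2d7488085657",
                      bridge="br-int")
ovs_vif = vif.VIFOpenVSwitch(
    id="94fdb5b9-66bf-4e81-b411-064b08e4c71c",
    address="fa:16:3e:3f:0c:ae",
    vif_name="tap94fdb5b9-66",
    bridge_name="br-int",
    network=net)
inst = instance_info.InstanceInfo(
    uuid="4fb8fc07-d7b7-4be8-94da-155b040faf32",
    name="instance-0000000f")

# Issues the AddPortCommand/DbSetCommand transaction logged earlier.
os_vif.plug(ovs_vif, inst)
```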
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.599 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.600 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.600 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No VIF found with MAC fa:16:3e:3f:0c:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.601 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Using config drive
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.655 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.689 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.690 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.690 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.690 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.691 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:21.787 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:22:21 compute-0 podman[456734]: 2025-12-03 02:22:21.949779346 +0000 UTC m=+0.073561199 container create 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:21.911131794 +0000 UTC m=+0.034913677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:22 compute-0 systemd[1]: Started libpod-conmon-3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6.scope.
Dec 03 02:22:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.085460008 +0000 UTC m=+0.209241941 container init 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.099269438 +0000 UTC m=+0.223051321 container start 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.107655215 +0000 UTC m=+0.231437148 container attach 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:22:22 compute-0 festive_yonath[456752]: 167 167
Dec 03 02:22:22 compute-0 systemd[1]: libpod-3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6.scope: Deactivated successfully.
Dec 03 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.110154365 +0000 UTC m=+0.233936248 container died 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.145 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Creating config drive at /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.156 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dz43iat execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8af11b101b1fc17136d12595b7f914b9cd8d6b235134db42374acf87d6bb8585-merged.mount: Deactivated successfully.
Dec 03 02:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:22:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729463512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.193790867 +0000 UTC m=+0.317572720 container remove 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
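[editor's note] The podman create/start/attach/died/remove burst around the ceph image is the signature of a short-lived `podman run --rm` (here the container printed "167 167", the ceph uid/gid). A sketch of reproducing such a one-shot container from Python; only the image digest is taken from the log, and the stat command is a guess at what cephadm ran:

```python
# Sketch: one-shot ceph container like the lifecycle logged above.
# The stat command is an assumption; the image digest is from the log.
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
out = subprocess.run(
    ["podman", "run", "--rm", image,
     "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # e.g. "167 167"
```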
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.212 351492 DEBUG nova.network.neutron [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated VIF entry in instance network info cache for port 94fdb5b9-66bf-4e81-b411-064b08e4c71c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.213 351492 DEBUG nova.network.neutron [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:22:22 compute-0 systemd[1]: libpod-conmon-3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6.scope: Deactivated successfully.
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.234 351492 DEBUG oslo_concurrency.lockutils [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.250 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
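[editor's note] The resource tracker shells out to `ceph df --format=json` (0.559s above) to size the RBD-backed storage. A sketch of the same call plus pulling per-pool usage; the pool name "vms" matches the rbd import later in the log, and the JSON layout is standard ceph df output:

```python
# Sketch: run the same "ceph df" the resource tracker runs and read
# per-pool usage from its JSON output.
import json
import subprocess

raw = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True).stdout

for pool in json.loads(raw)["pools"]:
    if pool["name"] == "vms":
        stats = pool["stats"]
        print(stats["bytes_used"], stats["max_avail"])
```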
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.318 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dz43iat" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.380 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.398 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.413195523 +0000 UTC m=+0.069241946 container create eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:22:22 compute-0 ceph-mon[192821]: pgmap v2044: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 03 02:22:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1729463512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.450 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.451 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.459 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.459 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.380882961 +0000 UTC m=+0.036929414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:22 compute-0 systemd[1]: Started libpod-conmon-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope.
Dec 03 02:22:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.566813952 +0000 UTC m=+0.222860445 container init eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.593269909 +0000 UTC m=+0.249316322 container start eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.598301321 +0000 UTC m=+0.254347814 container attach eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.760 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.761 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deleting local config drive /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config because it was imported into RBD.
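[editor's note] The lines above trace Nova's config-drive path on RBD-backed storage: build an ISO9660 image with mkisofs, import it into the "vms" pool as <uuid>_disk.config, then delete the local copy. A sketch of the same sequence; commands and paths are copied from the journal, except the staging directory (the log used a tempdir, /tmp/tmp9dz43iat):

```python
# Sketch of the config-drive build/import/cleanup sequence above.
import os
import subprocess

inst = "4fb8fc07-d7b7-4be8-94da-155b040faf32"
iso = f"/var/lib/nova/instances/{inst}/disk.config"

# 1. Build the ISO from the staged metadata directory (hypothetical
#    path; Nova used a fresh tempdir).
subprocess.run(["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                "/tmp/configdrive-staging"], check=True)

# 2. Import into the Ceph "vms" pool as <uuid>_disk.config.
subprocess.run(["rbd", "import", "--pool", "vms", iso,
                f"{inst}_disk.config", "--image-format=2",
                "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
               check=True)

# 3. Remove the local copy once it lives in RBD.
os.unlink(iso)
```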
Dec 03 02:22:22 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 03 02:22:22 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 03 02:22:22 compute-0 kernel: tap94fdb5b9-66: entered promiscuous mode
Dec 03 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00191|binding|INFO|Claiming lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c for this chassis.
Dec 03 02:22:22 compute-0 NetworkManager[48912]: <info>  [1764728542.8892] manager: (tap94fdb5b9-66): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.891 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00192|binding|INFO|94fdb5b9-66bf-4e81-b411-064b08e4c71c: Claiming fa:16:3e:3f:0c:ae 10.100.1.46
Dec 03 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.914 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0c:ae 10.100.1.46'], port_security=['fa:16:3e:3f:0c:ae 10.100.1.46'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.46/16', 'neutron:device_id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=94fdb5b9-66bf-4e81-b411-064b08e4c71c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.915 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 94fdb5b9-66bf-4e81-b411-064b08e4c71c in datapath a7615b73-b987-4b91-b12c-2d7488085657 bound to our chassis
Dec 03 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.918 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7615b73-b987-4b91-b12c-2d7488085657
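[editor's note] The metadata agent reacted to the Port_Binding row change through an ovsdbapp row event (the "Matched UPDATE: PortBindingUpdatedEvent" line above). A minimal sketch of that event-matching pattern; the class body is simplified, and the real agent's matching logic also inspects the chassis column in more detail:

```python
# Sketch of the ovsdbapp row-event pattern behind
# "Matched UPDATE: PortBindingUpdatedEvent(...)" above. Simplified.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def match_fn(self, event, row, old):
        # Fire only when the port just got bound (chassis now set).
        return bool(row.chassis) and not getattr(old, "chassis", None)

    def run(self, event, row, old):
        print(f"Port {row.logical_port} bound to our chassis")
```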
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00193|binding|INFO|Setting lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c ovn-installed in OVS
Dec 03 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00194|binding|INFO|Setting lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c up in Southbound
Dec 03 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.928 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:22 compute-0 systemd-machined[138558]: New machine qemu-16-instance-0000000f.
Dec 03 02:22:22 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec 03 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.946 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[93b1f415-013f-4d2f-b6fc-a68f4479cc0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:22:22 compute-0 systemd-udevd[456869]: Network interface NamePolicy= disabled on kernel command line.
Dec 03 02:22:22 compute-0 NetworkManager[48912]: <info>  [1764728542.9737] device (tap94fdb5b9-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 03 02:22:22 compute-0 NetworkManager[48912]: <info>  [1764728542.9744] device (tap94fdb5b9-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 03 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.984 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[61d0e853-0882-4592-9d1f-b885f7acbab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.988 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ba185987-077d-4a14-b424-0fa37ec93e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:22:22 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.014 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c5ec37-b77c-419b-a4cb-aad635780709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:22:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.033 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[99489513-c758-41c6-b955-88971cd22de6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 32339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 456899, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.046 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e76d2a00-b4a6-4fda-9e2c-1e471a10d7b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719227, 'tstamp': 719227}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 456901, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719234, 'tstamp': 719234}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 456901, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
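[editor's note] The two privsep replies above are netlink dumps (RTM_NEWLINK and RTM_NEWADDR) fetched inside the ovnmeta- network namespace. A sketch of producing the same structures with pyroute2, the library those replies come from; the namespace name is copied from the log, and running this requires root and an existing namespace:

```python
# Sketch: the netlink dumps from the privsep replies above, via
# pyroute2 inside the ovnmeta- namespace (requires root).
from pyroute2 import NetNS

ns = "ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657"
with NetNS(ns) as ip:
    for link in ip.get_links():
        # Same attrs as the RTM_NEWLINK payload: IFLA_IFNAME, MTU, ...
        print(link.get_attr("IFLA_IFNAME"), link.get_attr("IFLA_MTU"))
    for addr in ip.get_addr():
        # Matches the RTM_NEWADDR entries (169.254.169.254, 10.100.0.2).
        print(addr.get_attr("IFA_ADDRESS"))
```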
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.048 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.050 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.051 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7615b73-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.052 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.053 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7615b73-b0, col_values=(('external_ids', {'iface-id': '50c454e1-4a4b-4aad-b47b-dafc7b079018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.053 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.062 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.063 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3777MB free_disk=59.92206954956055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.063 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.063 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.302 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.302 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.313 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.314 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.329 351492 DEBUG nova.compute.manager [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.330 351492 DEBUG oslo_concurrency.lockutils [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.330 351492 DEBUG oslo_concurrency.lockutils [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.331 351492 DEBUG oslo_concurrency.lockutils [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.331 351492 DEBUG nova.compute.manager [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Processing event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 03 02:22:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.495 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.816 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728543.8038168, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.818 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Started (Lifecycle Event)
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.830 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.832 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.842 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.847 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.863 351492 INFO nova.virt.libvirt.driver [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance spawned successfully.
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.864 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.871 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.894 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.895 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728543.8040407, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.896 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Paused (Lifecycle Event)
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.905 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.906 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.907 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.907 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.908 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.909 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.915 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.922 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728543.8426242, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.922 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Resumed (Lifecycle Event)
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.951 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.959 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.969 351492 INFO nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 10.41 seconds to spawn the instance on the hypervisor.
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.970 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.982 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] During sync_power_state the instance has a pending task (spawning). Skip.
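[annotation] The numeric codes in the two "Synchronizing instance power state" lines follow nova.compute.power_state; the sync is skipped whenever a task is still pending (illustrative mapping):

    # nova.compute.power_state constants referenced by the log lines above
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    # DB power_state 0 (NOSTATE) vs VM power_state 1 (RUNNING), but
    # task_state == "spawning", so sync_power_state skips the instance.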
Dec 03 02:22:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:22:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224194113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.013 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
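[annotation] This is the disk-usage poll that nova's RBD image backend runs during each resource-update cycle. A rough Python equivalent of the call and the fields typically read from it (the JSON key names here are assumptions to verify against your Ceph release, not taken from this log):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]           # assumed top-level key
    total_gb = stats["total_bytes"] / 1024**3  # ~60 GiB on this cluster
    avail_gb = stats["total_avail_bytes"] / 1024**3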
Dec 03 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.022 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.036 351492 INFO nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 11.55 seconds to build instance.
Dec 03 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.039 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
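[annotation] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, which is how the 8 physical vCPUs behind "Total usable vcpus: 8" still allow 4x oversubscription (quick check):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2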
Dec 03 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.054 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.066 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.066 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:24 compute-0 ceph-mon[192821]: pgmap v2045: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 03 02:22:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4224194113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:22:24 compute-0 podman[458778]: 2025-12-03 02:22:24.844891128 +0000 UTC m=+0.101160048 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]: [
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:     {
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "available": false,
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "ceph_device": false,
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "lsm_data": {},
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "lvs": [],
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "path": "/dev/sr0",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "rejected_reasons": [
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "Has a FileSystem",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "Insufficient space (<5GB)"
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         ],
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         "sys_api": {
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "actuators": null,
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "device_nodes": "sr0",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "devname": "sr0",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "human_readable_size": "482.00 KB",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "id_bus": "ata",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "model": "QEMU DVD-ROM",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "nr_requests": "2",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "parent": "/dev/sr0",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "partitions": {},
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "path": "/dev/sr0",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "removable": "1",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "rev": "2.5+",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "ro": "0",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "rotational": "1",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "sas_address": "",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "sas_device_handle": "",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "scheduler_mode": "mq-deadline",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "sectors": 0,
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "sectorsize": "2048",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "size": 493568.0,
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "support_discard": "2048",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "type": "disk",
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:             "vendor": "QEMU"
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:         }
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]:     }
Dec 03 02:22:25 compute-0 relaxed_ritchie[456819]: ]
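[annotation] The JSON array above is cephadm's device inventory for this host. A device is usable as an OSD only when "available" is true and "rejected_reasons" is empty; the attached DVD drive fails on both counts (filtering sketch, with the array trimmed to the fields used):

    import json

    # stand-in for the array logged above, trimmed to the fields used here
    inventory_json = '''[{"available": false, "path": "/dev/sr0",
        "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]}]'''
    devices = json.loads(inventory_json)
    usable = [d["path"] for d in devices
              if d["available"] and not d["rejected_reasons"]]
    print(usable)  # [] -- /dev/sr0 is rejected on both checks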
Dec 03 02:22:25 compute-0 systemd[1]: libpod-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope: Deactivated successfully.
Dec 03 02:22:25 compute-0 systemd[1]: libpod-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope: Consumed 2.429s CPU time.
Dec 03 02:22:25 compute-0 podman[456779]: 2025-12-03 02:22:25.093127689 +0000 UTC m=+2.749174132 container died eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 02:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3-merged.mount: Deactivated successfully.
Dec 03 02:22:25 compute-0 podman[456779]: 2025-12-03 02:22:25.186473035 +0000 UTC m=+2.842519458 container remove eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:22:25 compute-0 systemd[1]: libpod-conmon-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope: Deactivated successfully.
Dec 03 02:22:25 compute-0 sudo[456631]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 99640c76-3d1d-42b7-8a84-9c00bf393bd8 does not exist
Dec 03 02:22:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 09986305-7b01-4753-a627-6e7b1b021551 does not exist
Dec 03 02:22:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 03 02:22:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 939dc9d4-9492-48ca-91c5-7dee0d152285 does not exist
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:22:25 compute-0 sudo[459484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:25 compute-0 sudo[459484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:25 compute-0 sudo[459484]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:25 compute-0 sudo[459509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:22:25 compute-0 sudo[459509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:25 compute-0 sudo[459509]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:25 compute-0 sudo[459534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:25 compute-0 sudo[459534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:25 compute-0 sudo[459534]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:25 compute-0 sudo[459559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:22:25 compute-0 sudo[459559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.210 351492 DEBUG nova.compute.manager [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.211 351492 DEBUG oslo_concurrency.lockutils [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 DEBUG oslo_concurrency.lockutils [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 DEBUG oslo_concurrency.lockutils [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 DEBUG nova.compute.manager [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] No waiting events found dispatching network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 WARNING nova.compute.manager [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received unexpected event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c for instance with vm_state active and task_state None.
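[annotation] A rough sketch of why this WARNING is harmless here: the duplicate network-vif-plugged arrived after _locked_do_build_and_run_instance released its lock (held 11.667s above), so no waiter is registered for the event and it is simply dropped (hypothetical simplification of InstanceEvents.pop_instance_event, not nova's actual code):

    waiters = {}  # instance_uuid -> {event_name: waiter}

    def pop_instance_event(uuid, name):
        waiter = waiters.get(uuid, {}).pop(name, None)
        if waiter is None:
            # logged as "No waiting events found dispatching <name>",
            # then warned about as an unexpected event
            return None
        return waiter

    # a late duplicate event finds no waiter:
    pop_instance_event("4fb8fc07-d7b7-4be8-94da-155b040faf32",
                       "network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c")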
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:26 compute-0 ceph-mon[192821]: pgmap v2046: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:22:26 compute-0 podman[459619]: 2025-12-03 02:22:26.292055989 +0000 UTC m=+0.055672403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.439 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:26 compute-0 podman[459619]: 2025-12-03 02:22:26.903161827 +0000 UTC m=+0.666778281 container create e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 02:22:26 compute-0 systemd[1]: Started libpod-conmon-e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f.scope.
Dec 03 02:22:27 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.052124794 +0000 UTC m=+0.815741288 container init e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.067284872 +0000 UTC m=+0.830901286 container start e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:22:27 compute-0 charming_mirzakhani[459635]: 167 167
Dec 03 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.074321771 +0000 UTC m=+0.837938265 container attach e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:22:27 compute-0 systemd[1]: libpod-e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f.scope: Deactivated successfully.
Dec 03 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.078318294 +0000 UTC m=+0.841934778 container died e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 03 02:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd10248e8c6d91c4866ce4de440da3741003989a81f156c7f5cb2876b15602e5-merged.mount: Deactivated successfully.
Dec 03 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.138693529 +0000 UTC m=+0.902309933 container remove e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:22:27 compute-0 systemd[1]: libpod-conmon-e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f.scope: Deactivated successfully.
Dec 03 02:22:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 1.7 MiB/s wr, 40 op/s
Dec 03 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.415755684 +0000 UTC m=+0.073084995 container create b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.382641718 +0000 UTC m=+0.039971059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:27 compute-0 systemd[1]: Started libpod-conmon-b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845.scope.
Dec 03 02:22:27 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.539459817 +0000 UTC m=+0.196789158 container init b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.548293367 +0000 UTC m=+0.205622658 container start b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.556627702 +0000 UTC m=+0.213957073 container attach b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.781 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.782 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.788 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.789 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:22:28 compute-0 ceph-mon[192821]: pgmap v2047: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 1.7 MiB/s wr, 40 op/s
Dec 03 02:22:28 compute-0 podman[459680]: 2025-12-03 02:22:28.36599839 +0000 UTC m=+0.128166450 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, config_id=edpm, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350)
Dec 03 02:22:28 compute-0 podman[459681]: 2025-12-03 02:22:28.374512441 +0000 UTC m=+0.121674537 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:28 compute-0 podman[459688]: 2025-12-03 02:22:28.399873817 +0000 UTC m=+0.143197105 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:22:28 compute-0 podman[459683]: 2025-12-03 02:22:28.406411442 +0000 UTC m=+0.160042711 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release=1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, version=9.4, io.buildah.version=1.29.0, architecture=x86_64, distribution-scope=public)
Dec 03 02:22:28 compute-0 podman[459679]: 2025-12-03 02:22:28.40884363 +0000 UTC m=+0.162155620 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:22:28
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.control']
Dec 03 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
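[editor's note] "prepared 0/10 changes" means the upmap balancer walked the listed pools and found nothing to move: with the misplaced-object ceiling at 0.050000 it may prepare up to 10 pg-upmap changes per pass, and an already even cluster yields zero. A hedged sketch for checking the same state interactively, assuming a working `ceph` CLI and an admin keyring (exact JSON keys can vary by release):

import json
import subprocess

out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
status = json.loads(out)
# Typical fields include "active", "mode" and any pending "plans"
print(status.get("mode"), "active:", status.get("active"))
print("plans:", status.get("plans"))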
Dec 03 02:22:28 compute-0 determined_hellman[459673]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:22:28 compute-0 determined_hellman[459673]: --> relative data size: 1.0
Dec 03 02:22:28 compute-0 determined_hellman[459673]: --> All data devices are unavailable
Dec 03 02:22:28 compute-0 systemd[1]: libpod-b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845.scope: Deactivated successfully.
Dec 03 02:22:28 compute-0 podman[459657]: 2025-12-03 02:22:28.668829393 +0000 UTC m=+1.326158684 container died b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0-merged.mount: Deactivated successfully.
Dec 03 02:22:28 compute-0 podman[459657]: 2025-12-03 02:22:28.752919098 +0000 UTC m=+1.410248389 container remove b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:22:28 compute-0 systemd[1]: libpod-conmon-b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845.scope: Deactivated successfully.
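[editor's note] The create -> init -> start -> attach -> died -> remove sequence around determined_hellman (and cranky_mccarthy and gracious_raman below) is cephadm running one-shot ceph-volume probes in disposable containers. A sketch that reconstructs such lifecycles from this journal; the regexes are tailored to the podman event lines seen here, and the script reads the log on stdin:

import collections
import re
import sys

EVENT = re.compile(r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})")
NAME = re.compile(r"name=([^,)]+)")

timelines = collections.defaultdict(list)
for line in sys.stdin:
    if " podman[" not in line:
        continue
    m = EVENT.search(line)
    if m:
        nm = NAME.search(line)
        key = (m.group("cid")[:12], nm.group(1) if nm else "?")
        timelines[key].append(m.group("event"))

for (cid, name), events in sorted(timelines.items()):
    print(f"{cid} {name}: {' -> '.join(events)}")

Fed this section, it prints, for example: aae43f3dc4d0 cranky_mccarthy: create -> init -> start -> attach -> died -> remove.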
Dec 03 02:22:28 compute-0 sudo[459559]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:28 compute-0 nova_compute[351485]: 2025-12-03 02:22:28.831 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:28 compute-0 sudo[459817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:28 compute-0 sudo[459817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:28 compute-0 sudo[459817]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:29 compute-0 sudo[459842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:22:29 compute-0 sudo[459842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:29 compute-0 sudo[459842]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:29 compute-0 sudo[459867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:29 compute-0 sudo[459867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:29 compute-0 sudo[459867]: pam_unix(sudo:session): session closed for user root
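[editor's note] These paired /bin/true and /bin/which python3 sudo sessions are cephadm's per-host probes: confirm passwordless root and locate a python3 interpreter before shipping the real command. The same checks can be mirrored locally as below; `-n` is added here so sudo fails instead of prompting, and cephadm itself runs these over an SSH session rather than locally:

import subprocess

def cephadm_style_probe() -> tuple:
    # Non-interactive sudo check, mirroring the /bin/true probe in the log
    can_sudo = subprocess.run(["sudo", "-n", "/bin/true"]).returncode == 0
    which = subprocess.run(["sudo", "-n", "/bin/which", "python3"],
                           capture_output=True, text=True)
    return can_sudo, which.stdout.strip()

print(cephadm_style_probe())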
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:22:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 867 KiB/s wr, 54 op/s
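[editor's note] The recurring pgmap DBG lines are the mgr's periodic cluster digest: pg states, capacity usage, and client throughput. A small parser for this exact line format; the trailing rd/wr/op section disappears on an idle cluster, hence the optional group:

import re

PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \w+) data, (?P<used>\S+ \w+) used, "
    r"(?P<avail>\S+ \w+) / (?P<total>\S+ \w+) avail"
    r"(?:; (?P<io>.*))?"
)

line = ("pgmap v2048: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, "
        "60 GiB / 60 GiB avail; 1.3 MiB/s rd, 867 KiB/s wr, 54 op/s")
print(PGMAP.search(line).groupdict())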
Dec 03 02:22:29 compute-0 sudo[459892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:22:29 compute-0 sudo[459892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:29 compute-0 podman[158098]: time="2025-12-03T02:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.833 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.861 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.862 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.862 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.879 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
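[editor's note] The burst of "Running periodic task ComputeManager._*" lines is oslo.service iterating nova-compute's registered periodic tasks on a single timer tick. A minimal, self-contained sketch of that mechanism; the oslo.service API is real, but the manager and task below are invented for illustration:

from oslo_config import cfg
from oslo_service import periodic_task

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    # run_immediately so a single run_periodic_tasks() call fires it
    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _poll_demo(self, context):
        print("periodic task ran")

mgr = DemoManager()
mgr.run_periodic_tasks(context=None)  # normally driven by a looping timer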
Dec 03 02:22:29 compute-0 podman[459953]: 2025-12-03 02:22:29.964186086 +0000 UTC m=+0.081208154 container create aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:29.929733013 +0000 UTC m=+0.046755151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:30 compute-0 systemd[1]: Started libpod-conmon-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope.
Dec 03 02:22:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.116992331 +0000 UTC m=+0.234014439 container init aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.133084295 +0000 UTC m=+0.250106363 container start aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.139397664 +0000 UTC m=+0.256419742 container attach aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:22:30 compute-0 cranky_mccarthy[459969]: 167 167
Dec 03 02:22:30 compute-0 systemd[1]: libpod-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope: Deactivated successfully.
Dec 03 02:22:30 compute-0 conmon[459969]: conmon aae43f3dc4d0f4d869b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope/container/memory.events
Dec 03 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.14918015 +0000 UTC m=+0.266202188 container died aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd26cf534e94ec7ed332ba72c0867304a3df0459b536edf29f498185aec5e81-merged.mount: Deactivated successfully.
Dec 03 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.200652714 +0000 UTC m=+0.317674742 container remove aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:22:30 compute-0 systemd[1]: libpod-conmon-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope: Deactivated successfully.
Dec 03 02:22:30 compute-0 ceph-mon[192821]: pgmap v2048: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 867 KiB/s wr, 54 op/s
Dec 03 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.463987491 +0000 UTC m=+0.074506006 container create eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.433969323 +0000 UTC m=+0.044487848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:30 compute-0 systemd[1]: Started libpod-conmon-eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677.scope.
Dec 03 02:22:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
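[editor's note] The "supports timestamps until 2038" kernel notices mean the XFS filesystems backing these overlay mounts were created without the bigtime feature; they are informational warnings, not errors. A quick way to check a given mount (requires xfsprogs; older releases omit the bigtime field from xfs_info output entirely):

import re
import subprocess
import sys

mount = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/containers"
info = subprocess.run(["xfs_info", mount],
                      capture_output=True, text=True, check=True).stdout
m = re.search(r"bigtime=(\d)", info)
if m is None:
    print("xfsprogs too old to report bigtime")
else:
    print("bigtime enabled" if m.group(1) == "1"
          else "timestamps capped at 2038")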
Dec 03 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.639910249 +0000 UTC m=+0.250428764 container init eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.660043918 +0000 UTC m=+0.270562453 container start eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.667692864 +0000 UTC m=+0.278211369 container attach eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:22:30 compute-0 nova_compute[351485]: 2025-12-03 02:22:30.873 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:30 compute-0 nova_compute[351485]: 2025-12-03 02:22:30.875 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Dec 03 02:22:31 compute-0 gracious_raman[460009]: {
Dec 03 02:22:31 compute-0 gracious_raman[460009]:     "0": [
Dec 03 02:22:31 compute-0 gracious_raman[460009]:         {
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "devices": [
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "/dev/loop3"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             ],
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_name": "ceph_lv0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_size": "21470642176",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "name": "ceph_lv0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "tags": {
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cluster_name": "ceph",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.crush_device_class": "",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.encrypted": "0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osd_id": "0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.type": "block",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.vdo": "0"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             },
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "type": "block",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "vg_name": "ceph_vg0"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:         }
Dec 03 02:22:31 compute-0 gracious_raman[460009]:     ],
Dec 03 02:22:31 compute-0 gracious_raman[460009]:     "1": [
Dec 03 02:22:31 compute-0 gracious_raman[460009]:         {
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "devices": [
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "/dev/loop4"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             ],
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_name": "ceph_lv1",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_size": "21470642176",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "name": "ceph_lv1",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "tags": {
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cluster_name": "ceph",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.crush_device_class": "",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.encrypted": "0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osd_id": "1",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.type": "block",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.vdo": "0"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             },
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "type": "block",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "vg_name": "ceph_vg1"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:         }
Dec 03 02:22:31 compute-0 gracious_raman[460009]:     ],
Dec 03 02:22:31 compute-0 gracious_raman[460009]:     "2": [
Dec 03 02:22:31 compute-0 gracious_raman[460009]:         {
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "devices": [
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "/dev/loop5"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             ],
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_name": "ceph_lv2",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_size": "21470642176",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "name": "ceph_lv2",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "tags": {
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.cluster_name": "ceph",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.crush_device_class": "",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.encrypted": "0",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osd_id": "2",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.type": "block",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:                 "ceph.vdo": "0"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             },
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "type": "block",
Dec 03 02:22:31 compute-0 gracious_raman[460009]:             "vg_name": "ceph_vg2"
Dec 03 02:22:31 compute-0 gracious_raman[460009]:         }
Dec 03 02:22:31 compute-0 gracious_raman[460009]:     ]
Dec 03 02:22:31 compute-0 gracious_raman[460009]: }
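[editor's note] The JSON emitted by gracious_raman is the output of the `ceph-volume lvm list --format json` run started at 02:22:29: a map of OSD id to its logical volumes, with the authoritative metadata carried in the LV tags. A sketch that condenses it to one line per OSD (feed it the JSON above on stdin):

import json
import sys

report = json.load(sys.stdin)
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")

For the data above, the first line printed is: osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c encrypted=0.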
Dec 03 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
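[editor's note] These exporter errors are expected on a compute node: openstack_network_exporter looks for ovn-northd and ovsdb-server control sockets that exist only where those daemons run, and the dpif-netdev calls require a userspace (DPDK) datapath, which this kernel-datapath deployment does not have. A quick local check for which control sockets actually exist; the socket directories below are common distribution defaults, so adjust them for your layout:

import glob

CANDIDATES = {
    "openvswitch (ovsdb-server, ovs-vswitchd)": "/var/run/openvswitch/*.ctl",
    "ovn (ovn-controller, ovn-northd)": "/var/run/ovn/*.ctl",
}
for component, pattern in CANDIDATES.items():
    hits = glob.glob(pattern)
    print(f"{component}: {hits or 'no control sockets'}")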
Dec 03 02:22:31 compute-0 nova_compute[351485]: 2025-12-03 02:22:31.441 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:31 compute-0 systemd[1]: libpod-eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677.scope: Deactivated successfully.
Dec 03 02:22:31 compute-0 podman[459993]: 2025-12-03 02:22:31.455805582 +0000 UTC m=+1.066324067 container died eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 02:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435-merged.mount: Deactivated successfully.
Dec 03 02:22:31 compute-0 podman[459993]: 2025-12-03 02:22:31.524648946 +0000 UTC m=+1.135167431 container remove eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:22:31 compute-0 systemd[1]: libpod-conmon-eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677.scope: Deactivated successfully.
Dec 03 02:22:31 compute-0 sudo[459892]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:31 compute-0 sudo[460029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:31 compute-0 sudo[460029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:31 compute-0 sudo[460029]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:31 compute-0 sudo[460054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:22:31 compute-0 sudo[460054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:31 compute-0 sudo[460054]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:31 compute-0 sudo[460079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:31 compute-0 sudo[460079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:31 compute-0 sudo[460079]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:32 compute-0 sudo[460104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:22:32 compute-0 sudo[460104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:32 compute-0 ceph-mon[192821]: pgmap v2049: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Dec 03 02:22:32 compute-0 nova_compute[351485]: 2025-12-03 02:22:32.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.666064932 +0000 UTC m=+0.083651244 container create e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:22:32 compute-0 systemd[1]: Started libpod-conmon-e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27.scope.
Dec 03 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.638150223 +0000 UTC m=+0.055736515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:32 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.80940119 +0000 UTC m=+0.226987512 container init e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.834045376 +0000 UTC m=+0.251631678 container start e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 02:22:32 compute-0 dreamy_wilbur[460179]: 167 167
Dec 03 02:22:32 compute-0 systemd[1]: libpod-e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27.scope: Deactivated successfully.
Dec 03 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.842800283 +0000 UTC m=+0.260386565 container attach e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.843426761 +0000 UTC m=+0.261013043 container died e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-69d74fce5f5f8ac84c1888ef089f27766c244bc02548f7c7475c88007fd463e0-merged.mount: Deactivated successfully.
Dec 03 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.906716158 +0000 UTC m=+0.324302450 container remove e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:22:32 compute-0 systemd[1]: libpod-conmon-e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27.scope: Deactivated successfully.
Dec 03 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.166383642 +0000 UTC m=+0.088001467 container create b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.139034059 +0000 UTC m=+0.060651924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:22:33 compute-0 systemd[1]: Started libpod-conmon-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope.
Dec 03 02:22:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 03 02:22:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.314035942 +0000 UTC m=+0.235653787 container init b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.338557304 +0000 UTC m=+0.260175129 container start b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.342954848 +0000 UTC m=+0.264572703 container attach b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 03 02:22:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:33 compute-0 nova_compute[351485]: 2025-12-03 02:22:33.843 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:34 compute-0 ceph-mon[192821]: pgmap v2050: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]: {
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "osd_id": 2,
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "type": "bluestore"
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:     },
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "osd_id": 1,
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "type": "bluestore"
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:     },
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "osd_id": 0,
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:         "type": "bluestore"
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]:     }
Dec 03 02:22:34 compute-0 wizardly_tesla[460218]: }
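[editor's note] wizardly_tesla carries the matching `ceph-volume raw list --format json` output (the command issued at 02:22:32), keyed by osd_uuid instead of OSD id. Cross-checking it against the earlier lvm list is a reasonable consistency test; a sketch assuming the two JSON documents have been saved as lvm.json and raw.json (hypothetical filenames):

import json

lvm = json.load(open("lvm.json"))  # output of: ceph-volume lvm list --format json
raw = json.load(open("raw.json"))  # output of: ceph-volume raw list --format json

# Index LVs by the ceph.osd_fsid tag, which equals the raw listing's osd_uuid key
by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
           for lvs in lvm.values() for lv in lvs}

for osd_uuid, entry in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
    lv = by_fsid.get(osd_uuid)
    ok = lv is not None and int(lv["tags"]["ceph.osd_id"]) == entry["osd_id"]
    print(f"osd.{entry['osd_id']} {entry['device']} ({entry['type']}): "
          f"{'consistent' if ok else 'MISMATCH'}")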
Dec 03 02:22:34 compute-0 systemd[1]: libpod-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope: Deactivated successfully.
Dec 03 02:22:34 compute-0 systemd[1]: libpod-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope: Consumed 1.177s CPU time.
Dec 03 02:22:34 compute-0 conmon[460218]: conmon b206e31a98ecee480b4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope/container/memory.events
Dec 03 02:22:34 compute-0 podman[460202]: 2025-12-03 02:22:34.532177463 +0000 UTC m=+1.453795328 container died b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 02:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897-merged.mount: Deactivated successfully.
Dec 03 02:22:34 compute-0 podman[460202]: 2025-12-03 02:22:34.667171666 +0000 UTC m=+1.588789511 container remove b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 02:22:34 compute-0 systemd[1]: libpod-conmon-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope: Deactivated successfully.
Dec 03 02:22:34 compute-0 sudo[460104]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:22:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:22:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 203c3763-a4e5-4937-bf19-b3ab19ef4ce0 does not exist
Dec 03 02:22:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b6c4a24e-3099-4ced-ba9f-2738a6b26ead does not exist
Dec 03 02:22:34 compute-0 sudo[460264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:22:34 compute-0 sudo[460264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:34 compute-0 sudo[460264]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:35 compute-0 sudo[460289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:22:35 compute-0 sudo[460289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:22:35 compute-0 sudo[460289]: pam_unix(sudo:session): session closed for user root
Dec 03 02:22:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 03 02:22:35 compute-0 nova_compute[351485]: 2025-12-03 02:22:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:35 compute-0 nova_compute[351485]: 2025-12-03 02:22:35.579 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:22:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:22:35 compute-0 ceph-mon[192821]: pgmap v2051: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 03 02:22:36 compute-0 nova_compute[351485]: 2025-12-03 02:22:36.443 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:36 compute-0 nova_compute[351485]: 2025-12-03 02:22:36.587 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:36 compute-0 nova_compute[351485]: 2025-12-03 02:22:36.587 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:22:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 64 op/s
Dec 03 02:22:38 compute-0 ceph-mon[192821]: pgmap v2052: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 64 op/s
Dec 03 02:22:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:38 compute-0 nova_compute[351485]: 2025-12-03 02:22:38.842 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011065419067851794 of space, bias 1.0, pg target 0.33196257203555385 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:22:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Dec 03 02:22:40 compute-0 ceph-mon[192821]: pgmap v2053: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Dec 03 02:22:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 677 KiB/s rd, 22 op/s
Dec 03 02:22:41 compute-0 nova_compute[351485]: 2025-12-03 02:22:41.446 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:42 compute-0 ceph-mon[192821]: pgmap v2054: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 677 KiB/s rd, 22 op/s
Dec 03 02:22:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:43 compute-0 nova_compute[351485]: 2025-12-03 02:22:43.845 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:44 compute-0 ceph-mon[192821]: pgmap v2055: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:46 compute-0 ceph-mon[192821]: pgmap v2056: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:46 compute-0 nova_compute[351485]: 2025-12-03 02:22:46.451 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:22:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516732987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:22:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516732987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:22:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3516732987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:22:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3516732987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:48 compute-0 ceph-mon[192821]: pgmap v2057: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:48 compute-0 nova_compute[351485]: 2025-12-03 02:22:48.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:48 compute-0 podman[460316]: 2025-12-03 02:22:48.87823455 +0000 UTC m=+0.114416022 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:22:48 compute-0 podman[460314]: 2025-12-03 02:22:48.882994414 +0000 UTC m=+0.123867529 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:22:48 compute-0 podman[460315]: 2025-12-03 02:22:48.918862637 +0000 UTC m=+0.156044148 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 03 02:22:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:50 compute-0 ceph-mon[192821]: pgmap v2058: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:51 compute-0 nova_compute[351485]: 2025-12-03 02:22:51.454 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:52 compute-0 ceph-mon[192821]: pgmap v2059: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:52 compute-0 ovn_controller[89134]: 2025-12-03T02:22:52Z|00195|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Dec 03 02:22:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:53 compute-0 nova_compute[351485]: 2025-12-03 02:22:53.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:54 compute-0 ceph-mon[192821]: pgmap v2060: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:55 compute-0 podman[460373]: 2025-12-03 02:22:55.899150302 +0000 UTC m=+0.142900467 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:22:56 compute-0 nova_compute[351485]: 2025-12-03 02:22:56.458 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:56 compute-0 ceph-mon[192821]: pgmap v2061: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:56 compute-0 nova_compute[351485]: 2025-12-03 02:22:56.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:22:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:22:58 compute-0 ceph-mon[192821]: pgmap v2062: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:58 compute-0 nova_compute[351485]: 2025-12-03 02:22:58.852 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:22:58 compute-0 podman[460395]: 2025-12-03 02:22:58.860558147 +0000 UTC m=+0.093505372 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:22:58 compute-0 podman[460393]: 2025-12-03 02:22:58.887435406 +0000 UTC m=+0.138684538 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:22:58 compute-0 podman[460396]: 2025-12-03 02:22:58.889479184 +0000 UTC m=+0.095410976 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, vcs-type=git, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:22:58 compute-0 podman[460407]: 2025-12-03 02:22:58.895048991 +0000 UTC m=+0.130101945 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 02:22:58 compute-0 podman[460394]: 2025-12-03 02:22:58.904329813 +0000 UTC m=+0.158643441 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 03 02:22:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:22:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:59.656 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:22:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:59.656 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:22:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:22:59 compute-0 podman[158098]: time="2025-12-03T02:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 03 02:23:00 compute-0 ceph-mon[192821]: pgmap v2063: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 221 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Dec 03 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:23:01 compute-0 ovn_controller[89134]: 2025-12-03T02:23:01Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:0c:ae 10.100.1.46
Dec 03 02:23:01 compute-0 ovn_controller[89134]: 2025-12-03T02:23:01Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:0c:ae 10.100.1.46
Dec 03 02:23:01 compute-0 nova_compute[351485]: 2025-12-03 02:23:01.464 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:02 compute-0 ceph-mon[192821]: pgmap v2064: 321 pgs: 321 active+clean; 221 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Dec 03 02:23:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 221 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Dec 03 02:23:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:03 compute-0 nova_compute[351485]: 2025-12-03 02:23:03.855 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:04 compute-0 ceph-mon[192821]: pgmap v2065: 321 pgs: 321 active+clean; 221 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Dec 03 02:23:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 235 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Dec 03 02:23:06 compute-0 nova_compute[351485]: 2025-12-03 02:23:06.467 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:06 compute-0 ceph-mon[192821]: pgmap v2066: 321 pgs: 321 active+clean; 235 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Dec 03 02:23:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:08 compute-0 ceph-mon[192821]: pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:23:08 compute-0 nova_compute[351485]: 2025-12-03 02:23:08.860 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:23:10 compute-0 ceph-mon[192821]: pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:23:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:23:11 compute-0 nova_compute[351485]: 2025-12-03 02:23:11.471 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:12 compute-0 ceph-mon[192821]: pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 03 02:23:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 402 KiB/s wr, 13 op/s
Dec 03 02:23:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:13 compute-0 nova_compute[351485]: 2025-12-03 02:23:13.864 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:14 compute-0 ceph-mon[192821]: pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 402 KiB/s wr, 13 op/s
Dec 03 02:23:14 compute-0 nova_compute[351485]: 2025-12-03 02:23:14.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 402 KiB/s wr, 13 op/s
Dec 03 02:23:16 compute-0 nova_compute[351485]: 2025-12-03 02:23:16.476 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:16 compute-0 ceph-mon[192821]: pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 402 KiB/s wr, 13 op/s
Dec 03 02:23:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 76 KiB/s wr, 8 op/s
Dec 03 02:23:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:18 compute-0 ceph-mon[192821]: pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 76 KiB/s wr, 8 op/s
Dec 03 02:23:18 compute-0 nova_compute[351485]: 2025-12-03 02:23:18.867 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.512 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.513 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.522 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 03 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.524 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 03 02:23:19 compute-0 podman[460495]: 2025-12-03 02:23:19.867077487 +0000 UTC m=+0.095106167 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:23:19 compute-0 podman[460493]: 2025-12-03 02:23:19.875179126 +0000 UTC m=+0.118168688 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 03 02:23:19 compute-0 podman[460494]: 2025-12-03 02:23:19.887394601 +0000 UTC m=+0.121606005 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
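The three podman health_status entries are routine healthcheck runs for podman_exporter, ovn_metadata_agent, and ceilometer_agent_compute, all healthy with a zero failing streak. The same state can be read back on the host; a sketch using podman inspect (container name taken from the entry above; the Health vs. Healthcheck field name varies across podman versions):

    import json
    import subprocess

    # Ask podman for the recorded health state of one container.
    out = subprocess.run(
        ['podman', 'inspect', 'ceilometer_agent_compute'],
        capture_output=True, text=True, check=True)

    state = json.loads(out.stdout)[0]['State']
    # Newer podman exposes State.Health; older releases used State.Healthcheck.
    health = state.get('Health') or state.get('Healthcheck')
    print(health['Status'], 'failing streak:', health.get('FailingStreak'))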
Dec 03 02:23:20 compute-0 ceph-mon[192821]: pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Dec 03 02:23:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.4 KiB/s wr, 0 op/s
Dec 03 02:23:21 compute-0 nova_compute[351485]: 2025-12-03 02:23:21.480 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.831 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Wed, 03 Dec 2025 02:23:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2f4f25fb-399c-406b-9246-7ca842c22f00 x-openstack-request-id: req-2f4f25fb-399c-406b-9246-7ca842c22f00 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.831 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4fb8fc07-d7b7-4be8-94da-155b040faf32", "name": "te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q", "status": "ACTIVE", "tenant_id": "63f39ac2863946b8b817457e689ff933", "user_id": "8f61f44789494541b7c101b0fdab52f0", "metadata": {"metering.server_group": "38bfb145-4971-41b6-9bc3-faf3c3931019"}, "hostId": "b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2", "image": {"id": "8876482c-db67-48c0-9203-60685152fc9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8876482c-db67-48c0-9203-60685152fc9d"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:22:10Z", "updated": "2025-12-03T02:22:24Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.46", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:0c:ae"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:22:23.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.831 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32 used request id req-2f4f25fb-399c-406b-9246-7ca842c22f00 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
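This REQ/RESP/request-id triple is ceilometer's embedded python-novaclient fetching one server record over a keystoneauth session. The same call can be reproduced directly; a sketch assuming credentials in the usual OS_* environment variables (the microversion and server UUID are the ones in the log):

    import os

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Authenticate the way the agent's novaclient does internally.
    auth = v3.Password(auth_url=os.environ['OS_AUTH_URL'],
                       username=os.environ['OS_USERNAME'],
                       password=os.environ['OS_PASSWORD'],
                       project_name=os.environ['OS_PROJECT_NAME'],
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # '2.1' matches the X-OpenStack-Nova-API-Version header above;
    # endpoint_type='internal' matches OS_ENDPOINT_TYPE in the container env.
    nova = client.Client('2.1', session=sess, endpoint_type='internal')

    server = nova.servers.get('4fb8fc07-d7b7-4be8-94da-155b040faf32')
    print(server.name, server.status, server.metadata)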
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.833 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.839 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
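discover_libvirt_polling builds each of these instance-data dicts by joining the Nova record with what libvirt reports for the domains running locally. A sketch of the libvirt half, assuming the libvirt-python bindings available to the agent:

    import libvirt

    # Read-only connection to the local hypervisor, as a polling agent would use.
    conn = libvirt.openReadOnly('qemu:///system')

    for dom in conn.listAllDomains():
        # UUIDString() lines up with the Nova server id, and name() with
        # OS-EXT-SRV-ATTR:instance_name (e.g. instance-0000000f above).
        print(dom.UUIDString(), dom.name(), dom.isActive())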
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.840 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.841 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:23:21.841144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.883 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 43.5703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.918 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 43.4296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
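The memory.usage samples (43.57 MiB and 43.43 MiB on 128 MiB m1.nano flavors) come from the guests' balloon statistics. A sketch of the underlying libvirt call; the formula mirrors what the libvirt inspector computes, and the available/unused keys are only present when the guest balloon driver is active:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('4fb8fc07-d7b7-4be8-94da-155b040faf32')

    stats = dom.memoryStats()  # all values are in KiB
    # Usage is what the balloon reports as available minus what is still
    # unused by the guest, converted to MiB.
    usage_mib = (stats['available'] - stats['unused']) / 1024.0
    print('memory.usage %.7f' % usage_mib)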
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:23:21.919304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.924 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4fb8fc07-d7b7-4be8-94da-155b040faf32 / tap94fdb5b9-66 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.924 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.928 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
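Every network.* pollster in this stretch reads the same cumulative counter tuple that libvirt keeps per vNIC; the meters differ only in which field they pick out. A sketch using the tap device named in the delta-predecessor line above:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('4fb8fc07-d7b7-4be8-94da-155b040faf32')

    # interfaceStats returns cumulative counters:
    # (rx_bytes, rx_packets, rx_errs, rx_drop,
    #  tx_bytes, tx_packets, tx_errs, tx_drop)
    (rx_bytes, rx_pkts, rx_errs, rx_drop,
     tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats('tap94fdb5b9-66')

    print('network.outgoing.packets', tx_pkts)        # 16 in the log
    print('network.incoming.bytes', rx_bytes)         # 1346 in the log
    print('network.outgoing.packets.drop', tx_drop)   # 0 in the log
    print('network.incoming.packets.error', rx_errs)  # 0 in the log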
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.930 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.930 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
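The *.delta meters are derived in the agent: it caches the previous cumulative reading per (instance, device) and emits the difference, which is why the first poll of a new VM logs "No delta meter predecessor" and reports 0. A minimal sketch of that bookkeeping (the cache layout is illustrative, not ceilometer's exact structure):

    # Previous cumulative readings, keyed by (instance_id, device).
    _prev = {}

    def bytes_delta(instance_id, device, rx_bytes):
        """Return the delta since the last poll, or 0 on the first sample."""
        key = (instance_id, device)
        if key not in _prev:
            print('No delta meter predecessor for %s / %s' % key)
            _prev[key] = rx_bytes
            return 0
        delta, _prev[key] = rx_bytes - _prev[key], rx_bytes
        return delta

    # The first call only seeds the cache; the second yields a real delta.
    print(bytes_delta('4fb8fc07', 'tap94fdb5b9-66', 1346))  # 0
    print(bytes_delta('4fb8fc07', 'tap94fdb5b9-66', 1514))  # 168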
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:23:21.929998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.934 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.934 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:23:21.932236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:23:21.933870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:23:21.935428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:23:21.937514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.951 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.952 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.971 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.971 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.972 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
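disk.device.capacity emits two samples per instance because each VM has two block devices here: 1073741824 bytes is the 1 GiB root disk of the m1.nano flavor, and 509952 bytes matches a small config-drive image (config_drive is "True" in the server record above). A sketch of the libvirt call behind it, with hypothetical virtio device names:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('4fb8fc07-d7b7-4be8-94da-155b040faf32')

    for dev in ('vda', 'vdb'):  # hypothetical: root disk and config drive
        # blockInfo returns [capacity, allocation, physical] in bytes.
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'disk.device.capacity', capacity)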
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.972 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>]
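This ERROR is the blacklisting path, not a crash: the libvirt inspector exposes no precomputed *.rate data (see the DEBUG line just before), so the pollster raises PollsterPermanentError and the manager stops offering it those resources. A simplified sketch of the contract; the fail_res_list attribute name is taken from current ceilometer source and should be checked against your version:

    from ceilometer.polling import plugin_base

    def poll_rate(resource, inspector_data):
        if inspector_data is None:
            # Polling this resource again can never succeed, so flag it
            # permanently instead of failing on every interval.
            raise plugin_base.PollsterPermanentError([resource])
        return inspector_data

    blacklist = set()
    try:
        poll_rate('te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', None)
    except plugin_base.PollsterPermanentError as err:
        # The manager drops these resources from future polls of this meter.
        blacklist.update(err.fail_res_list)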
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:23:21.973283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:23:21.974820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.008 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.008 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.054 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 30342144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.055 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
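The disk.device.read.* and disk.device.write.* meters that follow all come from one per-device stats call; the *_total_times fields are cumulative nanoseconds and are what the latency meters report raw (3251057957 ns is roughly 3.25 s of accumulated read time). A sketch, again with a hypothetical device name:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('4fb8fc07-d7b7-4be8-94da-155b040faf32')

    stats = dom.blockStatsFlags('vda')  # hypothetical device name
    print('disk.device.read.bytes', stats['rd_bytes'])
    print('disk.device.read.requests', stats['rd_operations'])
    print('disk.device.read.latency', stats['rd_total_times'])   # ns
    print('disk.device.write.bytes', stats['wr_bytes'])
    print('disk.device.write.latency', stats['wr_total_times'])  # ns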
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.057 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:23:22.056799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3251057957 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 228292831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2892253301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.060 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 193523124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:23:22.058970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.062 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.062 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.062 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.064 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.064 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
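power.state passes libvirt's numeric domain state straight through; volume 1 is VIR_DOMAIN_RUNNING, consistent with "OS-EXT-STS:power_state": 1 in the Nova response earlier. A sketch:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('4fb8fc07-d7b7-4be8-94da-155b040faf32')

    state, reason = dom.state()  # e.g. (1, ...) for a running domain
    running = state == libvirt.VIR_DOMAIN_RUNNING  # VIR_DOMAIN_RUNNING == 1
    print('power.state', state, '(running)' if running else '')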
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:23:22.061614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:23:22.063970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.066 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.066 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.066 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:23:22.065758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 72790016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.069 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.069 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.069 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 8474740037 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:23:22.068667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:23:22.071163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.072 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 9924409915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.072 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 313 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:23:22.074096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:23:22.076289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 55040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:23:22.078044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 243990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:23:22.079907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.084 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.084 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.084 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.086 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.086 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:23:22.081273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:23:22.083215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:23:22.086030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:23:22.087713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>]
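[editor's note] The ERROR above is ceilometer's permanent-failure path: a pollster that raises PollsterPermanentError hands back the resources it can never serve, and the polling manager stops scheduling it for them on that source. A rough sketch of the pollster side, assuming the plugin API visible in this log (PollsterBase and PollsterPermanentError are real ceilometer classes; the pollster class and its supports_rates() probe below are hypothetical):

    from ceilometer.polling import plugin_base

    def supports_rates(resource):
        # Hypothetical capability probe; the real check depends on what
        # the hypervisor inspector exposes (LibvirtInspector does not
        # provide rate data, per the DEBUG line above).
        return False

    class RateLikePollster(plugin_base.PollsterBase):  # hypothetical
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            unsupported = [r for r in resources if not supports_rates(r)]
            if unsupported:
                # The manager catches this and drops the listed resources
                # from future polling, producing the "Prevent pollster ...
                # anymore!" ERROR seen above.
                raise plugin_base.PollsterPermanentError(unsupported)
            return []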
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:23:22.090892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:23:22.092292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
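[editor's note] Every sample in the polling cycle above is logged in one fixed shape: "<instance-uuid>/<meter> volume: <value>". A minimal parsing sketch, assuming only that line shape (the regex and the file name are illustrative, not part of any ceilometer API):

    import re

    # Matches the "<uuid>/<meter> volume: <n>" DEBUG lines that
    # ceilometer.compute.pollsters._stats_to_sample emits above.
    SAMPLE = re.compile(
        r'(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})'
        r'/(?P<meter>[\w.]+) volume: (?P<value>\d+)')

    def samples(lines):
        for line in lines:
            m = SAMPLE.search(line)
            if m:
                yield m['uuid'], m['meter'], int(m['value'])

    # e.g. samples(open('compute-0.log')) yields tuples like
    # ('4fb8fc07-d7b7-4be8-94da-155b040faf32',
    #  'disk.device.write.bytes', 72790016)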
Dec 03 02:23:22 compute-0 ceph-mon[192821]: pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.4 KiB/s wr, 0 op/s
Dec 03 02:23:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.609 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
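[editor's note] The Acquiring/acquired/released triplet above is oslo.concurrency's named-lock idiom around nova's ResourceTracker. A minimal sketch of the same pattern, assuming only the public lockutils API (the decorated function is a stand-in, not nova's actual code):

    from oslo_concurrency import lockutils

    # Callers serialize on the named lock "compute_resources", just as
    # the ResourceTracker does; oslo logs the waited/held timings seen
    # in the lines above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Stand-in body; the real method prunes stale compute-node
        # cache entries while holding the lock.
        pass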
Dec 03 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.610 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.611 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:23:23 compute-0 sshd[113879]: Timeout before authentication for connection from 45.78.219.140 to 38.102.83.36, pid = 455797
Dec 03 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.871 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:23:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508921091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.122 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
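[editor's note] Nova's resource audit shells out to ceph here, so the same numbers can be pulled by hand. A sketch running the exact command from the log and reading the cluster totals, assuming ceph's usual JSON layout (a top-level "stats" object with total_bytes/total_avail_bytes, which this log does not show verbatim):

    import json
    import subprocess

    # The command the log shows oslo_concurrency.processutils running.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']  # assumed key layout
    avail_gib = stats['total_avail_bytes'] / 1024 ** 3
    total_gib = stats['total_bytes'] / 1024 ** 3
    print(f'{avail_gib:.1f} GiB free of {total_gib:.1f} GiB')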
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.258 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.268 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.268 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:23:24 compute-0 ceph-mon[192821]: pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2508921091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.746 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.747 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.897377014160156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.747 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.748 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.841 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.842 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.843 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.843 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.908 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:23:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:23:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895294831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.430 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.445 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.471 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
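[editor's note] Placement derives each resource class's schedulable capacity as (total - reserved) * allocation_ratio, so the inventory logged above can be checked by hand (values copied from that line; the formula is standard placement behavior, not quoted from this log):

    # Inventory exactly as logged for provider 107397d2-51bc-4a03-bce4-7cd69319cf05:
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2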
Dec 03 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.500 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.501 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:23:25 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3895294831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:23:26 compute-0 nova_compute[351485]: 2025-12-03 02:23:26.482 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:26 compute-0 ceph-mon[192821]: pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:26 compute-0 podman[460595]: 2025-12-03 02:23:26.86993979 +0000 UTC m=+0.109768151 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:23:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:27 compute-0 ceph-mon[192821]: pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:23:28
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec 03 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.504 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.505 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.837 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.838 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.838 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.875 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:23:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:29 compute-0 podman[158098]: time="2025-12-03T02:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 03 02:23:29 compute-0 podman[460618]: 2025-12-03 02:23:29.894278692 +0000 UTC m=+0.106802417 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, release=1214.1726694543, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_id=edpm, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 02:23:29 compute-0 podman[460617]: 2025-12-03 02:23:29.900848408 +0000 UTC m=+0.120618428 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:23:29 compute-0 podman[460616]: 2025-12-03 02:23:29.906503007 +0000 UTC m=+0.114612018 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, vcs-type=git, distribution-scope=public)
Dec 03 02:23:29 compute-0 podman[460615]: 2025-12-03 02:23:29.929312141 +0000 UTC m=+0.172342268 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:23:29 compute-0 podman[460625]: 2025-12-03 02:23:29.931288927 +0000 UTC m=+0.141974160 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:23:29 compute-0 nova_compute[351485]: 2025-12-03 02:23:29.981 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.002 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.003 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.003 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.004 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.004 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:30 compute-0 ceph-mon[192821]: pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:31 compute-0 nova_compute[351485]: 2025-12-03 02:23:31.078 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:23:31 compute-0 nova_compute[351485]: 2025-12-03 02:23:31.485 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:32 compute-0 ceph-mon[192821]: pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 03 02:23:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:33 compute-0 nova_compute[351485]: 2025-12-03 02:23:33.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:33 compute-0 nova_compute[351485]: 2025-12-03 02:23:33.878 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:34 compute-0 ceph-mon[192821]: pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:35 compute-0 sudo[460715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:35 compute-0 sudo[460715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:35 compute-0 sudo[460715]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:35 compute-0 sudo[460740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:23:35 compute-0 sudo[460740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:35 compute-0 sudo[460740]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:35 compute-0 sudo[460765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:35 compute-0 sudo[460765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:35 compute-0 sudo[460765]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:35 compute-0 nova_compute[351485]: 2025-12-03 02:23:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:35 compute-0 nova_compute[351485]: 2025-12-03 02:23:35.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:23:35 compute-0 sudo[460790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 02:23:35 compute-0 sudo[460790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:35 compute-0 sudo[460790]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:23:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:23:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:36 compute-0 sudo[460834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:36 compute-0 sudo[460834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:36 compute-0 sudo[460834]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:36 compute-0 sudo[460859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:23:36 compute-0 sudo[460859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:36 compute-0 sudo[460859]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:36 compute-0 sudo[460884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:36 compute-0 sudo[460884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:36 compute-0 sudo[460884]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:36 compute-0 sudo[460909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:23:36 compute-0 sudo[460909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:36 compute-0 ceph-mon[192821]: pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:36 compute-0 nova_compute[351485]: 2025-12-03 02:23:36.488 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:36 compute-0 sudo[460909]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:37 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d5bca3e3-ad1d-4906-8aae-62cb97d2b368 does not exist
Dec 03 02:23:37 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2c6330b8-7ae7-4dbb-8542-446dc2ad2888 does not exist
Dec 03 02:23:37 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 89421ddb-8ec5-4cf7-abb3-1443ad240d28 does not exist
Dec 03 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:23:37 compute-0 sshd[113879]: drop connection #0 from [45.78.219.140]:47764 on [38.102.83.36]:22 penalty: exceeded LoginGraceTime
Dec 03 02:23:37 compute-0 sudo[460964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:37 compute-0 sudo[460964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:37 compute-0 sudo[460964]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:37 compute-0 sudo[460989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:23:37 compute-0 sudo[460989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:37 compute-0 sudo[460989]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:37 compute-0 sudo[461014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:37 compute-0 sudo[461014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:37 compute-0 sudo[461014]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:23:37 compute-0 sudo[461039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:23:37 compute-0 sudo[461039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.109270336 +0000 UTC m=+0.104821961 container create 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.074603717 +0000 UTC m=+0.070155402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:23:38 compute-0 systemd[1]: Started libpod-conmon-2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9.scope.
Dec 03 02:23:38 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.282340534 +0000 UTC m=+0.277892199 container init 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.294268251 +0000 UTC m=+0.289819846 container start 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.298757288 +0000 UTC m=+0.294308923 container attach 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:23:38 compute-0 mystifying_varahamihira[461117]: 167 167
Dec 03 02:23:38 compute-0 systemd[1]: libpod-2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9.scope: Deactivated successfully.
Dec 03 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.307929437 +0000 UTC m=+0.303481102 container died 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 03 02:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ed74f22f13aaef893214ec30b0ec597e1044d8dc3477466cd5e27cba8fcb6c1-merged.mount: Deactivated successfully.
Dec 03 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.387173975 +0000 UTC m=+0.382725590 container remove 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 02:23:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:38 compute-0 systemd[1]: libpod-conmon-2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9.scope: Deactivated successfully.
Dec 03 02:23:38 compute-0 ceph-mon[192821]: pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.632643218 +0000 UTC m=+0.083063497 container create 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.598408571 +0000 UTC m=+0.048828860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:23:38 compute-0 systemd[1]: Started libpod-conmon-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope.
Dec 03 02:23:38 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.803326308 +0000 UTC m=+0.253746567 container init 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.841735403 +0000 UTC m=+0.292155652 container start 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.850461489 +0000 UTC m=+0.300881768 container attach 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:23:38 compute-0 nova_compute[351485]: 2025-12-03 02:23:38.883 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015153665673634173 of space, bias 1.0, pg target 0.45460997020902516 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:23:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:40 compute-0 vibrant_poitras[461156]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:23:40 compute-0 vibrant_poitras[461156]: --> relative data size: 1.0
Dec 03 02:23:40 compute-0 vibrant_poitras[461156]: --> All data devices are unavailable
Dec 03 02:23:40 compute-0 systemd[1]: libpod-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope: Deactivated successfully.
Dec 03 02:23:40 compute-0 systemd[1]: libpod-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope: Consumed 1.231s CPU time.
Dec 03 02:23:40 compute-0 podman[461140]: 2025-12-03 02:23:40.145620797 +0000 UTC m=+1.596041066 container died 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 02:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282-merged.mount: Deactivated successfully.
Dec 03 02:23:40 compute-0 podman[461140]: 2025-12-03 02:23:40.23034277 +0000 UTC m=+1.680763019 container remove 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:23:40 compute-0 systemd[1]: libpod-conmon-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope: Deactivated successfully.
Dec 03 02:23:40 compute-0 sudo[461039]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:40 compute-0 sudo[461197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:40 compute-0 sudo[461197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:40 compute-0 sudo[461197]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:40 compute-0 ceph-mon[192821]: pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:40 compute-0 sudo[461222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:23:40 compute-0 sudo[461222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:40 compute-0 sudo[461222]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:40 compute-0 sudo[461247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:40 compute-0 sudo[461247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:40 compute-0 sudo[461247]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:40 compute-0 sudo[461272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:23:40 compute-0 sudo[461272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.334156054 +0000 UTC m=+0.089203491 container create 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.293986409 +0000 UTC m=+0.049033916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:23:41 compute-0 systemd[1]: Started libpod-conmon-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope.
Dec 03 02:23:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.477521322 +0000 UTC m=+0.232568819 container init 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.493511244 +0000 UTC m=+0.248558701 container start 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:23:41 compute-0 nova_compute[351485]: 2025-12-03 02:23:41.491 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:41 compute-0 relaxed_williamson[461352]: 167 167
Dec 03 02:23:41 compute-0 systemd[1]: libpod-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope: Deactivated successfully.
Dec 03 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.501614353 +0000 UTC m=+0.256661880 container attach 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:23:41 compute-0 conmon[461352]: conmon 974a76abc840646a8ce0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope/container/memory.events
Dec 03 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.503070304 +0000 UTC m=+0.258117741 container died 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e45b73091bba86ead5a9b2acde63f1d6936662990cb43901836afa4f0b26b6-merged.mount: Deactivated successfully.
Dec 03 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.559144938 +0000 UTC m=+0.314192355 container remove 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:23:41 compute-0 systemd[1]: libpod-conmon-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope: Deactivated successfully.
Dec 03 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.801632005 +0000 UTC m=+0.068788494 container create 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.775154877 +0000 UTC m=+0.042311376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:23:41 compute-0 systemd[1]: Started libpod-conmon-356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c.scope.
Dec 03 02:23:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.947180085 +0000 UTC m=+0.214336664 container init 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.966188692 +0000 UTC m=+0.233345181 container start 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.971747209 +0000 UTC m=+0.238903718 container attach 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:23:42 compute-0 ceph-mon[192821]: pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]: {
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:     "0": [
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:         {
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "devices": [
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "/dev/loop3"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             ],
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_name": "ceph_lv0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_size": "21470642176",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "name": "ceph_lv0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "tags": {
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cluster_name": "ceph",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.crush_device_class": "",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.encrypted": "0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osd_id": "0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.type": "block",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.vdo": "0"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             },
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "type": "block",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "vg_name": "ceph_vg0"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:         }
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:     ],
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:     "1": [
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:         {
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "devices": [
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "/dev/loop4"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             ],
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_name": "ceph_lv1",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_size": "21470642176",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "name": "ceph_lv1",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "tags": {
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cluster_name": "ceph",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.crush_device_class": "",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.encrypted": "0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osd_id": "1",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.type": "block",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.vdo": "0"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             },
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "type": "block",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "vg_name": "ceph_vg1"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:         }
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:     ],
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:     "2": [
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:         {
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "devices": [
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "/dev/loop5"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             ],
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_name": "ceph_lv2",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_size": "21470642176",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "name": "ceph_lv2",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "tags": {
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.cluster_name": "ceph",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.crush_device_class": "",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.encrypted": "0",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osd_id": "2",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.type": "block",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:                 "ceph.vdo": "0"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             },
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "type": "block",
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:             "vg_name": "ceph_vg2"
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:         }
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]:     ]
Dec 03 02:23:42 compute-0 boring_mcclintock[461394]: }
Dec 03 02:23:42 compute-0 systemd[1]: libpod-356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c.scope: Deactivated successfully.
Dec 03 02:23:42 compute-0 podman[461403]: 2025-12-03 02:23:42.93631047 +0000 UTC m=+0.090348342 container died 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756-merged.mount: Deactivated successfully.
Dec 03 02:23:43 compute-0 podman[461403]: 2025-12-03 02:23:43.049373434 +0000 UTC m=+0.203411236 container remove 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:23:43 compute-0 systemd[1]: libpod-conmon-356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c.scope: Deactivated successfully.
Dec 03 02:23:43 compute-0 sudo[461272]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:43 compute-0 sudo[461415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:43 compute-0 sudo[461415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:43 compute-0 sudo[461415]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:43 compute-0 sudo[461440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:23:43 compute-0 sudo[461440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:43 compute-0 sudo[461440]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:43 compute-0 sudo[461465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:43 compute-0 sudo[461465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:43 compute-0 sudo[461465]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:43 compute-0 sudo[461490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:23:43 compute-0 sudo[461490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:43 compute-0 nova_compute[351485]: 2025-12-03 02:23:43.886 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.259967803 +0000 UTC m=+0.068650150 container create 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:23:44 compute-0 systemd[1]: Started libpod-conmon-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope.
Dec 03 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.231673924 +0000 UTC m=+0.040356261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:23:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.378300305 +0000 UTC m=+0.186982692 container init 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.388102852 +0000 UTC m=+0.196785159 container start 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.39300976 +0000 UTC m=+0.201692097 container attach 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:23:44 compute-0 practical_nash[461572]: 167 167
Dec 03 02:23:44 compute-0 systemd[1]: libpod-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope: Deactivated successfully.
Dec 03 02:23:44 compute-0 conmon[461572]: conmon 9f9e4d33458530b67473 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope/container/memory.events
Dec 03 02:23:44 compute-0 podman[461577]: 2025-12-03 02:23:44.458921542 +0000 UTC m=+0.041237266 container died 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:23:44 compute-0 ceph-mon[192821]: pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f25bab1a26f2b77a3b36b0e090dcf1e5a35fb8032f976dc93a1023b98113be68-merged.mount: Deactivated successfully.
Dec 03 02:23:44 compute-0 podman[461577]: 2025-12-03 02:23:44.511727683 +0000 UTC m=+0.094043397 container remove 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:23:44 compute-0 systemd[1]: libpod-conmon-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope: Deactivated successfully.
Dec 03 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.77611786 +0000 UTC m=+0.069044691 container create 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.747452231 +0000 UTC m=+0.040379142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:23:44 compute-0 systemd[1]: Started libpod-conmon-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope.
Dec 03 02:23:44 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.942972393 +0000 UTC m=+0.235899234 container init 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.953412407 +0000 UTC m=+0.246339258 container start 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.960123327 +0000 UTC m=+0.253050148 container attach 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 02:23:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:46 compute-0 angry_lehmann[461612]: {
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "osd_id": 2,
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "type": "bluestore"
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:     },
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "osd_id": 1,
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "type": "bluestore"
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:     },
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "osd_id": 0,
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:         "type": "bluestore"
Dec 03 02:23:46 compute-0 angry_lehmann[461612]:     }
Dec 03 02:23:46 compute-0 angry_lehmann[461612]: }
Dec 03 02:23:46 compute-0 systemd[1]: libpod-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope: Deactivated successfully.
Dec 03 02:23:46 compute-0 systemd[1]: libpod-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope: Consumed 1.237s CPU time.
Dec 03 02:23:46 compute-0 conmon[461612]: conmon 0141180c1ffaa823ec58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope/container/memory.events
Dec 03 02:23:46 compute-0 podman[461596]: 2025-12-03 02:23:46.194024414 +0000 UTC m=+1.486951265 container died 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 02:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0-merged.mount: Deactivated successfully.
Dec 03 02:23:46 compute-0 podman[461596]: 2025-12-03 02:23:46.305746649 +0000 UTC m=+1.598673470 container remove 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 03 02:23:46 compute-0 systemd[1]: libpod-conmon-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope: Deactivated successfully.
Dec 03 02:23:46 compute-0 sudo[461490]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:23:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:23:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 269afbdb-67af-4a75-bed8-d3307674805b does not exist
Dec 03 02:23:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a4b85f4-52fd-43d8-b0af-30d3542a26b1 does not exist
Dec 03 02:23:46 compute-0 ceph-mon[192821]: pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:23:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:23:46 compute-0 nova_compute[351485]: 2025-12-03 02:23:46.495 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:46 compute-0 sudo[461659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:23:46 compute-0 sudo[461659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:46 compute-0 sudo[461659]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:46 compute-0 sudo[461684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:23:46 compute-0 sudo[461684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:23:46 compute-0 sudo[461684]: pam_unix(sudo:session): session closed for user root
Dec 03 02:23:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:23:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/7143871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:23:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:23:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/7143871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:23:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/7143871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:23:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/7143871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.326 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.363 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.364 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.366 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.367 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.368 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.368 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:23:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.421 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.426 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:23:48 compute-0 ceph-mon[192821]: pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.886 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:50 compute-0 ceph-mon[192821]: pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:50 compute-0 podman[461710]: 2025-12-03 02:23:50.872273166 +0000 UTC m=+0.109807603 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 03 02:23:50 compute-0 podman[461712]: 2025-12-03 02:23:50.874286462 +0000 UTC m=+0.114894795 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:23:50 compute-0 podman[461711]: 2025-12-03 02:23:50.909757544 +0000 UTC m=+0.152085556 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 03 02:23:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:51 compute-0 nova_compute[351485]: 2025-12-03 02:23:51.498 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:52 compute-0 ceph-mon[192821]: pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:53 compute-0 nova_compute[351485]: 2025-12-03 02:23:53.890 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:54 compute-0 ceph-mon[192821]: pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:56 compute-0 nova_compute[351485]: 2025-12-03 02:23:56.502 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:56 compute-0 ceph-mon[192821]: pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:57 compute-0 podman[461773]: 2025-12-03 02:23:57.897444958 +0000 UTC m=+0.139421458 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:23:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.421185) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638421246, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1312, "num_deletes": 256, "total_data_size": 2041298, "memory_usage": 2075120, "flush_reason": "Manual Compaction"}
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638436276, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1988829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41705, "largest_seqno": 43016, "table_properties": {"data_size": 1982625, "index_size": 3471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12826, "raw_average_key_size": 19, "raw_value_size": 1970174, "raw_average_value_size": 2989, "num_data_blocks": 156, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728508, "oldest_key_time": 1764728508, "file_creation_time": 1764728638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 15163 microseconds, and 5418 cpu microseconds.
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.436348) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1988829 bytes OK
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.436379) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.446066) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.446095) EVENT_LOG_v1 {"time_micros": 1764728638446086, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.446119) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2035386, prev total WAL file size 2035386, number of live WAL files 2.
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.447482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373538' seq:0, type:0; will stop at (end)
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1942KB)], [98(7785KB)]
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638447635, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9961682, "oldest_snapshot_seqno": -1}
Dec 03 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 5912 keys, 9857083 bytes, temperature: kUnknown
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638544096, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9857083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9816802, "index_size": 24427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 153450, "raw_average_key_size": 25, "raw_value_size": 9709120, "raw_average_value_size": 1642, "num_data_blocks": 980, "num_entries": 5912, "num_filter_entries": 5912, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.544331) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9857083 bytes
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.545848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.2 rd, 102.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 6436, records dropped: 524 output_compression: NoCompression
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.545867) EVENT_LOG_v1 {"time_micros": 1764728638545858, "job": 58, "event": "compaction_finished", "compaction_time_micros": 96521, "compaction_time_cpu_micros": 43521, "output_level": 6, "num_output_files": 1, "total_output_size": 9857083, "num_input_records": 6436, "num_output_records": 5912, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638546639, "job": 58, "event": "table_file_deletion", "file_number": 100}
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638549628, "job": 58, "event": "table_file_deletion", "file_number": 98}
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.447285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:23:58 compute-0 ceph-mon[192821]: pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:58 compute-0 sshd-session[461771]: Received disconnect from 154.113.10.113 port 58066:11: Bye Bye [preauth]
Dec 03 02:23:58 compute-0 sshd-session[461771]: Disconnected from authenticating user root 154.113.10.113 port 58066 [preauth]
Dec 03 02:23:58 compute-0 nova_compute[351485]: 2025-12-03 02:23:58.895 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:23:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:23:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:23:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:23:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:23:59.658 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:23:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:23:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:23:59 compute-0 podman[158098]: time="2025-12-03T02:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:23:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:23:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8661 "" "Go-http-client/1.1"
Dec 03 02:24:00 compute-0 ceph-mon[192821]: pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:00 compute-0 podman[461796]: 2025-12-03 02:24:00.871982404 +0000 UTC m=+0.104939815 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9)
Dec 03 02:24:00 compute-0 podman[461808]: 2025-12-03 02:24:00.889579781 +0000 UTC m=+0.112900500 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 03 02:24:00 compute-0 podman[461795]: 2025-12-03 02:24:00.889883999 +0000 UTC m=+0.137729210 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:24:00 compute-0 podman[461793]: 2025-12-03 02:24:00.906036466 +0000 UTC m=+0.154997029 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 02:24:00 compute-0 podman[461794]: 2025-12-03 02:24:00.918154078 +0000 UTC m=+0.160156154 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc.)
Dec 03 02:24:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:24:01 compute-0 nova_compute[351485]: 2025-12-03 02:24:01.505 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:02 compute-0 ceph-mon[192821]: pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:03 compute-0 nova_compute[351485]: 2025-12-03 02:24:03.900 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:04 compute-0 ceph-mon[192821]: pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:06 compute-0 nova_compute[351485]: 2025-12-03 02:24:06.507 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:06 compute-0 ceph-mon[192821]: pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:08 compute-0 ceph-mon[192821]: pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:08 compute-0 nova_compute[351485]: 2025-12-03 02:24:08.904 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:10 compute-0 ceph-mon[192821]: pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:11 compute-0 nova_compute[351485]: 2025-12-03 02:24:11.512 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:12 compute-0 ceph-mon[192821]: pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:13 compute-0 nova_compute[351485]: 2025-12-03 02:24:13.908 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:14 compute-0 ceph-mon[192821]: pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:16 compute-0 nova_compute[351485]: 2025-12-03 02:24:16.515 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:16 compute-0 nova_compute[351485]: 2025-12-03 02:24:16.619 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:16 compute-0 ceph-mon[192821]: pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:17 compute-0 ceph-mon[192821]: pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 03 02:24:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:18 compute-0 nova_compute[351485]: 2025-12-03 02:24:18.914 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:20 compute-0 ceph-mon[192821]: pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:21 compute-0 nova_compute[351485]: 2025-12-03 02:24:21.520 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:21 compute-0 podman[461892]: 2025-12-03 02:24:21.851703226 +0000 UTC m=+0.097688320 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 03 02:24:21 compute-0 podman[461894]: 2025-12-03 02:24:21.871143215 +0000 UTC m=+0.096113375 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:24:21 compute-0 podman[461893]: 2025-12-03 02:24:21.888891546 +0000 UTC m=+0.122649894 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:24:22 compute-0 ceph-mon[192821]: pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:22 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 03 02:24:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:23 compute-0 nova_compute[351485]: 2025-12-03 02:24:23.919 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:24 compute-0 ceph-mon[192821]: pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.626 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.627 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.627 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.628 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:24:26 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 03 02:24:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:24:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3878797341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.157 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.275 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.275 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.281 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.282 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:24:26 compute-0 ceph-mon[192821]: pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3878797341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.524 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.639 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.641 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.897377014160156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.773 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.773 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.774 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.774 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.849 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:24:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:24:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880468333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.409 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.422 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.449 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:24:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3880468333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.455 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.457 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:24:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:28 compute-0 ceph-mon[192821]: pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.460 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.460 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.461 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:24:28
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.rgw.root', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Dec 03 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.838 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.839 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.840 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.841 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:24:28 compute-0 podman[461995]: 2025-12-03 02:24:28.892079348 +0000 UTC m=+0.139856871 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.921 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:24:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:29 compute-0 podman[158098]: time="2025-12-03T02:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:24:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:24:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8659 "" "Go-http-client/1.1"
Dec 03 02:24:30 compute-0 ceph-mon[192821]: pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.526 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:31 compute-0 podman[462015]: 2025-12-03 02:24:31.852116134 +0000 UTC m=+0.107940049 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 03 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.857 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.873 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.874 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.874 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.874 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.875 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:31 compute-0 podman[462014]: 2025-12-03 02:24:31.888577804 +0000 UTC m=+0.141604180 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 03 02:24:31 compute-0 podman[462026]: 2025-12-03 02:24:31.894390938 +0000 UTC m=+0.116115410 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Dec 03 02:24:31 compute-0 podman[462016]: 2025-12-03 02:24:31.894420589 +0000 UTC m=+0.143020130 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:24:31 compute-0 podman[462033]: 2025-12-03 02:24:31.899802431 +0000 UTC m=+0.133107700 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:24:32 compute-0 ceph-mon[192821]: pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:33 compute-0 nova_compute[351485]: 2025-12-03 02:24:33.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:33 compute-0 nova_compute[351485]: 2025-12-03 02:24:33.985 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:33 compute-0 nova_compute[351485]: 2025-12-03 02:24:33.986 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:34 compute-0 ceph-mon[192821]: pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:35 compute-0 nova_compute[351485]: 2025-12-03 02:24:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:36 compute-0 ceph-mon[192821]: pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:36 compute-0 nova_compute[351485]: 2025-12-03 02:24:36.530 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:37 compute-0 nova_compute[351485]: 2025-12-03 02:24:37.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:24:37 compute-0 nova_compute[351485]: 2025-12-03 02:24:37.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:24:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:38 compute-0 ceph-mon[192821]: pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:24:38 compute-0 nova_compute[351485]: 2025-12-03 02:24:38.929 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015153665673634173 of space, bias 1.0, pg target 0.45460997020902516 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:24:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:40 compute-0 ceph-mon[192821]: pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:41 compute-0 nova_compute[351485]: 2025-12-03 02:24:41.532 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:42 compute-0 ceph-mon[192821]: pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:43 compute-0 nova_compute[351485]: 2025-12-03 02:24:43.931 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:44 compute-0 ceph-mon[192821]: pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:46 compute-0 nova_compute[351485]: 2025-12-03 02:24:46.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:46 compute-0 ceph-mon[192821]: pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:46 compute-0 sudo[462114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:46 compute-0 sudo[462114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:46 compute-0 sudo[462114]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:46 compute-0 sudo[462139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:24:46 compute-0 sudo[462139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:46 compute-0 sudo[462139]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:47 compute-0 sudo[462164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:47 compute-0 sudo[462164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:47 compute-0 sudo[462164]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533367272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533367272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:24:47 compute-0 sudo[462189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:24:47 compute-0 sudo[462189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/533367272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:24:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/533367272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:24:47 compute-0 sudo[462189]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:24:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0fdfa9fa-a666-4191-b4a3-eea286efd634 does not exist
Dec 03 02:24:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 05951b5b-d812-4374-a8f6-6afa5ade34a0 does not exist
Dec 03 02:24:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6a1e8666-8abe-4421-b46d-28a3e31dab9c does not exist
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:24:47 compute-0 sudo[462244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:47 compute-0 sudo[462244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:47 compute-0 sudo[462244]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:48 compute-0 sudo[462269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:24:48 compute-0 sudo[462269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:48 compute-0 sudo[462269]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:48 compute-0 sudo[462294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:48 compute-0 sudo[462294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:48 compute-0 sudo[462294]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:48 compute-0 sudo[462319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:24:48 compute-0 sudo[462319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:48 compute-0 ceph-mon[192821]: pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.727805243 +0000 UTC m=+0.061003784 container create ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 02:24:48 compute-0 systemd[1]: Started libpod-conmon-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope.
Dec 03 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.707586162 +0000 UTC m=+0.040784723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:24:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.833395375 +0000 UTC m=+0.166593926 container init ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.844412026 +0000 UTC m=+0.177610567 container start ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.84952228 +0000 UTC m=+0.182720841 container attach ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 02:24:48 compute-0 agitated_yalow[462398]: 167 167
Dec 03 02:24:48 compute-0 systemd[1]: libpod-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope: Deactivated successfully.
Dec 03 02:24:48 compute-0 conmon[462398]: conmon ad2c78ff935dfcd8035b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope/container/memory.events
Dec 03 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.854356247 +0000 UTC m=+0.187554808 container died ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1a79ce3013357a07d263f7b41b0e4d45671d4a61b9a552ec9c3d845a35dd6d5-merged.mount: Deactivated successfully.
Dec 03 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.926749981 +0000 UTC m=+0.259948522 container remove ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:24:48 compute-0 nova_compute[351485]: 2025-12-03 02:24:48.932 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:48 compute-0 systemd[1]: libpod-conmon-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope: Deactivated successfully.
Dec 03 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.188909985 +0000 UTC m=+0.095914050 container create 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.158719142 +0000 UTC m=+0.065723197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:24:49 compute-0 systemd[1]: Started libpod-conmon-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope.
Dec 03 02:24:49 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.381618528 +0000 UTC m=+0.288622573 container init 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.393105792 +0000 UTC m=+0.300109857 container start 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.398842674 +0000 UTC m=+0.305846709 container attach 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:24:50 compute-0 nice_meitner[462436]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:24:50 compute-0 nice_meitner[462436]: --> relative data size: 1.0
Dec 03 02:24:50 compute-0 nice_meitner[462436]: --> All data devices are unavailable
Dec 03 02:24:50 compute-0 ceph-mon[192821]: pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:50 compute-0 systemd[1]: libpod-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope: Deactivated successfully.
Dec 03 02:24:50 compute-0 systemd[1]: libpod-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope: Consumed 1.201s CPU time.
Dec 03 02:24:50 compute-0 podman[462468]: 2025-12-03 02:24:50.723767292 +0000 UTC m=+0.053768130 container died 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748-merged.mount: Deactivated successfully.
Dec 03 02:24:50 compute-0 podman[462468]: 2025-12-03 02:24:50.818750264 +0000 UTC m=+0.148751032 container remove 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 02:24:50 compute-0 systemd[1]: libpod-conmon-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope: Deactivated successfully.
Dec 03 02:24:50 compute-0 sudo[462319]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:51 compute-0 sudo[462483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:51 compute-0 sudo[462483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:51 compute-0 sudo[462483]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:51 compute-0 sudo[462508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:24:51 compute-0 sudo[462508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:51 compute-0 sudo[462508]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:51 compute-0 sudo[462533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:51 compute-0 sudo[462533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:51 compute-0 sudo[462533]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:51 compute-0 sudo[462558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:24:51 compute-0 sudo[462558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:51 compute-0 nova_compute[351485]: 2025-12-03 02:24:51.539 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.094346729 +0000 UTC m=+0.102967309 container create 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.056674435 +0000 UTC m=+0.065295015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:24:52 compute-0 systemd[1]: Started libpod-conmon-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope.
Dec 03 02:24:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.217191819 +0000 UTC m=+0.225812429 container init 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.229516627 +0000 UTC m=+0.238137187 container start 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.234035834 +0000 UTC m=+0.242656434 container attach 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:24:52 compute-0 systemd[1]: libpod-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope: Deactivated successfully.
Dec 03 02:24:52 compute-0 interesting_kilby[462659]: 167 167
Dec 03 02:24:52 compute-0 conmon[462659]: conmon 1100632859ff574cfa3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope/container/memory.events
Dec 03 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.237120422 +0000 UTC m=+0.245741002 container died 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:24:52 compute-0 podman[462636]: 2025-12-03 02:24:52.249502971 +0000 UTC m=+0.099516161 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7635df6fc40b925ea45303576f590a96f6f2de00d52a90b9ab1d61a7de68c25-merged.mount: Deactivated successfully.
Dec 03 02:24:52 compute-0 podman[462640]: 2025-12-03 02:24:52.268359714 +0000 UTC m=+0.107328422 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:24:52 compute-0 podman[462637]: 2025-12-03 02:24:52.279611562 +0000 UTC m=+0.118614671 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.284966683 +0000 UTC m=+0.293587243 container remove 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 03 02:24:52 compute-0 systemd[1]: libpod-conmon-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope: Deactivated successfully.
Dec 03 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.546156359 +0000 UTC m=+0.090466096 container create 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.510109011 +0000 UTC m=+0.054418798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:24:52 compute-0 ceph-mon[192821]: pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:52 compute-0 systemd[1]: Started libpod-conmon-4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91.scope.
Dec 03 02:24:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.746120247 +0000 UTC m=+0.290429994 container init 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.770860405 +0000 UTC m=+0.315170152 container start 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.777812472 +0000 UTC m=+0.322122219 container attach 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 02:24:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:53 compute-0 dazzling_bose[462738]: {
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:     "0": [
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:         {
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "devices": [
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "/dev/loop3"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             ],
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_name": "ceph_lv0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_size": "21470642176",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "name": "ceph_lv0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "tags": {
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cluster_name": "ceph",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.crush_device_class": "",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.encrypted": "0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osd_id": "0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.type": "block",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.vdo": "0"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             },
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "type": "block",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "vg_name": "ceph_vg0"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:         }
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:     ],
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:     "1": [
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:         {
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "devices": [
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "/dev/loop4"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             ],
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_name": "ceph_lv1",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_size": "21470642176",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "name": "ceph_lv1",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "tags": {
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cluster_name": "ceph",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.crush_device_class": "",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.encrypted": "0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osd_id": "1",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.type": "block",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.vdo": "0"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             },
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "type": "block",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "vg_name": "ceph_vg1"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:         }
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:     ],
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:     "2": [
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:         {
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "devices": [
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "/dev/loop5"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             ],
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_name": "ceph_lv2",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_size": "21470642176",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "name": "ceph_lv2",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "tags": {
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.cluster_name": "ceph",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.crush_device_class": "",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.encrypted": "0",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osd_id": "2",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.type": "block",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:                 "ceph.vdo": "0"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             },
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "type": "block",
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:             "vg_name": "ceph_vg2"
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:         }
Dec 03 02:24:53 compute-0 dazzling_bose[462738]:     ]
Dec 03 02:24:53 compute-0 dazzling_bose[462738]: }
Dec 03 02:24:53 compute-0 systemd[1]: libpod-4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91.scope: Deactivated successfully.
Dec 03 02:24:53 compute-0 podman[462722]: 2025-12-03 02:24:53.617288219 +0000 UTC m=+1.161597936 container died 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3-merged.mount: Deactivated successfully.
Dec 03 02:24:53 compute-0 podman[462722]: 2025-12-03 02:24:53.702781174 +0000 UTC m=+1.247090881 container remove 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:24:53 compute-0 systemd[1]: libpod-conmon-4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91.scope: Deactivated successfully.
Dec 03 02:24:53 compute-0 sudo[462558]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:53 compute-0 sudo[462757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:53 compute-0 sudo[462757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:53 compute-0 sudo[462757]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:53 compute-0 nova_compute[351485]: 2025-12-03 02:24:53.936 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:53 compute-0 sudo[462782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:24:53 compute-0 sudo[462782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:53 compute-0 sudo[462782]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:54 compute-0 sudo[462807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:54 compute-0 sudo[462807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:54 compute-0 sudo[462807]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:54 compute-0 sudo[462832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:24:54 compute-0 sudo[462832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:54 compute-0 ceph-mon[192821]: pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.807161453 +0000 UTC m=+0.093543272 container create ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.77374885 +0000 UTC m=+0.060130679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:24:54 compute-0 systemd[1]: Started libpod-conmon-ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c.scope.
Dec 03 02:24:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.966929026 +0000 UTC m=+0.253310905 container init ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.984008138 +0000 UTC m=+0.270389957 container start ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:24:54 compute-0 sad_shirley[462909]: 167 167
Dec 03 02:24:54 compute-0 systemd[1]: libpod-ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c.scope: Deactivated successfully.
Dec 03 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.997790477 +0000 UTC m=+0.284172306 container attach ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.998885968 +0000 UTC m=+0.285267767 container died ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db8adaa06daa907f89749a1a94a3807f1d7624c7cf2cbb499f4d05e15abe972-merged.mount: Deactivated successfully.
Dec 03 02:24:55 compute-0 podman[462894]: 2025-12-03 02:24:55.082220902 +0000 UTC m=+0.368602691 container remove ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:24:55 compute-0 systemd[1]: libpod-conmon-ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c.scope: Deactivated successfully.
Dec 03 02:24:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 170 B/s wr, 3 op/s
Dec 03 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.406408797 +0000 UTC m=+0.081102031 container create ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.376623106 +0000 UTC m=+0.051316370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:24:55 compute-0 systemd[1]: Started libpod-conmon-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope.
Dec 03 02:24:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.584683152 +0000 UTC m=+0.259376466 container init ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.613930558 +0000 UTC m=+0.288623812 container start ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.619401963 +0000 UTC m=+0.294095227 container attach ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:24:56 compute-0 sshd-session[462914]: Invalid user user from 80.94.95.115 port 36526
Dec 03 02:24:56 compute-0 nova_compute[351485]: 2025-12-03 02:24:56.545 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:56 compute-0 ceph-mon[192821]: pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 170 B/s wr, 3 op/s
Dec 03 02:24:56 compute-0 sshd-session[462914]: Connection closed by invalid user user 80.94.95.115 port 36526 [preauth]
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]: {
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "osd_id": 2,
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "type": "bluestore"
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:     },
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "osd_id": 1,
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "type": "bluestore"
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:     },
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "osd_id": 0,
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:         "type": "bluestore"
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]:     }
Dec 03 02:24:56 compute-0 agitated_goldberg[462947]: }
Dec 03 02:24:56 compute-0 systemd[1]: libpod-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope: Deactivated successfully.
Dec 03 02:24:56 compute-0 podman[462933]: 2025-12-03 02:24:56.850222723 +0000 UTC m=+1.524915977 container died ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:24:56 compute-0 systemd[1]: libpod-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope: Consumed 1.221s CPU time.
Dec 03 02:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382-merged.mount: Deactivated successfully.
Dec 03 02:24:56 compute-0 podman[462933]: 2025-12-03 02:24:56.949934529 +0000 UTC m=+1.624627773 container remove ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 02:24:56 compute-0 systemd[1]: libpod-conmon-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope: Deactivated successfully.
Dec 03 02:24:56 compute-0 sudo[462832]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:24:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:24:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:24:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:24:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 062907cc-b64d-4ecf-a366-91930dcfb4d1 does not exist
Dec 03 02:24:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev df2ed028-2931-473c-9049-cdbf2e6dd2db does not exist
Dec 03 02:24:57 compute-0 sudo[462995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:24:57 compute-0 sudo[462995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:57 compute-0 sudo[462995]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:57 compute-0 sudo[463020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:24:57 compute-0 sudo[463020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:24:57 compute-0 sudo[463020]: pam_unix(sudo:session): session closed for user root
Dec 03 02:24:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 02:24:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:24:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:24:58 compute-0 ceph-mon[192821]: pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 02:24:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:24:58 compute-0 nova_compute[351485]: 2025-12-03 02:24:58.937 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:24:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 255 B/s wr, 4 op/s
Dec 03 02:24:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:24:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:24:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:24:59.658 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:24:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:24:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:24:59 compute-0 podman[158098]: time="2025-12-03T02:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:24:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:24:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8668 "" "Go-http-client/1.1"
Dec 03 02:24:59 compute-0 podman[463045]: 2025-12-03 02:24:59.90620986 +0000 UTC m=+0.152033665 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:25:00 compute-0 ceph-mon[192821]: pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 255 B/s wr, 4 op/s
Dec 03 02:25:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:25:01 compute-0 nova_compute[351485]: 2025-12-03 02:25:01.550 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:02 compute-0 ceph-mon[192821]: pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:25:02 compute-0 podman[463066]: 2025-12-03 02:25:02.867495491 +0000 UTC m=+0.099389028 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:25:02 compute-0 podman[463079]: 2025-12-03 02:25:02.868454028 +0000 UTC m=+0.084982651 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 03 02:25:02 compute-0 podman[463065]: 2025-12-03 02:25:02.874211381 +0000 UTC m=+0.124372704 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal)
Dec 03 02:25:02 compute-0 podman[463067]: 2025-12-03 02:25:02.877091132 +0000 UTC m=+0.109573315 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64)
Dec 03 02:25:02 compute-0 podman[463064]: 2025-12-03 02:25:02.905085803 +0000 UTC m=+0.154569817 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
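The three health_status events above carry the full edpm-managed container definition in their config_data label. A minimal sketch of how such a dict could be mapped onto a podman run invocation; the helper name config_to_podman_args is illustrative, not part of edpm_ansible:

```python
# Minimal sketch (not the edpm_ansible implementation): translate the
# config_data dict embedded in the health_status events above into a
# `podman run` argument vector. Keys mirror those visible in the log;
# everything else is an assumption.
def config_to_podman_args(name: str, cfg: dict) -> list[str]:
    args = ["podman", "run", "--name", name, "--detach"]
    if cfg.get("privileged"):            # True or 'true' in the log
        args.append("--privileged")
    if cfg.get("net") == "host":
        args.append("--net=host")
    if cfg.get("restart"):
        args.append(f"--restart={cfg['restart']}")
    for port in cfg.get("ports", []):    # e.g. '9105:9105'
        args += ["--publish", port]
    for vol in cfg.get("volumes", []):   # 'src:dst:opts' strings
        args += ["--volume", vol]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    args.append(cfg["image"])
    command = cfg.get("command", [])
    args += command if isinstance(command, list) else [command]
    return args
```

For the openstack_network_exporter entry this would yield flags such as --net=host, --publish 9105:9105, and one --volume per mount listed in the log.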
Dec 03 02:25:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
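The recurring pgmap lines from ceph-mgr (relayed by ceph-mon) summarize placement-group state, capacity, and client throughput. A hedged sketch for extracting the headline numbers from a line of this exact shape; other Ceph releases may format the summary differently:

```python
import re

# Matches only the pgmap shape seen in this log.
PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+ / \S+ \S+) avail"
)

line = ("pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, "
        "392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, "
        "8.6 KiB/s wr, 5 op/s")
m = PGMAP_RE.search(line)
if m:
    print(m.groupdict())
    # {'version': '2125', 'pgs': '321', 'states': '321 active+clean', ...}
```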
Dec 03 02:25:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:03 compute-0 nova_compute[351485]: 2025-12-03 02:25:03.941 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
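The ovsdbapp vlog entries record the OVSDB IDL thread waking from its poll loop whenever fd 24 (the connection to ovsdb-server) becomes readable; __log_wakeup is the poller noting which event woke it. A toy version of that wait, using the real ovs Python bindings but not ovsdbapp's actual loop:

```python
# Hedged sketch of the poll loop behind the "[POLLIN] on fd 24" lines.
import select
import ovs.poller

def wait_readable(fd: int, timeout_ms: int = 5000) -> None:
    p = ovs.poller.Poller()
    p.fd_wait(fd, select.POLLIN)  # wake when fd becomes readable
    p.timer_wait(timeout_ms)      # or after the timeout elapses
    p.block()                     # sleeps here; vlog reports the wakeup
```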
Dec 03 02:25:04 compute-0 ceph-mon[192821]: pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:25:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:25:06 compute-0 ceph-mon[192821]: pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:25:06 compute-0 nova_compute[351485]: 2025-12-03 02:25:06.553 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 8.4 KiB/s wr, 1 op/s
Dec 03 02:25:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:08 compute-0 ceph-mon[192821]: pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 8.4 KiB/s wr, 1 op/s
Dec 03 02:25:08 compute-0 nova_compute[351485]: 2025-12-03 02:25:08.944 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 03 02:25:10 compute-0 ceph-mon[192821]: pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 03 02:25:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.3 KiB/s wr, 0 op/s
Dec 03 02:25:11 compute-0 nova_compute[351485]: 2025-12-03 02:25:11.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:12 compute-0 ceph-mon[192821]: pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.3 KiB/s wr, 0 op/s
Dec 03 02:25:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Dec 03 02:25:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:13 compute-0 nova_compute[351485]: 2025-12-03 02:25:13.950 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:14 compute-0 ceph-mon[192821]: pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Dec 03 02:25:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
Dec 03 02:25:16 compute-0 nova_compute[351485]: 2025-12-03 02:25:16.560 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:16 compute-0 ceph-mon[192821]: pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
Dec 03 02:25:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:17 compute-0 nova_compute[351485]: 2025-12-03 02:25:17.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:18 compute-0 ceph-mon[192821]: pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:18 compute-0 nova_compute[351485]: 2025-12-03 02:25:18.955 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.513 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.514 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
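Because this source has more pollsters than worker threads, submissions queue inside the executor and the cycle time approaches the sum of the individual pollster runtimes. A self-contained illustration with a stand-in pollster function (fake_pollster is hypothetical, not ceilometer code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fake_pollster(name: str) -> str:
    time.sleep(0.1)               # stand-in for a libvirt/OVS query
    return name

# With max_workers=1, as the log reports for this source, the three
# tasks run one after another instead of in parallel.
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(fake_pollster, n)
               for n in ("memory.usage", "network.outgoing.packets",
                         "disk.device.capacity")]
    for f in futures:
        print(f.result())         # completes in submission order here
```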
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.526 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.531 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:25:19.532699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.571 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 43.55859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.611 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 42.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
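Each memory.usage sample above ultimately comes from libvirt's per-domain memory counters for the two discovered instances. A sketch under the assumption that usage is derived as (available - unused) KiB; ceilometer's real inspector is more involved than this:

```python
import libvirt

# Assumed derivation, not ceilometer's exact formula: report guest
# memory usage in MiB from the balloon statistics libvirt exposes.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-0000000f")  # name from the log
stats = dom.memoryStats()                     # dict of KiB counters
if "available" in stats and "unused" in stats:
    usage_mib = (stats["available"] - stats["unused"]) / 1024.0
    print(f"memory.usage volume: {usage_mib}")
conn.close()
```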
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:25:19.613157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.619 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.626 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.631 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.631 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:25:19.628167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:25:19.630865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.634 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.635 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:25:19.633676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.637 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:25:19.636675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.639 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:25:19.639320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.653 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.654 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.669 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.670 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.672 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:25:19.672231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.710 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.711 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.750 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.750 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.753 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.754 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:25:19.752607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3251057957 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.756 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 228292831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.756 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.757 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:25:19.755466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.759 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.759 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.759 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.760 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.760 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.761 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:25:19.759092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:25:19.762928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
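Throughout this cycle the polling thread (the "14" column) records a heartbeat, and a separate thread ("12") logs "Updated heartbeat for ..." a moment later. The sketch below reproduces only the shape of that hand-off with a stdlib queue; the queue, status dict, and function names are assumptions for illustration, not ceilometer internals.

    import datetime
    import queue
    import threading
    import time

    heartbeats = queue.Queue()
    status = {}

    def status_updater():
        # Drains heartbeat events and records the last-seen timestamp.
        while True:
            name, ts = heartbeats.get()
            status[name] = ts
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    threading.Thread(target=status_updater, daemon=True).start()

    def poll(name):
        # ...gather and publish samples for `name`, then record liveness...
        heartbeats.put((name, datetime.datetime.now(datetime.timezone.utc)))

    poll("power.state")
    time.sleep(0.1)  # give the status thread a moment before exiting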
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.764 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.766 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:25:19.765324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.767 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.767 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.768 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 72830976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.770 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.770 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73048064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:25:19.769600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.771 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 8629084086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.774 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10027508187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.774 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:25:19.773152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.777 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.777 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.778 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.778 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:25:19.776463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:25:19.779947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.781 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 171590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:25:19.782226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 334860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.783 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
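The cpu samples above are cumulative guest CPU time in nanoseconds (171590000000 ns ≈ 171.6 s), so turning them into a utilization figure takes two polls and the elapsed wall time. A worked example follows; the second sample and the vCPU count are hypothetical, only the first value comes from this log.

    def cpu_util(ns_prev, ns_now, wall_seconds, vcpus):
        """Percent of available CPU consumed between two cumulative samples."""
        return 100.0 * (ns_now - ns_prev) / (wall_seconds * 1e9 * vcpus)

    # If the next poll of instance 4fb8fc07... read 171,650,000,000 ns
    # 60 s later on a 1-vCPU guest (both assumptions):
    print(cpu_util(171590000000, 171650000000, 60, 1))  # -> 0.1 (%)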
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.783 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:25:19.784234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:25:19.785468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:25:19.786999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:25:19.789274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:25:19.790781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:25:19.791924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:25:20 compute-0 ceph-mon[192821]: pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:21 compute-0 nova_compute[351485]: 2025-12-03 02:25:21.563 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:22 compute-0 ceph-mon[192821]: pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:22 compute-0 podman[463168]: 2025-12-03 02:25:22.906213637 +0000 UTC m=+0.130852296 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:25:22 compute-0 podman[463166]: 2025-12-03 02:25:22.910438607 +0000 UTC m=+0.151732326 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:25:22 compute-0 podman[463167]: 2025-12-03 02:25:22.920892302 +0000 UTC m=+0.154830644 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
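The three health_status events above come from podman's healthcheck timer running each container's configured 'test' command. The same check can be driven by hand; a small sketch using the podman CLI, with the container name taken from this log (podman healthcheck run exits 0 for healthy, non-zero otherwise):

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")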
Dec 03 02:25:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:23 compute-0 nova_compute[351485]: 2025-12-03 02:25:23.958 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:24 compute-0 ceph-mon[192821]: pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
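The three lockutils lines above show the usual oslo.concurrency pattern: announce the acquire, log how long the caller waited, then log how long the lock was held. A stand-alone stdlib imitation of that accounting follows; the synchronized decorator here is hypothetical, nova's real one lives in oslo_concurrency.lockutils.

    import threading
    import time

    _locks = {}

    def synchronized(name):
        lock = _locks.setdefault(name, threading.Lock())
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    waited = time.monotonic() - t0
                    print(f'Lock "{name}" acquired by "{fn.__name__}" :: waited {waited:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        held = time.monotonic() - t1
                        print(f'Lock "{name}" "released" by "{fn.__name__}" :: held {held:.3f}s')
            return inner
        return wrap

    @synchronized("compute_resources")
    def clean_compute_node_cache():
        pass

    clean_compute_node_cache()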
Dec 03 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.622 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:25:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:25:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2037201336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
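The resource audit obtains Ceph capacity by shelling out to exactly the command logged above and parsing its JSON. A minimal reproduction, assuming the "stats"/"total_avail_bytes" keys that current ceph df --format=json emits:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)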
Dec 03 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.517 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.518 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.528 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.528 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.566 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:26 compute-0 ceph-mon[192821]: pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:25:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2037201336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.109 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.111 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3536MB free_disk=59.89719772338867GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.111 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.112 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.219 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.220 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.220 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.220 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
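The final view is arithmetically consistent with the two placement allocations logged at 02:25:27.219/.220 (each 1 VCPU, 128 MB, 1 GB) plus the 512 MB host reservation that appears in the inventory below. A quick check:

    instances = 2
    assert 512 + instances * 128 == 768   # used_ram=768MB (reserved + guests)
    assert instances * 1 == 2             # used_vcpus=2 and used_disk=2GB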
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.267 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.303 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.304 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
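Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio. Applying that to the inventory above, as a worked check rather than output taken from the log:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2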
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.322 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.350 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:25:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.448 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:25:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:25:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680197914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.922 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.933 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.957 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.958 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.959 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:25:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:25:28
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'images', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Dec 03 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:25:28 compute-0 ceph-mon[192821]: pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 02:25:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/680197914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:25:28 compute-0 nova_compute[351485]: 2025-12-03 02:25:28.959 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:28 compute-0 nova_compute[351485]: 2025-12-03 02:25:28.960 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:25:28 compute-0 nova_compute[351485]: 2025-12-03 02:25:28.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
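The rbd_support mgr module above is reloading its TrashPurgeSchedule and MirrorSnapshotSchedule handlers for the vms, volumes, backups and images pools. The same schedule data can be queried on demand with the rbd CLI (subcommand spelling per recent Ceph releases; wrapped in Python for consistency with the other sketches):

    import subprocess

    for pool in ('vms', 'volumes', 'backups', 'images'):
        subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls', '-p', pool],
                       check=False)
        subprocess.run(['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '-p', pool],
                       check=False)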
Dec 03 02:25:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:25:29 compute-0 podman[158098]: time="2025-12-03T02:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:25:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:25:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 03 02:25:29 compute-0 nova_compute[351485]: 2025-12-03 02:25:29.884 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:25:29 compute-0 nova_compute[351485]: 2025-12-03 02:25:29.885 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:25:29 compute-0 nova_compute[351485]: 2025-12-03 02:25:29.885 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:25:30 compute-0 ceph-mon[192821]: pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:25:30 compute-0 podman[463270]: 2025-12-03 02:25:30.878721915 +0000 UTC m=+0.132606957 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_managed=true)
Dec 03 02:25:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.436 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.457 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.457 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
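The instance_info_cache payload logged above is plain JSON. A minimal sketch of pulling the MAC and fixed IP back out of a structure shaped like it (trimmed to the fields used; values copied from the log line):

    import json

    network_info = json.loads('''[{"address": "fa:16:3e:3f:0c:ae",
        "network": {"subnets": [{"ips": [{"address": "10.100.1.46"}]}]}}]''')
    vif = network_info[0]
    print(vif['address'], vif['network']['subnets'][0]['ips'][0]['address'])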
Dec 03 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.458 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.570 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:32 compute-0 ceph-mon[192821]: pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:33 compute-0 nova_compute[351485]: 2025-12-03 02:25:33.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:33 compute-0 podman[463289]: 2025-12-03 02:25:33.873283806 +0000 UTC m=+0.103926756 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:25:33 compute-0 podman[463298]: 2025-12-03 02:25:33.882894517 +0000 UTC m=+0.100800338 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 03 02:25:33 compute-0 podman[463293]: 2025-12-03 02:25:33.885867091 +0000 UTC m=+0.102186057 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 03 02:25:33 compute-0 podman[463288]: 2025-12-03 02:25:33.90283675 +0000 UTC m=+0.145334125 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, release=1755695350)
Dec 03 02:25:33 compute-0 podman[463287]: 2025-12-03 02:25:33.908605013 +0000 UTC m=+0.158295061 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
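Each podman[...] event above is a periodic healthcheck reporting health_status=healthy for one managed container. The current status can be read back per container; the Go-template path below is the one used by current podman, while older releases exposed .State.Healthcheck instead:

    import subprocess

    for name in ('node_exporter', 'multipathd', 'kepler',
                 'openstack_network_exporter', 'ovn_controller'):
        subprocess.run(['podman', 'inspect', '--format',
                        '{{.State.Health.Status}}', name], check=False)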
Dec 03 02:25:33 compute-0 nova_compute[351485]: 2025-12-03 02:25:33.964 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:34 compute-0 ceph-mon[192821]: pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:35 compute-0 nova_compute[351485]: 2025-12-03 02:25:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.742453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735742590, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 992, "num_deletes": 251, "total_data_size": 1439534, "memory_usage": 1461728, "flush_reason": "Manual Compaction"}
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735757099, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1426161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43017, "largest_seqno": 44008, "table_properties": {"data_size": 1421230, "index_size": 2519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10518, "raw_average_key_size": 19, "raw_value_size": 1411385, "raw_average_value_size": 2638, "num_data_blocks": 113, "num_entries": 535, "num_filter_entries": 535, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728638, "oldest_key_time": 1764728638, "file_creation_time": 1764728735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 14752 microseconds, and 7914 cpu microseconds.
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.757210) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1426161 bytes OK
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.757238) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.760289) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.760312) EVENT_LOG_v1 {"time_micros": 1764728735760305, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.760334) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1434835, prev total WAL file size 1434835, number of live WAL files 2.
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.761777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1392KB)], [101(9626KB)]
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735761861, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11283244, "oldest_snapshot_seqno": -1}
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5933 keys, 9589538 bytes, temperature: kUnknown
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735846081, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9589538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9549315, "index_size": 24305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 154541, "raw_average_key_size": 26, "raw_value_size": 9441454, "raw_average_value_size": 1591, "num_data_blocks": 969, "num_entries": 5933, "num_filter_entries": 5933, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.846298) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9589538 bytes
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.848812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(14.6) write-amplify(6.7) OK, records in: 6447, records dropped: 514 output_compression: NoCompression
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.848831) EVENT_LOG_v1 {"time_micros": 1764728735848822, "job": 60, "event": "compaction_finished", "compaction_time_micros": 84273, "compaction_time_cpu_micros": 35775, "output_level": 6, "num_output_files": 1, "total_output_size": 9589538, "num_input_records": 6447, "num_output_records": 5933, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
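The amplification figures in the compaction summary above can be reproduced from the EVENT_LOG numbers: job 60 read 11,283,244 bytes in total (the 1,426,161-byte L0 file 103 plus L6 file 101) and wrote 9,589,538 bytes:

    l0_in, total_in, out = 1426161, 11283244, 9589538
    print(round(out / l0_in, 1))               # 6.7  -> write-amplify(6.7)
    print(round((total_in + out) / l0_in, 1))  # 14.6 -> read-write-amplify(14.6)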
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735849222, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735851023, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.761496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:36 compute-0 nova_compute[351485]: 2025-12-03 02:25:36.574 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:36 compute-0 ceph-mon[192821]: pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:38 compute-0 ceph-mon[192821]: pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518418921338803 of space, bias 1.0, pg target 0.45552567640164093 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
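Each pg target above is usage_ratio * bias * a cluster-wide PG budget. Back-solving from the 'vms' line gives a budget of 300, consistent with mon_target_pg_per_osd (default 100) times three OSDs; the OSD count is an inference from this 60 GiB cluster, not something stated in the log:

    budget = 100 * 3   # mon_target_pg_per_osd x assumed OSD count
    print(0.001518418921338803 * 1.0 * budget)   # 0.4555256764016409 ('vms')
    print(5.087256625643029e-07 * 4.0 * budget)  # ~0.00061047 ('cephfs.cephfs.meta')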
Dec 03 02:25:38 compute-0 nova_compute[351485]: 2025-12-03 02:25:38.969 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:39 compute-0 nova_compute[351485]: 2025-12-03 02:25:39.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:25:39 compute-0 nova_compute[351485]: 2025-12-03 02:25:39.581 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
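_reclaim_queued_deletes returns immediately here because reclaim_instance_interval (nova.conf [DEFAULT], in seconds) is at its default of 0, which disables reclaiming soft-deleted instances on this host. A sketch of the guard, not nova's literal code:

    def reclaim_queued_deletes_sketch(conf):
        # With the default of 0 this logs "skipping..." (as above) and returns;
        # a positive value would hard-delete SOFT_DELETED instances older
        # than the interval.
        if conf.reclaim_instance_interval <= 0:
            return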
Dec 03 02:25:40 compute-0 ceph-mon[192821]: pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:25:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec 03 02:25:41 compute-0 nova_compute[351485]: 2025-12-03 02:25:41.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:41 compute-0 sshd-session[463384]: Invalid user update from 154.113.10.113 port 44828
Dec 03 02:25:41 compute-0 ceph-mon[192821]: pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec 03 02:25:41 compute-0 sshd-session[463384]: Received disconnect from 154.113.10.113 port 44828:11: Bye Bye [preauth]
Dec 03 02:25:41 compute-0 sshd-session[463384]: Disconnected from invalid user update 154.113.10.113 port 44828 [preauth]
Dec 03 02:25:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:25:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:43 compute-0 nova_compute[351485]: 2025-12-03 02:25:43.975 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:44 compute-0 ceph-mon[192821]: pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:25:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:46 compute-0 ceph-mon[192821]: pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:46 compute-0 nova_compute[351485]: 2025-12-03 02:25:46.582 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:25:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/779942832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:25:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:25:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/779942832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:25:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/779942832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:25:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/779942832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:25:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.449848) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748449901, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 347, "num_deletes": 250, "total_data_size": 183587, "memory_usage": 189536, "flush_reason": "Manual Compaction"}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748456294, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 181513, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44009, "largest_seqno": 44355, "table_properties": {"data_size": 179351, "index_size": 326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5904, "raw_average_key_size": 20, "raw_value_size": 175107, "raw_average_value_size": 599, "num_data_blocks": 15, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728736, "oldest_key_time": 1764728736, "file_creation_time": 1764728748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 6792 microseconds, and 2853 cpu microseconds.
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.456636) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 181513 bytes OK
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.456668) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.461342) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.461370) EVENT_LOG_v1 {"time_micros": 1764728748461362, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.461393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 181244, prev total WAL file size 181244, number of live WAL files 2.
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.462654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373534' seq:72057594037927935, type:22 .. '6D6772737461740032303035' seq:0, type:0; will stop at (end)
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(177KB)], [104(9364KB)]
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748462699, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9771051, "oldest_snapshot_seqno": -1}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5718 keys, 6482067 bytes, temperature: kUnknown
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748537393, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6482067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6448055, "index_size": 18606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 150260, "raw_average_key_size": 26, "raw_value_size": 6348636, "raw_average_value_size": 1110, "num_data_blocks": 733, "num_entries": 5718, "num_filter_entries": 5718, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.537895) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6482067 bytes
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.540883) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.6 rd, 86.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(89.5) write-amplify(35.7) OK, records in: 6225, records dropped: 507 output_compression: NoCompression
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.540917) EVENT_LOG_v1 {"time_micros": 1764728748540902, "job": 62, "event": "compaction_finished", "compaction_time_micros": 74797, "compaction_time_cpu_micros": 43343, "output_level": 6, "num_output_files": 1, "total_output_size": 6482067, "num_input_records": 6225, "num_output_records": 5718, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748541171, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748545657, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.462337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
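
The 02:25:48 rocksdb lines above trace one manual compaction cycle of the monitor's store.db: the memtable is flushed to L0 table #106 (181513 bytes), job 62 compacts 1@0 + 1@6 into table #107 (6482067 bytes), and both inputs are deleted, with the job itself reporting write-amplify(35.7), roughly the output size over the small L0 input. The EVENT_LOG_v1 payloads are plain JSON, so a comparable rewrite ratio can be recomputed from a saved journal; a minimal sketch ("journal.txt" is again a placeholder):

    import json
    import re

    # EVENT_LOG_v1 payloads are JSON objects embedded in the mon's rocksdb
    # log lines, one object per line.
    event_re = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    events = []
    with open("journal.txt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = event_re.search(line)
            if m:
                events.append(json.loads(m.group(1)))

    flushed = sum(e.get("total_data_size", 0)
                  for e in events if e.get("event") == "flush_started")
    compacted = sum(e.get("total_output_size", 0)
                    for e in events if e.get("event") == "compaction_finished")
    if flushed:
        print(f"rewrite ratio: {compacted / flushed:.1f}x "
              f"({flushed} B flushed, {compacted} B written by compaction)")

On the single cycle above this gives about 35x, matching the write-amplify figure rocksdb prints; high values are expected here because the mon store is tiny relative to the fixed L6 file it keeps rewriting.
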
Dec 03 02:25:48 compute-0 nova_compute[351485]: 2025-12-03 02:25:48.977 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:50 compute-0 ceph-mon[192821]: pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:51 compute-0 nova_compute[351485]: 2025-12-03 02:25:51.584 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:52 compute-0 ceph-mon[192821]: pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:53 compute-0 podman[463389]: 2025-12-03 02:25:53.868722281 +0000 UTC m=+0.110924994 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:25:53 compute-0 podman[463388]: 2025-12-03 02:25:53.884475916 +0000 UTC m=+0.130200109 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 03 02:25:53 compute-0 podman[463387]: 2025-12-03 02:25:53.910194172 +0000 UTC m=+0.156560453 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
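
The three podman health_status events above are the edpm-managed healthchecks reporting health_status=healthy with a failing streak of 0 for podman_exporter, ceilometer_agent_compute, and ovn_metadata_agent. The same state can be read back on demand; a sketch using podman inspect, assuming the podman CLI is on PATH and the container names from the log exist:

    import json
    import subprocess

    # Container names taken from the health_status events above.
    for name in ("podman_exporter", "ceilometer_agent_compute",
                 "ovn_metadata_agent"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True)
        if out.returncode != 0:
            print(f"{name}: inspect failed: {out.stderr.strip()}")
            continue
        health = json.loads(out.stdout) or {}  # null if no healthcheck defined
        print(f"{name}: {health.get('Status')} "
              f"(failing streak {health.get('FailingStreak')})")
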
Dec 03 02:25:53 compute-0 nova_compute[351485]: 2025-12-03 02:25:53.981 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:54 compute-0 ceph-mon[192821]: pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:56 compute-0 ceph-mon[192821]: pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:25:56 compute-0 nova_compute[351485]: 2025-12-03 02:25:56.587 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:25:57 compute-0 sudo[463449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:25:57 compute-0 sudo[463449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:25:57 compute-0 sudo[463449]: pam_unix(sudo:session): session closed for user root
Dec 03 02:25:57 compute-0 sudo[463474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:25:57 compute-0 sudo[463474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:25:57 compute-0 sudo[463474]: pam_unix(sudo:session): session closed for user root
Dec 03 02:25:57 compute-0 sudo[463499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:25:57 compute-0 sudo[463499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:25:57 compute-0 sudo[463499]: pam_unix(sudo:session): session closed for user root
Dec 03 02:25:57 compute-0 sudo[463524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:25:57 compute-0 sudo[463524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:25:58 compute-0 sudo[463524]: pam_unix(sudo:session): session closed for user root
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:25:58 compute-0 ceph-mon[192821]: pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:25:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 821a63a6-4efd-4fd7-b072-7a905e991501 does not exist
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 491b2b96-98fb-4527-bacd-9f6256d7ff92 does not exist
Dec 03 02:25:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c14a9550-023e-4129-af02-e856ce723cb7 does not exist
Dec 03 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:25:58 compute-0 sudo[463580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:25:58 compute-0 sudo[463580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:25:58 compute-0 sudo[463580]: pam_unix(sudo:session): session closed for user root
Dec 03 02:25:58 compute-0 sudo[463605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:25:58 compute-0 sudo[463605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:25:58 compute-0 sudo[463605]: pam_unix(sudo:session): session closed for user root
Dec 03 02:25:58 compute-0 nova_compute[351485]: 2025-12-03 02:25:58.984 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:25:59 compute-0 sudo[463630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:25:59 compute-0 sudo[463630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:25:59 compute-0 sudo[463630]: pam_unix(sudo:session): session closed for user root
Dec 03 02:25:59 compute-0 sudo[463655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:25:59 compute-0 sudo[463655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
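
The sudo COMMAND above is cephadm (running as ceph-admin) invoking ceph-volume to batch-prepare three pre-created logical volumes for the default_drive_group spec, with --no-systemd because cephadm manages the units itself; the actual run happens inside the pinned quay.io/ceph/ceph container. A dry-run of the same batch call, sketched as if ceph-volume were run directly on the host rather than through the cephadm wrapper:

    import subprocess

    # --report makes ceph-volume print what it would create without touching
    # the LVs; device paths are the ones passed in the log line above.
    cmd = [
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--report", "--format", "json",
    ]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
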
Dec 03 02:25:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:25:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:25:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:25:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:25:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:25:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:25:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:25:59 compute-0 podman[158098]: time="2025-12-03T02:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.764284141 +0000 UTC m=+0.088297805 container create 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:25:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:25:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 03 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.726123763 +0000 UTC m=+0.050137497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:25:59 compute-0 systemd[1]: Started libpod-conmon-132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20.scope.
Dec 03 02:25:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.936052712 +0000 UTC m=+0.260066406 container init 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.95299545 +0000 UTC m=+0.277009124 container start 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.958778504 +0000 UTC m=+0.282792268 container attach 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 02:25:59 compute-0 intelligent_liskov[463733]: 167 167
Dec 03 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.969870767 +0000 UTC m=+0.293884461 container died 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 03 02:25:59 compute-0 systemd[1]: libpod-132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20.scope: Deactivated successfully.
Dec 03 02:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-935fe44948038371fd171d1482a698dff8d9d60f17c28b04933cb3663125405d-merged.mount: Deactivated successfully.
Dec 03 02:26:00 compute-0 podman[463719]: 2025-12-03 02:26:00.103220323 +0000 UTC m=+0.427233987 container remove 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 02:26:00 compute-0 systemd[1]: libpod-conmon-132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20.scope: Deactivated successfully.
Dec 03 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.40374089 +0000 UTC m=+0.084438925 container create 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 03 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.35982992 +0000 UTC m=+0.040527995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:26:00 compute-0 systemd[1]: Started libpod-conmon-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope.
Dec 03 02:26:00 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.578836905 +0000 UTC m=+0.259534970 container init 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.605161389 +0000 UTC m=+0.285859444 container start 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:26:00 compute-0 ceph-mon[192821]: pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.619797712 +0000 UTC m=+0.300495767 container attach 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:26:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
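
The openstack_network_exporter errors above all reduce to missing control sockets: no *.ctl file for ovsdb-server in the directory it probes, no ovn-northd PID (ovn-northd typically runs on controllers, not compute nodes), and no userspace datapath for the PMD queries. A quick existence check for the sockets it is looking for, assuming the default OVS run directory /var/run/openvswitch:

    from pathlib import Path

    # Directory is the conventional OVS rundir; adjust if the deployment
    # relocates it (the exporter's errors suggest it found nothing here).
    rundir = Path("/var/run/openvswitch")
    ctl_sockets = sorted(rundir.glob("*.ctl")) if rundir.is_dir() else []
    if ctl_sockets:
        for sock in ctl_sockets:
            print("control socket:", sock)
    else:
        print(f"no *.ctl sockets under {rundir}")
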
Dec 03 02:26:01 compute-0 nova_compute[351485]: 2025-12-03 02:26:01.591 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:01 compute-0 podman[463798]: 2025-12-03 02:26:01.861861829 +0000 UTC m=+0.109232015 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:26:01 compute-0 silly_jepsen[463777]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:26:01 compute-0 silly_jepsen[463777]: --> relative data size: 1.0
Dec 03 02:26:01 compute-0 silly_jepsen[463777]: --> All data devices are unavailable
Dec 03 02:26:01 compute-0 systemd[1]: libpod-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope: Deactivated successfully.
Dec 03 02:26:01 compute-0 systemd[1]: libpod-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope: Consumed 1.216s CPU time.
Dec 03 02:26:01 compute-0 conmon[463777]: conmon 5ea28a27c4e9e32f97e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope/container/memory.events
Dec 03 02:26:01 compute-0 podman[463759]: 2025-12-03 02:26:01.918772517 +0000 UTC m=+1.599470572 container died 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790-merged.mount: Deactivated successfully.
Dec 03 02:26:02 compute-0 podman[463759]: 2025-12-03 02:26:02.018324398 +0000 UTC m=+1.699022433 container remove 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:26:02 compute-0 systemd[1]: libpod-conmon-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope: Deactivated successfully.
Dec 03 02:26:02 compute-0 sudo[463655]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:02 compute-0 sudo[463838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:26:02 compute-0 sudo[463838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:02 compute-0 sudo[463838]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:02 compute-0 sudo[463863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:26:02 compute-0 sudo[463863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:02 compute-0 sudo[463863]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:02 compute-0 sudo[463888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:26:02 compute-0 sudo[463888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:02 compute-0 sudo[463888]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:02 compute-0 sudo[463913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:26:02 compute-0 sudo[463913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
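
After the batch run above ended with "All data devices are unavailable" (the three LVs were rejected as new data devices, commonly because they already carry OSD data), cephadm falls back to the "ceph-volume ... lvm list --format json" inventory seen in this sudo line to discover what exists. Its JSON output maps OSD ids to their logical volumes; a sketch parsing it, again assuming ceph-volume is runnable directly on the host rather than via the cephadm container wrapper:

    import json
    import subprocess

    # check=True raises if ceph-volume itself fails.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True)
    report = json.loads(out.stdout)

    # The report maps OSD id -> list of device entries.
    for osd_id, devices in sorted(report.items()):
        for dev in devices:
            print(f"osd.{osd_id}: {dev.get('lv_path')} "
                  f"(type {dev.get('type')})")
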
Dec 03 02:26:02 compute-0 ceph-mon[192821]: pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.103576938 +0000 UTC m=+0.094093678 container create 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.07108432 +0000 UTC m=+0.061601100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:26:03 compute-0 systemd[1]: Started libpod-conmon-8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873.scope.
Dec 03 02:26:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.267678503 +0000 UTC m=+0.258195253 container init 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.281970106 +0000 UTC m=+0.272486846 container start 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.287345178 +0000 UTC m=+0.277861918 container attach 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:26:03 compute-0 quizzical_khorana[463994]: 167 167
Dec 03 02:26:03 compute-0 systemd[1]: libpod-8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873.scope: Deactivated successfully.
Dec 03 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.296389243 +0000 UTC m=+0.286905973 container died 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f413ddd69871522dc3d0db570d876e7b81f10851ce4054ef035d6839a6fca279-merged.mount: Deactivated successfully.
Dec 03 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.366859894 +0000 UTC m=+0.357376664 container remove 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:26:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:03 compute-0 systemd[1]: libpod-conmon-8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873.scope: Deactivated successfully.
Dec 03 02:26:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.600150282 +0000 UTC m=+0.074249408 container create 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.571641087 +0000 UTC m=+0.045740203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:26:03 compute-0 systemd[1]: Started libpod-conmon-4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee.scope.
Dec 03 02:26:03 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.792061832 +0000 UTC m=+0.266160948 container init 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.805054459 +0000 UTC m=+0.279153545 container start 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.809170755 +0000 UTC m=+0.283269871 container attach 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec 03 02:26:03 compute-0 nova_compute[351485]: 2025-12-03 02:26:03.987 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:04 compute-0 thirsty_noether[464033]: {
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:     "0": [
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:         {
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "devices": [
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "/dev/loop3"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             ],
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_name": "ceph_lv0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_size": "21470642176",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "name": "ceph_lv0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "tags": {
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cluster_name": "ceph",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.crush_device_class": "",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.encrypted": "0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osd_id": "0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.type": "block",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.vdo": "0"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             },
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "type": "block",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "vg_name": "ceph_vg0"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:         }
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:     ],
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:     "1": [
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:         {
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "devices": [
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "/dev/loop4"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             ],
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_name": "ceph_lv1",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_size": "21470642176",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "name": "ceph_lv1",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "tags": {
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cluster_name": "ceph",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.crush_device_class": "",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.encrypted": "0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osd_id": "1",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.type": "block",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.vdo": "0"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             },
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "type": "block",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "vg_name": "ceph_vg1"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:         }
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:     ],
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:     "2": [
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:         {
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "devices": [
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "/dev/loop5"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             ],
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_name": "ceph_lv2",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_size": "21470642176",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "name": "ceph_lv2",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "tags": {
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.cluster_name": "ceph",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.crush_device_class": "",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.encrypted": "0",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osd_id": "2",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.type": "block",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:                 "ceph.vdo": "0"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             },
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "type": "block",
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:             "vg_name": "ceph_vg2"
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:         }
Dec 03 02:26:04 compute-0 thirsty_noether[464033]:     ]
Dec 03 02:26:04 compute-0 thirsty_noether[464033]: }
Dec 03 02:26:04 compute-0 systemd[1]: libpod-4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee.scope: Deactivated successfully.
Dec 03 02:26:04 compute-0 podman[464017]: 2025-12-03 02:26:04.632883729 +0000 UTC m=+1.106982835 container died 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 03 02:26:04 compute-0 ceph-mon[192821]: pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95-merged.mount: Deactivated successfully.
Dec 03 02:26:04 compute-0 podman[464017]: 2025-12-03 02:26:04.721676676 +0000 UTC m=+1.195775762 container remove 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 02:26:04 compute-0 systemd[1]: libpod-conmon-4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee.scope: Deactivated successfully.
Dec 03 02:26:04 compute-0 sudo[463913]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:04 compute-0 podman[464053]: 2025-12-03 02:26:04.803332411 +0000 UTC m=+0.115399589 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 03 02:26:04 compute-0 podman[464050]: 2025-12-03 02:26:04.812368717 +0000 UTC m=+0.119749752 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 02:26:04 compute-0 podman[464045]: 2025-12-03 02:26:04.837153507 +0000 UTC m=+0.143735900 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 02:26:04 compute-0 podman[464046]: 2025-12-03 02:26:04.84791339 +0000 UTC m=+0.139219521 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:26:04 compute-0 sudo[464110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:26:04 compute-0 sudo[464110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:04 compute-0 sudo[464110]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:04 compute-0 podman[464043]: 2025-12-03 02:26:04.871465306 +0000 UTC m=+0.183344718 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:26:04 compute-0 sudo[464180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:26:04 compute-0 sudo[464180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:04 compute-0 sudo[464180]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:05 compute-0 sudo[464205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:26:05 compute-0 sudo[464205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:05 compute-0 sudo[464205]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:05 compute-0 sudo[464230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:26:05 compute-0 sudo[464230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:05 compute-0 sshd-session[463758]: Invalid user kapsch from 45.78.219.140 port 47910
Dec 03 02:26:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.842591212 +0000 UTC m=+0.120613387 container create e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.772635116 +0000 UTC m=+0.050657331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:26:05 compute-0 systemd[1]: Started libpod-conmon-e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1.scope.
Dec 03 02:26:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.98592276 +0000 UTC m=+0.263944945 container init e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.995498801 +0000 UTC m=+0.273520936 container start e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.999086422 +0000 UTC m=+0.277108557 container attach e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 02:26:06 compute-0 sweet_beaver[464304]: 167 167
Dec 03 02:26:06 compute-0 systemd[1]: libpod-e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1.scope: Deactivated successfully.
Dec 03 02:26:06 compute-0 podman[464289]: 2025-12-03 02:26:06.005780841 +0000 UTC m=+0.283802976 container died e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5b560a0cc2e69da758a20454ffc9e76fc0488000393b32db40884ff4676a606-merged.mount: Deactivated successfully.
Dec 03 02:26:06 compute-0 podman[464289]: 2025-12-03 02:26:06.051844512 +0000 UTC m=+0.329866647 container remove e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 02:26:06 compute-0 systemd[1]: libpod-conmon-e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1.scope: Deactivated successfully.
Dec 03 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.336775019 +0000 UTC m=+0.088153221 container create b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.312287707 +0000 UTC m=+0.063665919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:26:06 compute-0 systemd[1]: Started libpod-conmon-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope.
Dec 03 02:26:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.516264348 +0000 UTC m=+0.267642580 container init b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.542827558 +0000 UTC m=+0.294205800 container start b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec 03 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.550085103 +0000 UTC m=+0.301463335 container attach b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:26:06 compute-0 nova_compute[351485]: 2025-12-03 02:26:06.595 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:06 compute-0 ceph-mon[192821]: pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:06 compute-0 sshd-session[463758]: Received disconnect from 45.78.219.140 port 47910:11: Bye Bye [preauth]
Dec 03 02:26:06 compute-0 sshd-session[463758]: Disconnected from invalid user kapsch 45.78.219.140 port 47910 [preauth]
Dec 03 02:26:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]: {
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "osd_id": 2,
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "type": "bluestore"
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:     },
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "osd_id": 1,
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "type": "bluestore"
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:     },
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "osd_id": 0,
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:         "type": "bluestore"
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]:     }
Dec 03 02:26:07 compute-0 distracted_khayyam[464344]: }
Dec 03 02:26:07 compute-0 systemd[1]: libpod-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope: Deactivated successfully.
Dec 03 02:26:07 compute-0 podman[464327]: 2025-12-03 02:26:07.719281504 +0000 UTC m=+1.470659726 container died b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:26:07 compute-0 systemd[1]: libpod-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope: Consumed 1.184s CPU time.
Dec 03 02:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc-merged.mount: Deactivated successfully.
Dec 03 02:26:07 compute-0 podman[464327]: 2025-12-03 02:26:07.791453472 +0000 UTC m=+1.542831664 container remove b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:26:07 compute-0 sudo[464230]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:26:07 compute-0 systemd[1]: libpod-conmon-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope: Deactivated successfully.
Dec 03 02:26:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:26:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:26:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:26:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 95ee77f6-f4bb-47c2-9973-3651272121a0 does not exist
Dec 03 02:26:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8db5fe37-d429-4410-bcd9-8c03f2ba9e87 does not exist
Dec 03 02:26:07 compute-0 sudo[464389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:26:07 compute-0 sudo[464389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:07 compute-0 sudo[464389]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:08 compute-0 sudo[464414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:26:08 compute-0 sudo[464414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:26:08 compute-0 sudo[464414]: pam_unix(sudo:session): session closed for user root
Dec 03 02:26:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:08 compute-0 ceph-mon[192821]: pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:26:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:26:08 compute-0 nova_compute[351485]: 2025-12-03 02:26:08.989 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:10 compute-0 ceph-mon[192821]: pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:11 compute-0 nova_compute[351485]: 2025-12-03 02:26:11.599 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:12 compute-0 ceph-mon[192821]: pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:13 compute-0 nova_compute[351485]: 2025-12-03 02:26:13.993 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:14 compute-0 ceph-mon[192821]: pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:16 compute-0 nova_compute[351485]: 2025-12-03 02:26:16.603 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:16 compute-0 ceph-mon[192821]: pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:17 compute-0 nova_compute[351485]: 2025-12-03 02:26:17.583 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:18 compute-0 ceph-mon[192821]: pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:18 compute-0 nova_compute[351485]: 2025-12-03 02:26:18.998 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:20 compute-0 ceph-mon[192821]: pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:21 compute-0 nova_compute[351485]: 2025-12-03 02:26:21.607 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:22 compute-0 ceph-mon[192821]: pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:23 compute-0 ceph-mon[192821]: pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:24 compute-0 nova_compute[351485]: 2025-12-03 02:26:24.001 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:24 compute-0 podman[464440]: 2025-12-03 02:26:24.898130253 +0000 UTC m=+0.133999835 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:26:24 compute-0 podman[464442]: 2025-12-03 02:26:24.898826403 +0000 UTC m=+0.130344592 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:26:24 compute-0 podman[464441]: 2025-12-03 02:26:24.903365991 +0000 UTC m=+0.134030426 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 03 02:26:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.620 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.622 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:26:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:26:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295606808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.244 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
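The resource audit shells out to exactly the command logged above (`ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`) to size the RBD-backed disk pool. A sketch reproducing that probe and summarizing capacity; the JSON field names (`stats.total_bytes`, `pools[].stats.max_avail`) follow recent Ceph releases and are an assumption, not something shown in the log.

```python
#!/usr/bin/env python3
"""Re-run the capacity probe from the audit (sketch)."""
import json
import subprocess

CMD = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]

def cluster_df() -> dict:
    return json.loads(
        subprocess.run(CMD, check=True, capture_output=True, text=True).stdout)

if __name__ == "__main__":
    df = cluster_df()
    gib = 1024 ** 3
    s = df["stats"]
    print(f"total {s['total_bytes'] / gib:.1f} GiB, "
          f"avail {s['total_avail_bytes'] / gib:.1f} GiB")
    for pool in df["pools"]:
        print(f"  {pool['name']}: max_avail "
              f"{pool['stats']['max_avail'] / gib:.2f} GiB")
```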
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.357 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.358 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.365 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.365 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:26:26 compute-0 ceph-mon[192821]: pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3295606808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.610 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.908 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.909 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3513MB free_disk=59.897193908691406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.910 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.910 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.011 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.012 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.012 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.013 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
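The "Final resource view" numbers above are consistent with the two 128 MB / 1 GB / 1 vCPU allocations logged a few lines earlier plus the 512 MB host RAM reservation reported in the placement inventory below. A worked check of that arithmetic (the 512 MB figure is taken from the MEMORY_MB.reserved value in the next inventory line):

```python
# Check the "Final resource view": used_ram counts the host reservation,
# used_disk and used_vcpus count only the two instance allocations.
instances = [  # the two allocations logged by the resource tracker
    {"MEMORY_MB": 128, "DISK_GB": 1, "VCPU": 1},
    {"MEMORY_MB": 128, "DISK_GB": 1, "VCPU": 1},
]
reserved_host_memory_mb = 512  # MEMORY_MB.reserved in the inventory below

used_ram = reserved_host_memory_mb + sum(i["MEMORY_MB"] for i in instances)
used_disk = sum(i["DISK_GB"] for i in instances)
used_vcpus = sum(i["VCPU"] for i in instances)
assert (used_ram, used_disk, used_vcpus) == (768, 2, 2)  # 768MB / 2GB / 2
```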
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.081 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:26:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:26:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175832973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.636 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.655 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
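Placement turns the inventory above into schedulable capacity with the standard rule capacity = (total - reserved) * allocation_ratio. A small sketch applying that rule to the logged values, so the 4.0 vCPU overcommit and 0.9 disk undercommit become concrete numbers:

```python
# Schedulable capacity as placement derives it from the logged inventory.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```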
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.658 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.660 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:26:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:26:28 compute-0 ceph-mon[192821]: pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1175832973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:26:28
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms', 'volumes', 'images', 'default.rgw.meta', 'backups', '.mgr']
Dec 03 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.004 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:26:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.661 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.662 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.662 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:26:29 compute-0 podman[158098]: time="2025-12-03T02:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
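The two access-log lines above are the podman exporter polling the libpod REST API over the service socket (`/run/podman/podman.sock`, per the exporter's CONTAINER_HOST in its config_data). A stdlib-only sketch of the same `GET /v4.9.3/libpod/containers/json` query, wiring HTTPConnection to an AF_UNIX socket by hand; the API version segment is copied from the log and may differ on other podman versions.

```python
#!/usr/bin/env python3
"""List containers over podman's unix socket, mirroring the logged GET."""
import http.client
import json
import socket

SOCKET_PATH = "/run/podman/podman.sock"  # CONTAINER_HOST in the log

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")  # Host header only; not used to connect
        self.unix_path = path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

if __name__ == "__main__":
    conn = UnixHTTPConnection(SOCKET_PATH)
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])
```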
Dec 03 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.148 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.150 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.151 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.152 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:26:30 compute-0 ceph-mon[192821]: pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 03 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:26:31 compute-0 nova_compute[351485]: 2025-12-03 02:26:31.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:32 compute-0 ceph-mon[192821]: pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.599 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
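The cache update above logs the instance's network_info as a JSON list of VIFs. A minimal walk over that structure, using a literal trimmed down to the fields visible in the log line (port UUID, MAC, the fixed IP under network.subnets[].ips):

```python
#!/usr/bin/env python3
"""Extract fixed IPs from an instance_info_cache payload (trimmed literal)."""
network_info = [{
    "id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a",
    "address": "fa:16:3e:dd:ed:eb",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/16",
        "ips": [{"address": "10.100.0.239", "type": "fixed"}],
    }]},
    "type": "ovs",
    "devname": "tapf36a9f58-d7",
}]

for vif in network_info:
    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(f"{vif['devname']} ({vif['address']}): {', '.join(fixed)}")
```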
Dec 03 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.620 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.621 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.622 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:32 compute-0 podman[464541]: 2025-12-03 02:26:32.891384566 +0000 UTC m=+0.137708940 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3)
Dec 03 02:26:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:33 compute-0 nova_compute[351485]: 2025-12-03 02:26:33.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:33 compute-0 nova_compute[351485]: 2025-12-03 02:26:33.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:33 compute-0 nova_compute[351485]: 2025-12-03 02:26:33.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:34 compute-0 nova_compute[351485]: 2025-12-03 02:26:34.010 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:34 compute-0 ceph-mon[192821]: pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:35 compute-0 nova_compute[351485]: 2025-12-03 02:26:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:35 compute-0 podman[464564]: 2025-12-03 02:26:35.895087076 +0000 UTC m=+0.105162251 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:26:35 compute-0 podman[464569]: 2025-12-03 02:26:35.895326752 +0000 UTC m=+0.110959534 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 03 02:26:35 compute-0 podman[464562]: 2025-12-03 02:26:35.911971172 +0000 UTC m=+0.142120064 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Dec 03 02:26:35 compute-0 podman[464563]: 2025-12-03 02:26:35.916399547 +0000 UTC m=+0.134896230 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:26:35 compute-0 podman[464561]: 2025-12-03 02:26:35.958466216 +0000 UTC m=+0.193716302 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:26:36 compute-0 ceph-mon[192821]: pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:36 compute-0 nova_compute[351485]: 2025-12-03 02:26:36.618 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:38 compute-0 ceph-mon[192821]: pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518418921338803 of space, bias 1.0, pg target 0.45552567640164093 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
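Each pg_autoscaler pair above logs a pool's share of raw capacity and the resulting pg target before quantization. The targets reproduce as ratio * bias * 300; the 300 budget is inferred from the logged numbers (e.g. 0.001518418921338803 * 300 = 0.45552567640164093) and would match the default mon_target_pg_per_osd=100 on three OSDs, so treat it as an assumption rather than a logged fact.

```python
# Reproduce the autoscaler's "pg target" values from the lines above.
PG_BUDGET = 300  # inferred, not logged: ~ mon_target_pg_per_osd * num OSDs

pools = {  # (capacity ratio, bias) copied from the log lines above
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.001518418921338803, 1.0),
    "images":             (0.00125203744627857, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}
for name, (ratio, bias) in pools.items():
    print(f"{name}: pg target {ratio * bias * PG_BUDGET:.10g}")
# matches the logged targets, e.g. vms -> 0.4555256764
```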
Dec 03 02:26:39 compute-0 nova_compute[351485]: 2025-12-03 02:26:39.014 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:39 compute-0 nova_compute[351485]: 2025-12-03 02:26:39.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:26:39 compute-0 nova_compute[351485]: 2025-12-03 02:26:39.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:26:40 compute-0 ceph-mon[192821]: pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:41 compute-0 nova_compute[351485]: 2025-12-03 02:26:41.622 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:42 compute-0 ceph-mon[192821]: pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:44 compute-0 nova_compute[351485]: 2025-12-03 02:26:44.019 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:44 compute-0 ceph-mon[192821]: pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:46 compute-0 nova_compute[351485]: 2025-12-03 02:26:46.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:46 compute-0 ceph-mon[192821]: pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:26:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3404991307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:26:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:26:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3404991307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:26:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3404991307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:26:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3404991307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:26:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:48 compute-0 ceph-mon[192821]: pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:49 compute-0 nova_compute[351485]: 2025-12-03 02:26:49.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:50 compute-0 ceph-mon[192821]: pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:51 compute-0 nova_compute[351485]: 2025-12-03 02:26:51.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:52 compute-0 ceph-mon[192821]: pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:54 compute-0 nova_compute[351485]: 2025-12-03 02:26:54.032 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:54 compute-0 ceph-mon[192821]: pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:55 compute-0 podman[464661]: 2025-12-03 02:26:55.876601006 +0000 UTC m=+0.123926941 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:26:55 compute-0 podman[464663]: 2025-12-03 02:26:55.903035593 +0000 UTC m=+0.135779246 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:26:55 compute-0 podman[464662]: 2025-12-03 02:26:55.93019747 +0000 UTC m=+0.170128736 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 03 02:26:56 compute-0 nova_compute[351485]: 2025-12-03 02:26:56.634 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:56 compute-0 ceph-mon[192821]: pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:26:58 compute-0 ceph-mon[192821]: pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:59 compute-0 nova_compute[351485]: 2025-12-03 02:26:59.037 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:26:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:26:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:26:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:26:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:26:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:26:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:26:59.661 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:26:59 compute-0 podman[158098]: time="2025-12-03T02:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8659 "" "Go-http-client/1.1"
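The two GET lines above show a Go client polling podman's libpod REST API for a container listing and stats. A minimal sketch of the same two calls (assumptions: the API service listens on the default root socket /run/podman/podman.sock, which the log does not show, and curl is installed; the URL paths are copied verbatim from the log):

```python
# Query the libpod REST API over its unix socket via curl.
import json
import subprocess

SOCKET = "/run/podman/podman.sock"  # assumed default; adjust to the real socket

def libpod_get(path: str):
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", SOCKET, f"http://d{path}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

containers = libpod_get("/v4.9.3/libpod/containers/json?all=true")
stats = libpod_get("/v4.9.3/libpod/containers/stats?all=false&stream=false")
print(len(containers), "containers;", len(stats.get("Stats") or []), "stat reports")
```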
Dec 03 02:27:00 compute-0 ceph-mon[192821]: pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:27:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
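The exporter errors above all reduce to one condition: no *.ctl control socket is visible for ovsdb-server or ovn-northd, and no userspace datapath answers the dpif-netdev appctl calls. A minimal check for the sockets (assumption: the two run directories below are the ones the exporter container mounts, per the openstack_network_exporter config_data recorded later in this log):

```python
# Look for OVS/OVN daemon control sockets (*.ctl) in the directories the
# exporter is pointed at; an empty result matches the errors above.
import glob
import os

CANDIDATE_DIRS = ["/var/run/openvswitch", "/var/lib/openvswitch/ovn"]

def find_ctl_sockets() -> list:
    found = []
    for directory in CANDIDATE_DIRS:
        found.extend(glob.glob(os.path.join(directory, "*.ctl")))
    return found

sockets = find_ctl_sockets()
print(sockets if sockets else "no control socket files found")
```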
Dec 03 02:27:01 compute-0 nova_compute[351485]: 2025-12-03 02:27:01.637 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:02 compute-0 ceph-mon[192821]: pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:03 compute-0 podman[464720]: 2025-12-03 02:27:03.8850943 +0000 UTC m=+0.136739853 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 03 02:27:04 compute-0 nova_compute[351485]: 2025-12-03 02:27:04.042 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:04 compute-0 ceph-mon[192821]: pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:06 compute-0 nova_compute[351485]: 2025-12-03 02:27:06.640 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:06 compute-0 ceph-mon[192821]: pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:06 compute-0 podman[464742]: 2025-12-03 02:27:06.882291345 +0000 UTC m=+0.104125721 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:27:06 compute-0 podman[464748]: 2025-12-03 02:27:06.890295201 +0000 UTC m=+0.106115287 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 03 02:27:06 compute-0 podman[464741]: 2025-12-03 02:27:06.895358685 +0000 UTC m=+0.132912725 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 03 02:27:06 compute-0 podman[464743]: 2025-12-03 02:27:06.905057768 +0000 UTC m=+0.125899036 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, container_name=kepler, config_id=edpm, vcs-type=git, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 03 02:27:06 compute-0 podman[464740]: 2025-12-03 02:27:06.922937093 +0000 UTC m=+0.170161416 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 02:27:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:08 compute-0 sudo[464840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:08 compute-0 sudo[464840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:08 compute-0 sudo[464840]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:08 compute-0 sudo[464865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:27:08 compute-0 sudo[464865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:08 compute-0 sudo[464865]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:08 compute-0 sudo[464890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:08 compute-0 sudo[464890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:08 compute-0 sudo[464890]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:08 compute-0 sudo[464915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 02:27:08 compute-0 sudo[464915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:08 compute-0 ceph-mon[192821]: pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:09 compute-0 nova_compute[351485]: 2025-12-03 02:27:09.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:09 compute-0 podman[465011]: 2025-12-03 02:27:09.46834236 +0000 UTC m=+0.131322140 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:27:09 compute-0 podman[465011]: 2025-12-03 02:27:09.577197334 +0000 UTC m=+0.240177104 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:27:09 compute-0 ceph-mon[192821]: pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:10 compute-0 sudo[464915]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:27:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:27:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:10 compute-0 sudo[465164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:10 compute-0 sudo[465164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:10 compute-0 sudo[465164]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:10 compute-0 sudo[465189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:27:10 compute-0 sudo[465189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:10 compute-0 sudo[465189]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:11 compute-0 sudo[465214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:11 compute-0 sudo[465214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:11 compute-0 sudo[465214]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:11 compute-0 sudo[465239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:27:11 compute-0 sudo[465239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:11 compute-0 nova_compute[351485]: 2025-12-03 02:27:11.644 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:11 compute-0 sudo[465239]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 31e132f0-f288-46b4-bb24-1b0bc05342e5 does not exist
Dec 03 02:27:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2df69b2f-1092-4b89-83a2-5b14f643ab73 does not exist
Dec 03 02:27:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 76c15bc3-1c08-4231-9186-4cb2f8fdba75 does not exist
Dec 03 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
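Each handle_command line paired with a log_channel(audit) dispatch above is the cephadm mgr module driving the mon; the same commands can be replayed from a shell when troubleshooting. A sketch (assumption: an admin keyring is available on this host, as it is for the ceph-admin user seen in the surrounding sudo entries):

```python
# Replay the mon commands audited above through the ceph CLI.
import subprocess

COMMANDS = [
    ["ceph", "config", "generate-minimal-conf"],
    ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    ["ceph", "auth", "get", "client.bootstrap-osd"],
]

for cmd in COMMANDS:
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(" ".join(cmd), "->", result.stdout.strip() or result.stderr.strip())
```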
Dec 03 02:27:12 compute-0 sudo[465295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:12 compute-0 sudo[465295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:12 compute-0 sudo[465295]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:12 compute-0 sudo[465320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:27:12 compute-0 sudo[465320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:12 compute-0 sudo[465320]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:12 compute-0 sudo[465345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:12 compute-0 sudo[465345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:12 compute-0 sudo[465345]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:12 compute-0 sudo[465370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:27:12 compute-0 sudo[465370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:12 compute-0 ceph-mon[192821]: pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.105577011 +0000 UTC m=+0.087590814 container create f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.078299731 +0000 UTC m=+0.060313544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:27:13 compute-0 systemd[1]: Started libpod-conmon-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope.
Dec 03 02:27:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.270817068 +0000 UTC m=+0.252830931 container init f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 03 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.282054865 +0000 UTC m=+0.264068678 container start f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.289065653 +0000 UTC m=+0.271079536 container attach f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:27:13 compute-0 suspicious_jemison[465446]: 167 167
Dec 03 02:27:13 compute-0 systemd[1]: libpod-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope: Deactivated successfully.
Dec 03 02:27:13 compute-0 conmon[465446]: conmon f9b4bb93101542a7b1a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope/container/memory.events
Dec 03 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.296672208 +0000 UTC m=+0.278686031 container died f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-380a6acbd13631bc8d7c17e7389bb52efa36addc3ff5c355fffa5abcebc4cd51-merged.mount: Deactivated successfully.
Dec 03 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.356977611 +0000 UTC m=+0.338991384 container remove f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:27:13 compute-0 systemd[1]: libpod-conmon-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope: Deactivated successfully.
Dec 03 02:27:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.681049154 +0000 UTC m=+0.110889213 container create 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.60939329 +0000 UTC m=+0.039233409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:27:13 compute-0 systemd[1]: Started libpod-conmon-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope.
Dec 03 02:27:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:13 compute-0 ceph-mon[192821]: pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.876848829 +0000 UTC m=+0.306688888 container init 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.899369236 +0000 UTC m=+0.329209285 container start 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.905308043 +0000 UTC m=+0.335148112 container attach 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:27:14 compute-0 nova_compute[351485]: 2025-12-03 02:27:14.051 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:15 compute-0 cool_hawking[465487]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:27:15 compute-0 cool_hawking[465487]: --> relative data size: 1.0
Dec 03 02:27:15 compute-0 cool_hawking[465487]: --> All data devices are unavailable
Dec 03 02:27:15 compute-0 podman[465471]: 2025-12-03 02:27:15.139690239 +0000 UTC m=+1.569530328 container died 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:27:15 compute-0 systemd[1]: libpod-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope: Deactivated successfully.
Dec 03 02:27:15 compute-0 systemd[1]: libpod-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope: Consumed 1.189s CPU time.
Dec 03 02:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0-merged.mount: Deactivated successfully.
Dec 03 02:27:15 compute-0 podman[465471]: 2025-12-03 02:27:15.253678978 +0000 UTC m=+1.683518997 container remove 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec 03 02:27:15 compute-0 systemd[1]: libpod-conmon-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope: Deactivated successfully.
Dec 03 02:27:15 compute-0 sudo[465370]: pam_unix(sudo:session): session closed for user root
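This ceph-volume batch run exits after '--> All data devices are unavailable': the three LVs handed to `lvm batch` are already claimed as OSDs, which the `lvm list --format json` call cephadm issues next confirms (each entry carries a ceph.osd_id tag). A minimal sketch of reading that JSON to see which LVs are already provisioned (assumption: the payload shape matches the output printed below; the sample values are taken from it):

```python
import json

# Trimmed sample shaped like the `ceph-volume lvm list --format json`
# output below: top-level keys are OSD ids, each with a list of LV entries.
SAMPLE = json.dumps({
    "0": [{
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {"ceph.osd_id": "0",
                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"},
    }],
})

def already_provisioned(lvm_list_json: str) -> dict:
    """Map osd_id -> lv_path for LVs that ceph-volume reports as claimed."""
    claimed = {}
    for _osd_id, entries in json.loads(lvm_list_json).items():
        for entry in entries:
            tags = entry.get("tags", {})
            if "ceph.osd_id" in tags:
                claimed[tags["ceph.osd_id"]] = entry.get("lv_path")
    return claimed

print(already_provisioned(SAMPLE))  # {'0': '/dev/ceph_vg0/ceph_lv0'}
```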
Dec 03 02:27:15 compute-0 sudo[465528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:15 compute-0 sudo[465528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:15 compute-0 sudo[465528]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:15 compute-0 sudo[465553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:27:15 compute-0 sudo[465553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:15 compute-0 sudo[465553]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:15 compute-0 sudo[465578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:15 compute-0 sudo[465578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:15 compute-0 sudo[465578]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:15 compute-0 sudo[465603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:27:15 compute-0 sudo[465603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.286305771 +0000 UTC m=+0.080603247 container create 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.258729472 +0000 UTC m=+0.053026978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:27:16 compute-0 systemd[1]: Started libpod-conmon-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope.
Dec 03 02:27:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.43147856 +0000 UTC m=+0.225776056 container init 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.444083136 +0000 UTC m=+0.238380632 container start 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.450499127 +0000 UTC m=+0.244796623 container attach 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:27:16 compute-0 systemd[1]: libpod-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope: Deactivated successfully.
Dec 03 02:27:16 compute-0 conmon[465681]: conmon 9210e7778fd89491a0c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope/container/memory.events
Dec 03 02:27:16 compute-0 clever_villani[465681]: 167 167
Dec 03 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.456351793 +0000 UTC m=+0.250649279 container died 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 03 02:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a8c53211283b7908a8e5868ead20a9bb606ce7e7a9ce4ab809aa8ed859c566-merged.mount: Deactivated successfully.
Dec 03 02:27:16 compute-0 ceph-mon[192821]: pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.513662221 +0000 UTC m=+0.307959687 container remove 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:27:16 compute-0 systemd[1]: libpod-conmon-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope: Deactivated successfully.
Dec 03 02:27:16 compute-0 nova_compute[351485]: 2025-12-03 02:27:16.648 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.797350163 +0000 UTC m=+0.075676998 container create 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.764275389 +0000 UTC m=+0.042602224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:27:16 compute-0 systemd[1]: Started libpod-conmon-8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41.scope.
Dec 03 02:27:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.960693296 +0000 UTC m=+0.239020161 container init 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.984052966 +0000 UTC m=+0.262379801 container start 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.990456077 +0000 UTC m=+0.268782942 container attach 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:27:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]: {
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:     "0": [
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:         {
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "devices": [
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "/dev/loop3"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             ],
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_name": "ceph_lv0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_size": "21470642176",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "name": "ceph_lv0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "tags": {
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cluster_name": "ceph",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.crush_device_class": "",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.encrypted": "0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osd_id": "0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.type": "block",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.vdo": "0"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             },
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "type": "block",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "vg_name": "ceph_vg0"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:         }
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:     ],
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:     "1": [
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:         {
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "devices": [
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "/dev/loop4"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             ],
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_name": "ceph_lv1",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_size": "21470642176",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "name": "ceph_lv1",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "tags": {
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cluster_name": "ceph",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.crush_device_class": "",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.encrypted": "0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osd_id": "1",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.type": "block",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.vdo": "0"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             },
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "type": "block",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "vg_name": "ceph_vg1"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:         }
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:     ],
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:     "2": [
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:         {
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "devices": [
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "/dev/loop5"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             ],
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_name": "ceph_lv2",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_size": "21470642176",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "name": "ceph_lv2",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "tags": {
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.cluster_name": "ceph",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.crush_device_class": "",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.encrypted": "0",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osd_id": "2",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.type": "block",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:                 "ceph.vdo": "0"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             },
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "type": "block",
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:             "vg_name": "ceph_vg2"
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:         }
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]:     ]
Dec 03 02:27:17 compute-0 suspicious_dubinsky[465719]: }
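[editor's note] The JSON block above is the output of `ceph-volume lvm list --format json`, run by cephadm inside the one-shot container and keyed by OSD id. A minimal parsing sketch, assuming the JSON has been captured to a file (the filename here is hypothetical):

```python
# Minimal sketch: map OSD ids to their LV paths and backing devices from
# the `ceph-volume lvm list --format json` output shown above.
import json

raw = open("ceph-volume-lvm-list.json").read()  # hypothetical capture file
for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(vg={lv['vg_name']}, devices={','.join(lv['devices'])}, "
              f"fsid={lv['tags']['ceph.osd_fsid']})")
# Expected for the listing above:
# osd.0: /dev/ceph_vg0/ceph_lv0 (vg=ceph_vg0, devices=/dev/loop3, ...)
# osd.1: /dev/ceph_vg1/ceph_lv1 (vg=ceph_vg1, devices=/dev/loop4, ...)
# osd.2: /dev/ceph_vg2/ceph_lv2 (vg=ceph_vg2, devices=/dev/loop5, ...)
```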
Dec 03 02:27:17 compute-0 systemd[1]: libpod-8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41.scope: Deactivated successfully.
Dec 03 02:27:17 compute-0 podman[465703]: 2025-12-03 02:27:17.898123021 +0000 UTC m=+1.176449846 container died 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916-merged.mount: Deactivated successfully.
Dec 03 02:27:18 compute-0 podman[465703]: 2025-12-03 02:27:18.072397283 +0000 UTC m=+1.350724118 container remove 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:27:18 compute-0 systemd[1]: libpod-conmon-8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41.scope: Deactivated successfully.
Dec 03 02:27:18 compute-0 sudo[465603]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:18 compute-0 sudo[465739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:18 compute-0 sudo[465739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:18 compute-0 sudo[465739]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:18 compute-0 sudo[465764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:27:18 compute-0 sudo[465764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:18 compute-0 sudo[465764]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:18 compute-0 sudo[465789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:18 compute-0 sudo[465789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:18 compute-0 sudo[465789]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:18 compute-0 ceph-mon[192821]: pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
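[editor's note] The "60 GiB / 60 GiB avail" figure in the pgmap line is consistent with the three 21470642176-byte LVs in the ceph-volume listing above. A quick arithmetic check:

```python
# Quick check: three OSD LVs of 21470642176 bytes each (from the
# ceph-volume listing above) account for the cluster's reported
# "60 GiB / 60 GiB avail".
lv_size = 21470642176
total_gib = 3 * lv_size / 2**30
print(f"{total_gib:.1f} GiB")  # -> 60.0 GiB
```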
Dec 03 02:27:18 compute-0 sudo[465814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:27:18 compute-0 sudo[465814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
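[editor's note] The sudo line above shows how the orchestrator shells out: python3 runs the copied cephadm binary with --image and --timeout, then the `ceph-volume ... raw list --format json` subcommand. A hedged sketch of issuing the same call; the paths, fsid, and image digest are copied from the log line, and this is an illustration, not cephadm's own implementation:

```python
# Sketch of the call visible in the sudo line above: run the copied
# cephadm binary to execute `ceph-volume raw list --format json` inside
# a one-shot ceph container.
import json
import subprocess

fsid = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["sudo", "/bin/python3", cephadm, "--image", image, "--timeout", "895",
     "ceph-volume", "--fsid", fsid, "--", "raw", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
print(sorted(json.loads(out)))  # device entries keyed by the raw listing
```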
Dec 03 02:27:19 compute-0 nova_compute[351485]: 2025-12-03 02:27:19.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.144002987 +0000 UTC m=+0.090097605 container create 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.110139311 +0000 UTC m=+0.056233979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:27:19 compute-0 systemd[1]: Started libpod-conmon-5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a.scope.
Dec 03 02:27:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.274112672 +0000 UTC m=+0.220207310 container init 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.289885047 +0000 UTC m=+0.235979625 container start 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.294894279 +0000 UTC m=+0.240988897 container attach 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 02:27:19 compute-0 thirsty_agnesi[465894]: 167 167
Dec 03 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.29919378 +0000 UTC m=+0.245288368 container died 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 02:27:19 compute-0 systemd[1]: libpod-5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a.scope: Deactivated successfully.
Dec 03 02:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-56fb6952e35943e91d7213d6395cfc5cb568953b8ac08e3722f314b005f64e35-merged.mount: Deactivated successfully.
Dec 03 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.368982491 +0000 UTC m=+0.315077059 container remove 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:27:19 compute-0 systemd[1]: libpod-conmon-5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a.scope: Deactivated successfully.
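[editor's note] The single "167 167" line emitted by thirsty_agnesi is consistent with cephadm probing the image for the ceph user's uid and gid (167:167 in upstream Ceph images) before deploying daemons. A hedged sketch of such a probe; the exact command is an assumption, not lifted from cephadm's source:

```python
# Hedged sketch: a one-shot container that prints the uid/gid owning
# /var/lib/ceph in the image, matching the "167 167" output above.
# cephadm performs a probe of this kind; the stat command here is an
# assumption, not cephadm's actual code path.
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
uid, gid = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", image,
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.split()
print(uid, gid)  # -> 167 167 for the upstream ceph image
```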
Dec 03 02:27:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.514 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them. Therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.515 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.528 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.531 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.534 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.538 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.554 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.555 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.556 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:27:19.555918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 nova_compute[351485]: 2025-12-03 02:27:19.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.599 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 43.55859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.636 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 42.43359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
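[editor's note] The registration and heartbeat chatter above reflects ceilometer's polling pattern: each pollster is registered against a shared ThreadPoolExecutor (a single worker here, per the "[1] threads" line), discovery runs per source, and each resulting sample is logged. A minimal sketch of that register-then-drain executor pattern; the names are generic illustrations, not ceilometer's actual classes:

```python
# Minimal sketch of the pattern in the ceilometer lines above: register
# each pollster with a small ThreadPoolExecutor, then drain the futures.
from concurrent.futures import ThreadPoolExecutor

def poll(name, instances):
    # A real pollster would query libvirt per instance; this fakes a sample.
    return [(name, inst, 0.0) for inst in instances]

instances = ["4fb8fc07-d7b7-4be8-94da-155b040faf32",
             "2890ee5c-21c1-4e9d-9421-1a2df0f67f76"]
pollsters = ["memory.usage", "network.outgoing.packets",
             "network.incoming.bytes.delta"]

with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads" above
    futures = [executor.submit(poll, p, instances) for p in pollsters]
    for f in futures:
        for sample in f.result():
            print(sample)
```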
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.637 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.638 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:27:19.638614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.642905477 +0000 UTC m=+0.078298292 container create 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.649 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.656 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:27:19.657315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:27:19.658901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.663 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:27:19.660420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:27:19.661813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:27:19.663059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 systemd[1]: Started libpod-conmon-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope.
Dec 03 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.608212357 +0000 UTC m=+0.043605222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.706 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
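
Each instance reports two disk.device.capacity samples, one per block device. A quick conversion (assuming the documented meter unit of bytes) shows the first device is exactly 1 GiB, while the second, at roughly 498 KiB, is likely a small config-drive-style device:

    # Sanity-check the two disk.device.capacity volumes above, assuming the
    # documented unit of bytes.
    for volume in (1073741824, 509952):
        print(f"{volume:>10} B = {volume / 2**30:.6f} GiB = {volume / 2**10:.1f} KiB")
    # 1073741824 B = 1.000000 GiB = 1048576.0 KiB
    #     509952 B = 0.000475 GiB = 498.0 KiB
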
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:27:19.708404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
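
The xfs messages are the kernel noting that this filesystem was evidently formatted without the xfs bigtime feature, so its inode timestamps top out at the 32-bit signed time_t limit, 0x7fffffff seconds after the epoch:

    # Convert the kernel's 0x7fffffff timestamp ceiling to a date.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
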
Dec 03 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.775634826 +0000 UTC m=+0.211027671 container init 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.788 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.789 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.793393017 +0000 UTC m=+0.228785842 container start 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.799358216 +0000 UTC m=+0.234751081 container attach 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
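
The podman lines trace a full short-lived container lifecycle for the Ceph image: image pull, conmon scope, libcrun start, then container init, start, and attach. Those events could be replayed later from podman's event log; a sketch, with the container ID taken from the lines above and the time window from the journal timestamps:

    # Sketch: replay the lifecycle events for this container from podman's
    # event log (the ID comes from the journal lines above).
    import subprocess
    cid = "24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40"
    subprocess.run(
        ["podman", "events", "--filter", f"container={cid}",
         "--since", "2025-12-03T02:27:00", "--until", "2025-12-03T02:28:00"],
        check=False,  # events persist even after the container is removed
    )
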
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.832 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.833 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:27:19.834875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3251057957 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.837 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 228292831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.837 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.837 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
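
disk.device.read.latency is a cumulative counter (libvirt's total read time, believed to be in nanoseconds), so a value like 3251057957 on its own only says the device has spent about 3.25 s servicing reads since boot. Dividing by the cumulative disk.device.read.requests sample a few lines below gives a rough mean per-read latency, assuming both counters cover the same device:

    # Rough mean read latency for instance 4fb8fc07.../first device, assuming
    # both counters are cumulative and cover the same block device.
    latency_ns = 3251057957   # disk.device.read.latency sample above
    requests = 1093           # disk.device.read.requests sample below
    print(f"{latency_ns / requests / 1e6:.2f} ms per read")  # 2.97 ms per read
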
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:27:19.836597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.840 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:27:19.839060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.842 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.842 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.842 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
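
Both instances report power.state volume 1. Assuming the meter carries the libvirt domain state enum (which ceilometer's compute inspector reads), 1 decodes to "running":

    # Decode power.state, assuming it is the libvirt virDomainState enum.
    LIBVIRT_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_STATE[1])  # running
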
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.843 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.844 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.844 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:27:19.841722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:27:19.844436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.846 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 72830976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:27:19.847279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.848 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.848 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.849 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 8629084086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:27:19.850812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10465171027 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.852 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:27:19.853404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:27:19.856051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 290740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 336690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:27:19.857829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
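
The cpu meter is cumulative CPU time in nanoseconds, so 290740000000 is roughly 290.7 s of CPU consumed since the instance started. Turning that into a utilisation figure needs two polls; a sketch with a hypothetical helper:

    # Hypothetical helper: CPU utilisation between two cumulative cpu samples.
    def cpu_util(prev_ns, curr_ns, interval_s, vcpus=1):
        return (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    # e.g. if the next poll, 300 s later, read 290743000000 ns:
    print(f"{cpu_util(290_740_000_000, 290_743_000_000, 300):.2%}")  # 1.00%
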
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:27:19.859678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:27:19.861442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:27:19.863072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:27:19.865167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:27:19.866952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:27:19.868177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:27:20 compute-0 ceph-mon[192821]: pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]: {
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "osd_id": 2,
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "type": "bluestore"
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:     },
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "osd_id": 1,
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "type": "bluestore"
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:     },
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "osd_id": 0,
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:         "type": "bluestore"
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]:     }
Dec 03 02:27:20 compute-0 thirsty_shannon[465934]: }
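[editor's note] The JSON block above, emitted by the short-lived "thirsty_shannon" ceph container, maps each OSD's UUID to its id, BlueStore backing device, and cluster fsid. A minimal sketch of pulling a device map out of it, assuming the JSON has been saved to a local file (the filename is hypothetical):

import json

# Parse the OSD map printed above (saved to a local file for illustration).
with open("osd_inventory.json") as f:
    inventory = json.load(f)

# Top-level keys are OSD UUIDs; values carry id, backing device and
# objectstore type.
for osd_uuid, osd in sorted(inventory.items(),
                            key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{osd['osd_id']}: {osd['device']} (fsid={osd['ceph_fsid']})")
# osd.0: /dev/mapper/ceph_vg0-ceph_lv0 ... osd.2: /dev/mapper/ceph_vg2-ceph_lv2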
Dec 03 02:27:21 compute-0 systemd[1]: libpod-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope: Deactivated successfully.
Dec 03 02:27:21 compute-0 systemd[1]: libpod-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope: Consumed 1.213s CPU time.
Dec 03 02:27:21 compute-0 podman[465967]: 2025-12-03 02:27:21.120208348 +0000 UTC m=+0.069761261 container died 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168-merged.mount: Deactivated successfully.
Dec 03 02:27:21 compute-0 podman[465967]: 2025-12-03 02:27:21.225498462 +0000 UTC m=+0.175051315 container remove 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:27:21 compute-0 systemd[1]: libpod-conmon-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope: Deactivated successfully.
Dec 03 02:27:21 compute-0 sudo[465814]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:27:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:27:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9e8ad0c4-9456-458e-84a5-0a45f790ddea does not exist
Dec 03 02:27:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0e404682-11f3-4b41-a7cd-a78409d4a876 does not exist
Dec 03 02:27:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:21 compute-0 sudo[465983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:27:21 compute-0 sudo[465983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:21 compute-0 sudo[465983]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:21 compute-0 sudo[466008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:27:21 compute-0 sudo[466008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:27:21 compute-0 sudo[466008]: pam_unix(sudo:session): session closed for user root
Dec 03 02:27:21 compute-0 nova_compute[351485]: 2025-12-03 02:27:21.652 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:27:22 compute-0 ceph-mon[192821]: pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:24 compute-0 nova_compute[351485]: 2025-12-03 02:27:24.060 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:24 compute-0 ceph-mon[192821]: pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:25 compute-0 sshd-session[466033]: Received disconnect from 154.113.10.113 port 45344:11: Bye Bye [preauth]
Dec 03 02:27:25 compute-0 sshd-session[466033]: Disconnected from authenticating user root 154.113.10.113 port 45344 [preauth]
Dec 03 02:27:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:26 compute-0 ceph-mon[192821]: pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:26 compute-0 nova_compute[351485]: 2025-12-03 02:27:26.657 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:26 compute-0 podman[466037]: 2025-12-03 02:27:26.879958134 +0000 UTC m=+0.119010582 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:27:26 compute-0 podman[466035]: 2025-12-03 02:27:26.904429845 +0000 UTC m=+0.143191255 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:27:26 compute-0 podman[466036]: 2025-12-03 02:27:26.917426952 +0000 UTC m=+0.155690578 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Dec 03 02:27:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
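[editor's note] The three lockutils lines above show the standard oslo.concurrency pattern: acquire the named "compute_resources" lock, run the critical section, release it, with wait and hold times logged. A minimal sketch of the same pattern using the public lockutils.synchronized decorator (the decorated function is illustrative, not nova's actual method):

from oslo_concurrency import lockutils

# Serialize access to a shared resource view the same way the resource
# tracker guards "compute_resources".
@lockutils.synchronized('compute_resources')
def update_resource_view(view, key, value):
    view[key] = value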
Dec 03 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.626 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.626 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:27:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:27:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290083264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.167 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
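[editor's note] The resource audit shells out to ceph df rather than using a library binding, as the processutils lines show. A sketch reproducing the probe and reading cluster capacity from the JSON; the key names (stats, total_bytes, total_avail_bytes) match current Ceph releases but are worth verifying against your version:

import json
import subprocess

# Same probe nova logs above; needs a reachable cluster and a keyring
# for the "openstack" client id.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
df = json.loads(out)

total = df["stats"]["total_bytes"] / 1024 ** 3
avail = df["stats"]["total_avail_bytes"] / 1024 ** 3
print(f"{avail:.1f} GiB free of {total:.1f} GiB")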
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.293 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.294 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.302 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.303 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:27:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:27:28
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'backups', 'volumes', 'default.rgw.control']
Dec 03 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:27:28 compute-0 ceph-mon[192821]: pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4290083264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.716 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.718 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3502MB free_disk=59.897193908691406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.718 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.719 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.913 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.914 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.915 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.915 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
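[editor's note] The final view's used_ram=768MB is consistent with the placement records above: 512 MB of reserved host memory (the MEMORY_MB reservation in the inventory logged a few lines below) plus the two 128 MB instances. As a quick check:

# Cross-check used_ram from the values in this log.
reserved_mb = 512            # MEMORY_MB "reserved" in the inventory below
instances_mb = [128, 128]    # per-instance MEMORY_MB from placement above
assert reserved_mb + sum(instances_mb) == 768  # matches used_ram=768MB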
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.060 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.110 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:27:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:27:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280635965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.637 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:27:29 compute-0 podman[158098]: time="2025-12-03T02:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8670 "" "Go-http-client/1.1"
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.864 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
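[editor's note] Placement derives schedulable capacity for each resource class as (total - reserved) * allocation_ratio, so the inventory above advertises 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. A worked check of those numbers:

# Placement's capacity rule applied to the logged inventory.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2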
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.869 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.870 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.872 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.873 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.906 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:27:30 compute-0 ceph-mon[192821]: pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3280635965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd

Dec 03 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:27:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:31 compute-0 nova_compute[351485]: 2025-12-03 02:27:31.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:31 compute-0 nova_compute[351485]: 2025-12-03 02:27:31.907 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:31 compute-0 nova_compute[351485]: 2025-12-03 02:27:31.908 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:27:32 compute-0 nova_compute[351485]: 2025-12-03 02:27:32.360 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:27:32 compute-0 nova_compute[351485]: 2025-12-03 02:27:32.363 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:27:32 compute-0 nova_compute[351485]: 2025-12-03 02:27:32.364 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:27:32 compute-0 ceph-mon[192821]: pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.280 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
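[editor's note] The refreshed cache is a list of VIF dicts, each carrying its network, subnets, and fixed IPs. A sketch walking that structure to list the instance's addresses, assuming the JSON above has been captured to a file (inside nova it is a NetworkInfo object, not raw JSON on disk):

import json

# network_info as logged above, loaded from a capture (hypothetical file).
with open("network_info.json") as f:
    network_info = json.load(f)

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], ip["address"], ip["type"])
# 94fdb5b9-66bf-4e81-b411-064b08e4c71c 10.100.1.46 fixed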
Dec 03 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.307 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.308 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.309 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.310 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:34 compute-0 ceph-mon[192821]: pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:34 compute-0 podman[466136]: 2025-12-03 02:27:34.911359563 +0000 UTC m=+0.165244108 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:27:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:35 compute-0 nova_compute[351485]: 2025-12-03 02:27:35.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:36 compute-0 nova_compute[351485]: 2025-12-03 02:27:36.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:36 compute-0 ceph-mon[192821]: pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:36 compute-0 nova_compute[351485]: 2025-12-03 02:27:36.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:37 compute-0 podman[466170]: 2025-12-03 02:27:37.892713611 +0000 UTC m=+0.107971009 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 02:27:37 compute-0 podman[466158]: 2025-12-03 02:27:37.899108191 +0000 UTC m=+0.137753090 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 02:27:37 compute-0 podman[466157]: 2025-12-03 02:27:37.913436076 +0000 UTC m=+0.157313643 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 03 02:27:37 compute-0 podman[466159]: 2025-12-03 02:27:37.918041376 +0000 UTC m=+0.155764159 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:27:37 compute-0 podman[466160]: 2025-12-03 02:27:37.918079187 +0000 UTC m=+0.130646399 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, architecture=x86_64, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 03 02:27:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:38 compute-0 ceph-mon[192821]: pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518418921338803 of space, bias 1.0, pg target 0.45552567640164093 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:27:39 compute-0 nova_compute[351485]: 2025-12-03 02:27:39.069 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:39 compute-0 nova_compute[351485]: 2025-12-03 02:27:39.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:39 compute-0 nova_compute[351485]: 2025-12-03 02:27:39.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.0 total, 600.0 interval
                                            Cumulative writes: 9935 writes, 45K keys, 9935 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                            Cumulative WAL: 9935 writes, 9935 syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1341 writes, 6318 keys, 1341 commit groups, 1.0 writes per commit group, ingest: 8.77 MB, 0.01 MB/s
                                            Interval WAL: 1341 writes, 1341 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     97.4      0.57              0.26        31    0.018       0      0       0.0       0.0
                                              L6      1/0    6.18 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    132.5    108.8      2.09              1.03        30    0.070    160K    16K       0.0       0.0
                                             Sum      1/0    6.18 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    104.1    106.4      2.66              1.29        61    0.044    160K    16K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.4    101.1     98.0      0.54              0.28        12    0.045     37K   3096       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    132.5    108.8      2.09              1.03        30    0.070    160K    16K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     97.8      0.57              0.26        30    0.019       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 4200.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.054, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.28 GB write, 0.07 MB/s write, 0.27 GB read, 0.07 MB/s read, 2.7 seconds
                                            Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 32.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000326 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2091,31.20 MB,10.1287%) FilterBlock(62,452.30 KB,0.143408%) IndexBlock(62,749.83 KB,0.237745%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Dec 03 02:27:40 compute-0 nova_compute[351485]: 2025-12-03 02:27:40.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:27:40 compute-0 nova_compute[351485]: 2025-12-03 02:27:40.600 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:27:40 compute-0 ceph-mon[192821]: pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:41 compute-0 nova_compute[351485]: 2025-12-03 02:27:41.665 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:42 compute-0 ceph-mon[192821]: pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:44 compute-0 nova_compute[351485]: 2025-12-03 02:27:44.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:44 compute-0 ceph-mon[192821]: pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:46 compute-0 nova_compute[351485]: 2025-12-03 02:27:46.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:46 compute-0 ceph-mon[192821]: pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:27:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924182898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:27:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:27:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924182898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:27:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/924182898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:27:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/924182898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:27:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:48 compute-0 ceph-mon[192821]: pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:49 compute-0 nova_compute[351485]: 2025-12-03 02:27:49.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:50 compute-0 ceph-mon[192821]: pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:51 compute-0 nova_compute[351485]: 2025-12-03 02:27:51.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:52 compute-0 ceph-mon[192821]: pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:54 compute-0 nova_compute[351485]: 2025-12-03 02:27:54.078 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:54 compute-0 ceph-mon[192821]: pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:56 compute-0 nova_compute[351485]: 2025-12-03 02:27:56.676 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:56 compute-0 ceph-mon[192821]: pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:57 compute-0 podman[466260]: 2025-12-03 02:27:57.885053479 +0000 UTC m=+0.110315187 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:27:57 compute-0 podman[466258]: 2025-12-03 02:27:57.909988173 +0000 UTC m=+0.150198633 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:27:57 compute-0 podman[466259]: 2025-12-03 02:27:57.942851331 +0000 UTC m=+0.180486118 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 03 02:27:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:27:58 compute-0 ceph-mon[192821]: pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:59 compute-0 nova_compute[351485]: 2025-12-03 02:27:59.081 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:27:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:27:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:27:59.662 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:27:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:27:59.662 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:27:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:27:59.663 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:27:59 compute-0 podman[158098]: time="2025-12-03T02:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 03 02:28:00 compute-0 ceph-mon[192821]: pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:28:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:28:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:28:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:28:01 compute-0 nova_compute[351485]: 2025-12-03 02:28:01.679 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:02 compute-0 ceph-mon[192821]: pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:28:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:28:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:03 compute-0 ceph-mon[192821]: pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:28:04 compute-0 nova_compute[351485]: 2025-12-03 02:28:04.083 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 170 B/s wr, 1 op/s
Dec 03 02:28:05 compute-0 podman[466315]: 2025-12-03 02:28:05.884051774 +0000 UTC m=+0.129574940 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:28:06 compute-0 ceph-mon[192821]: pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 170 B/s wr, 1 op/s
Dec 03 02:28:06 compute-0 nova_compute[351485]: 2025-12-03 02:28:06.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:06 compute-0 nova_compute[351485]: 2025-12-03 02:28:06.682 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 02:28:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:08 compute-0 ceph-mon[192821]: pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 02:28:08 compute-0 podman[466335]: 2025-12-03 02:28:08.8918906 +0000 UTC m=+0.115398641 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:28:08 compute-0 podman[466343]: 2025-12-03 02:28:08.902659024 +0000 UTC m=+0.104627026 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:28:08 compute-0 podman[466333]: 2025-12-03 02:28:08.90818831 +0000 UTC m=+0.147749174 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:28:08 compute-0 podman[466334]: 2025-12-03 02:28:08.9124483 +0000 UTC m=+0.141164207 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 02:28:08 compute-0 podman[466336]: 2025-12-03 02:28:08.917928465 +0000 UTC m=+0.130063804 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, name=ubi9, config_id=edpm, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 02:28:09 compute-0 nova_compute[351485]: 2025-12-03 02:28:09.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 02:28:10 compute-0 ceph-mon[192821]: pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 03 02:28:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:28:11 compute-0 nova_compute[351485]: 2025-12-03 02:28:11.685 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:12 compute-0 ceph-mon[192821]: pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:28:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:28:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:14 compute-0 nova_compute[351485]: 2025-12-03 02:28:14.092 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:14 compute-0 ceph-mon[192821]: pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:28:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:28:16 compute-0 ceph-mon[192821]: pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 03 02:28:16 compute-0 nova_compute[351485]: 2025-12-03 02:28:16.688 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 8.4 KiB/s wr, 3 op/s
Dec 03 02:28:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:18 compute-0 ceph-mon[192821]: pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 8.4 KiB/s wr, 3 op/s
Dec 03 02:28:19 compute-0 nova_compute[351485]: 2025-12-03 02:28:19.095 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 03 02:28:19 compute-0 nova_compute[351485]: 2025-12-03 02:28:19.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:20 compute-0 ceph-mon[192821]: pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 03 02:28:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Dec 03 02:28:21 compute-0 nova_compute[351485]: 2025-12-03 02:28:21.691 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:21 compute-0 sudo[466433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:21 compute-0 sudo[466433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:21 compute-0 sudo[466433]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:21 compute-0 sudo[466458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:28:21 compute-0 sudo[466458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:21 compute-0 sudo[466458]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:22 compute-0 sudo[466483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:22 compute-0 sudo[466483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:22 compute-0 sudo[466483]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:22 compute-0 sudo[466508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:28:22 compute-0 sudo[466508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:22 compute-0 ceph-mon[192821]: pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Dec 03 02:28:22 compute-0 sudo[466508]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:28:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4ecd5193-066b-4f4c-b7ce-42f0dd320441 does not exist
Dec 03 02:28:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f39ccec4-69a2-4acc-984a-dbad9e4fcbcb does not exist
Dec 03 02:28:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e267c131-aa21-4f6d-b912-7606931d33c2 does not exist
Dec 03 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:28:23 compute-0 sudo[466564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:23 compute-0 sudo[466564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:23 compute-0 sudo[466564]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:23 compute-0 sudo[466589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:28:23 compute-0 sudo[466589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:23 compute-0 sudo[466589]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:23 compute-0 sudo[466614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:23 compute-0 sudo[466614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:23 compute-0 sudo[466614]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Dec 03 02:28:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:23 compute-0 sudo[466639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:28:23 compute-0 sudo[466639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.043820142 +0000 UTC m=+0.070911554 container create 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 02:28:24 compute-0 nova_compute[351485]: 2025-12-03 02:28:24.097 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.007419264 +0000 UTC m=+0.034510766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:28:24 compute-0 systemd[1]: Started libpod-conmon-22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309.scope.
Dec 03 02:28:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.212023582 +0000 UTC m=+0.239115084 container init 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.230142734 +0000 UTC m=+0.257234186 container start 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.238027576 +0000 UTC m=+0.265119028 container attach 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:28:24 compute-0 inspiring_gates[466717]: 167 167
Dec 03 02:28:24 compute-0 systemd[1]: libpod-22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309.scope: Deactivated successfully.
Dec 03 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.246161936 +0000 UTC m=+0.273253388 container died 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d4afeb7540fed811c0b8a9a8cbc889afe23875518104e3421336c2132cffd4e-merged.mount: Deactivated successfully.
Dec 03 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.319696913 +0000 UTC m=+0.346788355 container remove 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:28:24 compute-0 systemd[1]: libpod-conmon-22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309.scope: Deactivated successfully.
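The create/init/start/attach/died/remove sequence above is the normal lifecycle of a short-lived "podman run --rm" container, and its single line of output ("167 167") looks like cephadm's probe for the ceph uid and gid inside the image, which it needs before writing files the containerized daemons must own (an inference from cephadm's behavior, not stated in the log). A sketch of such a probe:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Ask the image who owns /var/lib/ceph; Ceph images report "167 167".
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    uid, gid = (int(x) for x in out.split())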
Dec 03 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.622090272 +0000 UTC m=+0.083836129 container create 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:28:24 compute-0 ceph-mon[192821]: pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Dec 03 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.586099486 +0000 UTC m=+0.047845393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:28:24 compute-0 systemd[1]: Started libpod-conmon-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope.
Dec 03 02:28:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
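The kernel notices above are informational, not errors: they fire each time an XFS filesystem formatted without the bigtime feature is (re)mounted, here once per bind mount the container pulls in, and merely record that inode timestamps saturate in 2038. Whether a given filesystem has the feature can be checked directly; a sketch (current xfsprogs print bigtime=0 or bigtime=1):

    import subprocess

    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    print("bigtime enabled" if "bigtime=1" in info
          else "timestamps limited to 2038")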
Dec 03 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.835863669 +0000 UTC m=+0.297609576 container init 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.845614615 +0000 UTC m=+0.307360452 container start 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.85145377 +0000 UTC m=+0.313199607 container attach 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:28:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
Dec 03 02:28:26 compute-0 fervent_leavitt[466756]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:28:26 compute-0 fervent_leavitt[466756]: --> relative data size: 1.0
Dec 03 02:28:26 compute-0 fervent_leavitt[466756]: --> All data devices are unavailable
Dec 03 02:28:26 compute-0 systemd[1]: libpod-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope: Deactivated successfully.
Dec 03 02:28:26 compute-0 podman[466739]: 2025-12-03 02:28:26.24619212 +0000 UTC m=+1.707937977 container died 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:28:26 compute-0 systemd[1]: libpod-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope: Consumed 1.325s CPU time.
Dec 03 02:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90-merged.mount: Deactivated successfully.
Dec 03 02:28:26 compute-0 podman[466739]: 2025-12-03 02:28:26.348007365 +0000 UTC m=+1.809753192 container remove 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 02:28:26 compute-0 systemd[1]: libpod-conmon-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope: Deactivated successfully.
Dec 03 02:28:26 compute-0 sudo[466639]: pam_unix(sudo:session): session closed for user root
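"All data devices are unavailable" is the expected outcome here rather than a failure: the three LVs handed to "lvm batch" already carry ceph.* LV tags from the original prepare (the lvm list output further down shows osd ids 0-2 on exactly these LVs), so ceph-volume filters them out and the batch is a no-op. The same check by hand, as a sketch:

    import subprocess

    for lv in ("/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
               "/dev/ceph_vg2/ceph_lv2"):
        tags = subprocess.run(["lvs", "--noheadings", "-o", "lv_tags", lv],
                              capture_output=True, text=True, check=True).stdout
        print(lv, "already an OSD" if "ceph.osd_id=" in tags else "free")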
Dec 03 02:28:26 compute-0 sudo[466796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:26 compute-0 sudo[466796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:26 compute-0 sudo[466796]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:26 compute-0 sudo[466821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:28:26 compute-0 sudo[466821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:26 compute-0 ceph-mon[192821]: pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
Dec 03 02:28:26 compute-0 sudo[466821]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:26 compute-0 nova_compute[351485]: 2025-12-03 02:28:26.694 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:26 compute-0 sudo[466846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:26 compute-0 sudo[466846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:26 compute-0 sudo[466846]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:26 compute-0 sudo[466871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:28:26 compute-0 sudo[466871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.746384368 +0000 UTC m=+0.070199343 container create d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.720446966 +0000 UTC m=+0.044261951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:28:27 compute-0 systemd[1]: Started libpod-conmon-d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740.scope.
Dec 03 02:28:27 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.899211694 +0000 UTC m=+0.223026679 container init d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec 03 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.918135818 +0000 UTC m=+0.241950793 container start d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.923661174 +0000 UTC m=+0.247476149 container attach d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:28:27 compute-0 sharp_wright[466947]: 167 167
Dec 03 02:28:27 compute-0 systemd[1]: libpod-d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740.scope: Deactivated successfully.
Dec 03 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.931703061 +0000 UTC m=+0.255518076 container died d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1b54c79e48804c76f3bdd39b9a5db52a56f9ac37b16b8abfa127489983d0360-merged.mount: Deactivated successfully.
Dec 03 02:28:28 compute-0 podman[466931]: 2025-12-03 02:28:28.014849569 +0000 UTC m=+0.338664514 container remove d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:28:28 compute-0 systemd[1]: libpod-conmon-d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740.scope: Deactivated successfully.
Dec 03 02:28:28 compute-0 podman[466954]: 2025-12-03 02:28:28.067925978 +0000 UTC m=+0.095872228 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:28:28 compute-0 podman[466964]: 2025-12-03 02:28:28.093750077 +0000 UTC m=+0.098061100 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 03 02:28:28 compute-0 podman[466957]: 2025-12-03 02:28:28.101075994 +0000 UTC m=+0.117327444 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
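Interleaved with the cephadm work, podman's healthcheck timers report on three long-running EDPM containers; each health_status event is the result of running the test command declared in the container's healthcheck config against its /openstack mount. The same check can be triggered on demand; a sketch (exit status 0 means healthy):

    import subprocess

    for name in ("ovn_metadata_agent", "ceilometer_agent_compute",
                 "podman_exporter"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")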
Dec 03 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.241713856 +0000 UTC m=+0.073787815 container create a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.210174746 +0000 UTC m=+0.042248755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:28:28 compute-0 systemd[1]: Started libpod-conmon-a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e.scope.
Dec 03 02:28:28 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.427076421 +0000 UTC m=+0.259150430 container init a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.447698574 +0000 UTC m=+0.279772533 container start a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.455250197 +0000 UTC m=+0.287324226 container attach a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:28:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:28:28
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups']
Dec 03 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
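"prepared 0/10 changes" means the upmap balancer evaluated all eleven pools listed above and found nothing worth moving: with all 321 PGs active+clean and evenly placed, each optimization round is a no-op. Its state can be confirmed from the CLI; a sketch:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status["active"], status["mode"])   # expected here: True upmap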
Dec 03 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.613 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:28:28 compute-0 ceph-mon[192821]: pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.100 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:28:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376880488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.196 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
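nova-compute's resource audit shells out to "ceph df" to size the RBD-backed storage, and the 0.584 s round trip above is that call completing. Parsing the fields a capacity check typically needs, as a sketch (key names per recent Ceph JSON output):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)["stats"]
    print("%.1f GiB free of %.1f GiB" % (stats["total_avail_bytes"] / 2**30,
                                         stats["total_bytes"] / 2**30))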
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
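The rbd_support module is reloading its trash-purge and mirror-snapshot schedules for each RBD pool (vms, volumes, backups, images); an empty start_after= suggests each pool's schedule list is being scanned from the beginning. The loaded schedules can be listed per pool; a sketch:

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=True)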
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]: {
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:     "0": [
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:         {
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "devices": [
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "/dev/loop3"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             ],
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_name": "ceph_lv0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_size": "21470642176",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "name": "ceph_lv0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "tags": {
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cluster_name": "ceph",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.crush_device_class": "",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.encrypted": "0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osd_id": "0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.type": "block",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.vdo": "0"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             },
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "type": "block",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "vg_name": "ceph_vg0"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:         }
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:     ],
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:     "1": [
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:         {
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "devices": [
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "/dev/loop4"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             ],
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_name": "ceph_lv1",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_size": "21470642176",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "name": "ceph_lv1",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "tags": {
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cluster_name": "ceph",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.crush_device_class": "",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.encrypted": "0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osd_id": "1",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.type": "block",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.vdo": "0"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             },
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "type": "block",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "vg_name": "ceph_vg1"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:         }
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:     ],
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:     "2": [
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:         {
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "devices": [
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "/dev/loop5"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             ],
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_name": "ceph_lv2",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_size": "21470642176",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "name": "ceph_lv2",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "tags": {
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.cluster_name": "ceph",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.crush_device_class": "",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.encrypted": "0",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osd_id": "2",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.type": "block",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:                 "ceph.vdo": "0"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             },
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "type": "block",
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:             "vg_name": "ceph_vg2"
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:         }
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]:     ]
Dec 03 02:28:29 compute-0 affectionate_ptolemy[467045]: }
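The JSON block above is the payload the "lvm list" dispatch was waiting for: a map from OSD id to the LV backing it, with the ceph.* LV tags exploded into the tags object. A short parsing sketch, assuming the block has been captured to a file named lvm_list.json:

    import json

    with open("lvm_list.json") as f:   # hypothetical capture of the output above
        lvm = json.load(f)

    for osd_id, devs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for d in devs:
            gib = int(d["lv_size"]) / 2**30
            print("osd.%s: %s on %s (%.0f GiB)"
                  % (osd_id, d["lv_path"], d["devices"][0], gib))
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20 GiB)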
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.293 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.293 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.300 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.300 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:28:29 compute-0 systemd[1]: libpod-a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e.scope: Deactivated successfully.
Dec 03 02:28:29 compute-0 podman[467029]: 2025-12-03 02:28:29.328498139 +0000 UTC m=+1.160572128 container died a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c-merged.mount: Deactivated successfully.
Dec 03 02:28:29 compute-0 podman[467029]: 2025-12-03 02:28:29.438073764 +0000 UTC m=+1.270147693 container remove a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 02:28:29 compute-0 systemd[1]: libpod-conmon-a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e.scope: Deactivated successfully.
Dec 03 02:28:29 compute-0 sudo[466871]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:29 compute-0 sudo[467088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:29 compute-0 sudo[467088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:29 compute-0 sudo[467088]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:29 compute-0 sudo[467113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:28:29 compute-0 sudo[467113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:29 compute-0 sudo[467113]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/376880488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:28:29 compute-0 podman[158098]: time="2025-12-03T02:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.769 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.770 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3436MB free_disk=59.89701461791992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.770 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.771 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:28:29 compute-0 sudo[467138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:29 compute-0 sudo[467138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8676 "" "Go-http-client/1.1"
Dec 03 02:28:29 compute-0 sudo[467138]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.874 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.874 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.875 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.875 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:28:29 compute-0 sudo[467163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:28:29 compute-0 sudo[467163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.949 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:28:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:28:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213844537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.402 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.419 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.495032204 +0000 UTC m=+0.089619382 container create 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.546 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.548 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.548 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.46052339 +0000 UTC m=+0.055110588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:28:30 compute-0 systemd[1]: Started libpod-conmon-1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd.scope.
Dec 03 02:28:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.651297568 +0000 UTC m=+0.245884806 container init 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.669341187 +0000 UTC m=+0.263928375 container start 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.67864066 +0000 UTC m=+0.273227908 container attach 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:28:30 compute-0 trusting_wright[467265]: 167 167
Dec 03 02:28:30 compute-0 systemd[1]: libpod-1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd.scope: Deactivated successfully.
Dec 03 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.684211787 +0000 UTC m=+0.278798975 container died 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:28:30 compute-0 ceph-mon[192821]: pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/213844537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff70ff64e0e922ed52ab748117215237ce5ff1e3d6ea5e1c902e4c634a44f758-merged.mount: Deactivated successfully.
Dec 03 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.765071681 +0000 UTC m=+0.359658839 container remove 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:28:30 compute-0 systemd[1]: libpod-conmon-1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd.scope: Deactivated successfully.
Dec 03 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.05469083 +0000 UTC m=+0.100338475 container create 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.020273398 +0000 UTC m=+0.065921043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:28:31 compute-0 systemd[1]: Started libpod-conmon-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope.
Dec 03 02:28:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.217004084 +0000 UTC m=+0.262651699 container init 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.236635769 +0000 UTC m=+0.282283384 container start 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.241168697 +0000 UTC m=+0.286816312 container attach 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:28:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:31 compute-0 nova_compute[351485]: 2025-12-03 02:28:31.698 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]: {
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "osd_id": 2,
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "type": "bluestore"
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:     },
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "osd_id": 1,
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "type": "bluestore"
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:     },
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "osd_id": 0,
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:         "type": "bluestore"
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]:     }
Dec 03 02:28:32 compute-0 unruffled_agnesi[467302]: }
Dec 03 02:28:32 compute-0 systemd[1]: libpod-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope: Deactivated successfully.
Dec 03 02:28:32 compute-0 systemd[1]: libpod-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope: Consumed 1.252s CPU time.
Dec 03 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.548 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.549 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.549 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:28:32 compute-0 podman[467335]: 2025-12-03 02:28:32.57197995 +0000 UTC m=+0.048721877 container died 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046-merged.mount: Deactivated successfully.
Dec 03 02:28:32 compute-0 podman[467335]: 2025-12-03 02:28:32.694259804 +0000 UTC m=+0.171001681 container remove 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:28:32 compute-0 systemd[1]: libpod-conmon-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope: Deactivated successfully.
Dec 03 02:28:32 compute-0 ceph-mon[192821]: pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:32 compute-0 sudo[467163]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:28:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:28:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:28:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:28:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev eddbd822-ef5c-4699-9b27-b80182c76f92 does not exist
Dec 03 02:28:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 13e22447-67d2-4abe-b939-cc2769d0d612 does not exist
Dec 03 02:28:32 compute-0 sudo[467349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:28:32 compute-0 sudo[467349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:32 compute-0 sudo[467349]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.981 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.982 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.982 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.983 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:28:33 compute-0 sudo[467374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:28:33 compute-0 sudo[467374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:28:33 compute-0 sudo[467374]: pam_unix(sudo:session): session closed for user root
Dec 03 02:28:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:28:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.102 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.494 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.517 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.517 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.519 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.520 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:34 compute-0 ceph-mon[192821]: pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:36 compute-0 nova_compute[351485]: 2025-12-03 02:28:36.702 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:36 compute-0 ceph-mon[192821]: pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 03 02:28:36 compute-0 podman[467399]: 2025-12-03 02:28:36.899670552 +0000 UTC m=+0.145655874 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:28:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 02:28:37 compute-0 nova_compute[351485]: 2025-12-03 02:28:37.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:37 compute-0 nova_compute[351485]: 2025-12-03 02:28:37.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:38 compute-0 nova_compute[351485]: 2025-12-03 02:28:38.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:38 compute-0 ceph-mon[192821]: pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 03 02:28:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:28:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015214076846063684 of space, bias 1.0, pg target 0.45642230538191053 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:28:39 compute-0 nova_compute[351485]: 2025-12-03 02:28:39.105 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:28:39 compute-0 podman[467421]: 2025-12-03 02:28:39.879380674 +0000 UTC m=+0.113238349 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:28:39 compute-0 podman[467420]: 2025-12-03 02:28:39.888209224 +0000 UTC m=+0.125443794 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:28:39 compute-0 podman[467422]: 2025-12-03 02:28:39.895768117 +0000 UTC m=+0.119981870 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, name=ubi9, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 02:28:39 compute-0 podman[467419]: 2025-12-03 02:28:39.909650679 +0000 UTC m=+0.153209518 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 03 02:28:39 compute-0 podman[467424]: 2025-12-03 02:28:39.910785401 +0000 UTC m=+0.131052732 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:28:39 compute-0 ceph-mon[192821]: pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 03 02:28:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:41 compute-0 nova_compute[351485]: 2025-12-03 02:28:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:28:41 compute-0 nova_compute[351485]: 2025-12-03 02:28:41.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:28:41 compute-0 nova_compute[351485]: 2025-12-03 02:28:41.705 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:42 compute-0 ceph-mon[192821]: pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:44 compute-0 nova_compute[351485]: 2025-12-03 02:28:44.109 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:44 compute-0 ceph-mon[192821]: pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:46 compute-0 ceph-mon[192821]: pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:46 compute-0 nova_compute[351485]: 2025-12-03 02:28:46.708 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203230717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203230717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:28:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4203230717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:28:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4203230717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:28:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:48 compute-0 ceph-mon[192821]: pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:49 compute-0 nova_compute[351485]: 2025-12-03 02:28:49.112 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:50 compute-0 ceph-mon[192821]: pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 03 02:28:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec 03 02:28:51 compute-0 nova_compute[351485]: 2025-12-03 02:28:51.711 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:52 compute-0 ceph-mon[192821]: pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec 03 02:28:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:28:54 compute-0 nova_compute[351485]: 2025-12-03 02:28:54.118 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:54 compute-0 ceph-mon[192821]: pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:28:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:28:56 compute-0 ceph-mon[192821]: pgmap v2241: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:28:56 compute-0 nova_compute[351485]: 2025-12-03 02:28:56.715 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections...
Dec 03 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections...
Dec 03 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections...
Dec 03 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:28:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:28:58 compute-0 ceph-mon[192821]: pgmap v2242: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:28:58 compute-0 podman[467524]: 2025-12-03 02:28:58.892999323 +0000 UTC m=+0.123795317 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:28:58 compute-0 podman[467523]: 2025-12-03 02:28:58.899895918 +0000 UTC m=+0.134247292 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:28:58 compute-0 podman[467522]: 2025-12-03 02:28:58.910125717 +0000 UTC m=+0.149269977 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 03 02:28:59 compute-0 nova_compute[351485]: 2025-12-03 02:28:59.119 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:28:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:28:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:28:59.663 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:28:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:28:59.664 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:28:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:28:59.665 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
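That acquire/held/released trio is the standard trace oslo.concurrency emits around a named lock; a minimal sketch that produces the same pattern (lock name copied from the log, the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs with the named in-process lock held; oslo.concurrency logs the
        # '"..." acquired ... waited' and '"..." "released" ... held' lines
        # around this body at DEBUG level.
        pass

    check_child_processes()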
Dec 03 02:28:59 compute-0 podman[158098]: time="2025-12-03T02:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8669 "" "Go-http-client/1.1"
Dec 03 02:29:00 compute-0 ceph-mon[192821]: pgmap v2243: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
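The four errors above all reduce to the exporter finding no *.ctl control sockets for ovsdb-server or ovn-northd, which is expected when ovn-northd does not run on a compute node. A quick hedged check of what an appctl-style caller would be looking for, using the run directories this container mounts:

    import glob

    # ovs-appctl targets a daemon via its <name>.<pid>.ctl socket in the run dir.
    for rundir in ('/run/openvswitch', '/run/ovn'):
        socks = glob.glob(f'{rundir}/*.ctl')
        print(rundir, '->', socks or 'no control sockets (daemon not running?)')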
Dec 03 02:29:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:29:01 compute-0 nova_compute[351485]: 2025-12-03 02:29:01.718 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:02 compute-0 ceph-mon[192821]: pgmap v2244: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:29:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:29:04 compute-0 nova_compute[351485]: 2025-12-03 02:29:04.122 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:04 compute-0 ceph-mon[192821]: pgmap v2245: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:29:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:29:06 compute-0 sshd-session[467579]: Received disconnect from 154.113.10.113 port 53560:11: Bye Bye [preauth]
Dec 03 02:29:06 compute-0 sshd-session[467579]: Disconnected from authenticating user root 154.113.10.113 port 53560 [preauth]
Dec 03 02:29:06 compute-0 nova_compute[351485]: 2025-12-03 02:29:06.724 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:06 compute-0 ceph-mon[192821]: pgmap v2246: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 03 02:29:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:07 compute-0 podman[467581]: 2025-12-03 02:29:07.876227958 +0000 UTC m=+0.127097998 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 02:29:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:08 compute-0 ceph-mon[192821]: pgmap v2247: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:09 compute-0 nova_compute[351485]: 2025-12-03 02:29:09.126 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:10 compute-0 ceph-mon[192821]: pgmap v2248: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:10 compute-0 podman[467601]: 2025-12-03 02:29:10.865712301 +0000 UTC m=+0.110111197 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:29:10 compute-0 podman[467600]: 2025-12-03 02:29:10.872092791 +0000 UTC m=+0.114102770 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 02:29:10 compute-0 podman[467602]: 2025-12-03 02:29:10.879518071 +0000 UTC m=+0.107264677 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_id=edpm, distribution-scope=public)
Dec 03 02:29:10 compute-0 podman[467599]: 2025-12-03 02:29:10.890099869 +0000 UTC m=+0.142509621 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 03 02:29:10 compute-0 podman[467607]: 2025-12-03 02:29:10.908354404 +0000 UTC m=+0.127830907 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 03 02:29:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:11 compute-0 nova_compute[351485]: 2025-12-03 02:29:11.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:12 compute-0 ceph-mon[192821]: pgmap v2249: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:14 compute-0 nova_compute[351485]: 2025-12-03 02:29:14.129 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:14 compute-0 ceph-mon[192821]: pgmap v2250: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:16 compute-0 nova_compute[351485]: 2025-12-03 02:29:16.731 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:16 compute-0 ceph-mon[192821]: pgmap v2251: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:18 compute-0 ceph-mon[192821]: pgmap v2252: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:19 compute-0 nova_compute[351485]: 2025-12-03 02:29:19.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.514 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.515 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.530 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.536 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.536 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:29:19.537648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.583 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 42.39453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.627 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 41.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
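The two memory.usage samples are MiB values measured against the m1.nano flavor's 128 MiB of RAM shown in the discovery records above, so utilisation works out to roughly a third (my arithmetic):

    flavor_ram_mib = 128  # m1.nano, per the instance data above
    samples = {'4fb8fc07': 42.39453125, '2890ee5c': 41.953125}
    for instance, usage_mib in samples.items():
        pct = usage_mib / flavor_ram_mib
        print(f"{instance}: {usage_mib:.1f} MiB ({pct:.1%} of flavor RAM)")
    # ~33.1% and ~32.8%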
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.628 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:29:19.629455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.635 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.643 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.646 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.646 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:29:19.645692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.648 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.649 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.649 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:29:19.649169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.650 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.652 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.652 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.652 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:29:19.652122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.655 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.655 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:29:19.654849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:29:19.657839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.681 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.682 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.701 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.702 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
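The disk.device.capacity pollster above emits two samples per instance: 1073741824 bytes is exactly 1 GiB (presumably the root volume), and the second 509952-byte device is plausibly a config-drive image, though the log does not say so. A sketch of fanning per-device stats out into per-device samples, with hypothetical names:

    # Sketch: per_device_stats keys and device names are invented for illustration.
    def stats_to_samples(instance_id, meter, per_device_stats):
        for device, volume in per_device_stats.items():
            yield {"resource_id": f"{instance_id}-{device}",
                   "meter": meter,
                   "volume": volume}

    samples = list(stats_to_samples(
        "4fb8fc07-d7b7-4be8-94da-155b040faf32",
        "disk.device.capacity",
        {"vda": 1073741824, "hdb": 509952}))  # hypothetical device names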
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
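network.incoming.bytes.rate is skipped because its discovery pass returned nothing new this cycle, so there is no work to do. The branch is simple; a sketch under the assumption that discovery hands back a plain list (maybe_poll is an illustrative name):

    # Sketch of the skip branch seen in the line above.
    def maybe_poll(name, discovered_resources, poll):
        if not discovered_resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        return [poll(r) for r in discovered_resources]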
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:29:19.704804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.761 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.762 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.820 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.821 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:29:19.822467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3352022930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 250801539 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:29:19.824215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.827 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.827 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:29:19.826434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:29:19.828581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
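Both instances report power.state volume 1, which in the Nova power-state enumeration means "running" (an assumption based on nova.compute.power_state; the log itself only shows the raw integer). A lookup sketch:

    # Sketch: mapping assumed from Nova's power-state constants, not from this log.
    POWER_STATES = {0: "pending", 1: "running", 3: "paused",
                    4: "shutdown", 6: "crashed", 7: "suspended"}

    def describe_power_sample(volume):
        return POWER_STATES.get(volume, "unknown")

    describe_power_sample(1)  # -> "running"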
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.831 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:29:19.830069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 73138176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.833 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.833 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:29:19.832454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 9097731540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.835 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.835 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10465171027 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.835 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:29:19.834709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:29:19.837234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
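The column after the millisecond timestamp ("14" or "12") distinguishes two workers inside the same agent process: 14 runs the pollsters while 12 applies the heartbeat updates, which is why the "Updated heartbeat for ..." lines interleave with the polling output and occasionally land a millisecond out of order. A queue-based sketch of that split, assuming plain threads (the real agent's concurrency machinery may differ):

    # Sketch: a dedicated updater thread draining heartbeat events.
    import queue
    import threading

    heartbeats = queue.Queue()

    def updater():
        while True:
            name, ts = heartbeats.get()
            print(f"Updated heartbeat for {name} ({ts})")
            heartbeats.task_done()

    threading.Thread(target=updater, daemon=True).start()
    heartbeats.put(("disk.device.write.requests", "2025-12-03T02:29:19.837234"))
    heartbeats.join()  # block until the updater has logged the queued event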
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:29:19.839376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 335420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 338610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
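The cpu meter is cumulative guest CPU time in nanoseconds, so 335420000000 is roughly 335 s of CPU time since the instance started. Utilization between two polls is the usual derived quantity; the formula below (delta over the wall-clock interval, normalized by vCPU count) is the common convention, not something this log states:

    # Sketch: derive cpu utilization (%) from two cumulative cpu samples.
    def cpu_util_percent(cpu_ns_prev, cpu_ns_now, seconds_between, vcpus=1):
        busy_s = (cpu_ns_now - cpu_ns_prev) / 1e9
        return 100.0 * busy_s / (seconds_between * vcpus)

    cpu_util_percent(335420000000, 335720000000, 10)  # -> 3.0 (%)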
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:29:19.840941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:29:19.842382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:29:19.843859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:29:19.845297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:29:19.847363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:29:19.848826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:29:19.850008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
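The block above is one full ceilometer polling pass: for each pollster the agent runs its discovery method, checks whether the pollster's source is bound to a coordination hashring, polls (or skips when discovery returns nothing new), stamps a heartbeat, and reports completion. A toy reconstruction of that control flow, with illustrative names only, not ceilometer's actual classes:

```python
from datetime import datetime, timezone

class ToyPollster:
    """Illustrative stand-in; real pollsters read libvirt/OVS statistics."""
    def __init__(self, name):
        self.name = name

    def get_samples(self, resources):
        return [{"meter": self.name, "resource": r} for r in resources]

def run_cycle(pollsters, discover):
    heartbeats = {}
    for p in pollsters:
        resources = discover()             # "Executing discovery process ..."
        if not resources:                  # "Skip pollster ..., no new resources"
            continue
        samples = p.get_samples(resources)              # "Polling pollster ..."
        heartbeats[p.name] = datetime.now(timezone.utc) # heartbeat update
        print(f"Finished polling pollster {p.name} ({len(samples)} samples)")
    return heartbeats

run_cycle([ToyPollster("disk.root.size")],
          lambda: ["4fb8fc07-d7b7-4be8-94da-155b040faf32"])
```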
Dec 03 02:29:20 compute-0 nova_compute[351485]: 2025-12-03 02:29:20.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:20 compute-0 ceph-mon[192821]: pgmap v2253: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:21 compute-0 nova_compute[351485]: 2025-12-03 02:29:21.734 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:21 compute-0 ceph-mon[192821]: pgmap v2254: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:24 compute-0 nova_compute[351485]: 2025-12-03 02:29:24.135 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:24 compute-0 ceph-mon[192821]: pgmap v2255: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.601149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964601193, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1942, "num_deletes": 251, "total_data_size": 3241966, "memory_usage": 3291648, "flush_reason": "Manual Compaction"}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964627695, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3177366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44356, "largest_seqno": 46297, "table_properties": {"data_size": 3168460, "index_size": 5592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17709, "raw_average_key_size": 20, "raw_value_size": 3150849, "raw_average_value_size": 3560, "num_data_blocks": 249, "num_entries": 885, "num_filter_entries": 885, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728748, "oldest_key_time": 1764728748, "file_creation_time": 1764728964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 27185 microseconds, and 14890 cpu microseconds.
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.628322) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3177366 bytes OK
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.629044) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.631273) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.631292) EVENT_LOG_v1 {"time_micros": 1764728964631285, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.631308) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3233809, prev total WAL file size 3233809, number of live WAL files 2.
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.633092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3102KB)], [107(6330KB)]
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964633150, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9659433, "oldest_snapshot_seqno": -1}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6089 keys, 7919555 bytes, temperature: kUnknown
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964689743, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7919555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7881626, "index_size": 21627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 158542, "raw_average_key_size": 26, "raw_value_size": 7774163, "raw_average_value_size": 1276, "num_data_blocks": 857, "num_entries": 6089, "num_filter_entries": 6089, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.689951) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7919555 bytes
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.691961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.5 rd, 139.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.2 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 6603, records dropped: 514 output_compression: NoCompression
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.691980) EVENT_LOG_v1 {"time_micros": 1764728964691971, "job": 64, "event": "compaction_finished", "compaction_time_micros": 56660, "compaction_time_cpu_micros": 26968, "output_level": 6, "num_output_files": 1, "total_output_size": 7919555, "num_input_records": 6603, "num_output_records": 6089, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964692770, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964694382, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.632594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
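The JOB 63/64 figures above are self-consistent: rocksdb reports write-amplify(2.5) and read-write-amplify(5.5) for the manual compaction of the mon store, and both can be recomputed from the logged byte counts, assuming rocksdb's convention of normalizing by the freshly flushed L0 input:

```python
# Byte counts copied from the JOB 63/64 event lines above.
l0_input = 3_177_366                    # table #109, the new L0 flush
l6_input = 9_659_433 - l0_input         # table #107, via input_data_size
output   = 7_919_555                    # table #110 written back to L6

write_amplify = output / l0_input                               # logs 2.5
read_write_amplify = (l0_input + l6_input + output) / l0_input  # logs 5.5

print(f"write-amplify      {write_amplify:.1f}")
print(f"read-write-amplify {read_write_amplify:.1f}")
```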
Dec 03 02:29:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:26 compute-0 ceph-mon[192821]: pgmap v2256: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:26 compute-0 nova_compute[351485]: 2025-12-03 02:29:26.736 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:29:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:29:28
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', '.mgr', 'vms']
Dec 03 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:29:28 compute-0 ceph-mon[192821]: pgmap v2257: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
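Each balancer wake-up above builds an upmap plan capped at 5% misplaced PGs across the listed pools; with 321/321 PGs already active+clean it prepares 0 of 10 candidate changes. The same state can be inspected from the CLI; a sketch, assuming the JSON form of `ceph balancer status`:

```python
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "--format=json"],
    check=True, capture_output=True, text=True).stdout)
# Expect mode "upmap" and an active balancer, matching the log above.
print(status["mode"], status["active"])
```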
Dec 03 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.139 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:29:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.611 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:29:29 compute-0 podman[158098]: time="2025-12-03T02:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8669 "" "Go-http-client/1.1"
Dec 03 02:29:29 compute-0 podman[467707]: 2025-12-03 02:29:29.841344902 +0000 UTC m=+0.096263358 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:29:29 compute-0 podman[467708]: 2025-12-03 02:29:29.864433823 +0000 UTC m=+0.106736943 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 03 02:29:29 compute-0 podman[467709]: 2025-12-03 02:29:29.901841139 +0000 UTC m=+0.125785691 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
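The GET lines above are the podman exporter scraping the libpod REST API over the podman socket. A minimal sketch of the same call, assuming read access to /run/podman/podman.sock and the field names of the libpod list-containers response:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
for c in containers:
    # "Names" and "State" per the libpod list-containers schema.
    print(c["Names"], c.get("State"))
```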
Dec 03 02:29:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:29:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774826908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.119 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.240 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.240 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.250 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.250 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:29:30 compute-0 ceph-mon[192821]: pgmap v2258: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3774826908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.819 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.821 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.897010803222656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.821 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.822 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.931 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.932 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.932 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.933 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.011 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 03 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:29:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:29:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998550311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.473 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.488 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:29:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.518 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.522 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.523 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
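The resource-audit pass above brackets two `ceph df --format=json` probes between the compute_resources lock acquire and release, then compares the result against placement inventory. A sketch of the same probe; the JSON parsing here is illustrative, not nova's actual code path:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout
stats = json.loads(out)["stats"]
# Should agree with the "60 GiB / 60 GiB avail" pgmap lines above.
print(f'{stats["total_avail_bytes"] / 1024**3:.0f} GiB free of '
      f'{stats["total_bytes"] / 1024**3:.0f} GiB')
```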
Dec 03 02:29:31 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1998550311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.741 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:32 compute-0 ceph-mon[192821]: pgmap v2259: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:33 compute-0 sudo[467809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:33 compute-0 sudo[467809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:33 compute-0 sudo[467809]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:33 compute-0 sudo[467834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:29:33 compute-0 sudo[467834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:33 compute-0 sudo[467834]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:33 compute-0 sudo[467859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:33 compute-0 sudo[467859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:33 compute-0 sudo[467859]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:33 compute-0 sudo[467884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:29:33 compute-0 sudo[467884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:34 compute-0 nova_compute[351485]: 2025-12-03 02:29:34.141 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:34 compute-0 nova_compute[351485]: 2025-12-03 02:29:34.523 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:34 compute-0 nova_compute[351485]: 2025-12-03 02:29:34.524 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:29:34 compute-0 sudo[467884]: pam_unix(sudo:session): session closed for user root
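The sudo triplet above is cephadm's ssh orchestration probing the host: `/bin/true` as a connectivity check, `which python3` to locate an interpreter, then the deployed cephadm binary with `gather-facts`. The call can be replayed by hand; paths and timeout are copied from the log, and the printed key names are assumptions about cephadm's HostFacts JSON dump:

```python
import json
import subprocess

cephadm = ("/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
facts = json.loads(subprocess.run(
    ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"],
    check=True, capture_output=True, text=True).stdout)
# Key names are assumptions, not a documented contract.
print(facts.get("hostname"), facts.get("memory_total_kb"))
```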
Dec 03 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:29:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b37dd645-7f74-4429-a7e1-8f591d4bfd7c does not exist
Dec 03 02:29:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 741949d4-965f-4d6e-824b-a4f615b23562 does not exist
Dec 03 02:29:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a1eeb915-513b-4756-b264-54071b5f0556 does not exist
Dec 03 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: pgmap v2260: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:29:34 compute-0 sudo[467938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:34 compute-0 sudo[467938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:34 compute-0 sudo[467938]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:34 compute-0 sudo[467963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:29:34 compute-0 sudo[467963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:34 compute-0 sudo[467963]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:35 compute-0 nova_compute[351485]: 2025-12-03 02:29:35.020 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:29:35 compute-0 nova_compute[351485]: 2025-12-03 02:29:35.020 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:29:35 compute-0 nova_compute[351485]: 2025-12-03 02:29:35.021 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:29:35 compute-0 sudo[467988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:35 compute-0 sudo[467988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:35 compute-0 sudo[467988]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:35 compute-0 sudo[468013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:29:35 compute-0 sudo[468013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:35 compute-0 podman[468075]: 2025-12-03 02:29:35.845664352 +0000 UTC m=+0.093956413 container create a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:29:35 compute-0 podman[468075]: 2025-12-03 02:29:35.815274614 +0000 UTC m=+0.063566685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:29:35 compute-0 systemd[1]: Started libpod-conmon-a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee.scope.
Dec 03 02:29:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.040935381 +0000 UTC m=+0.289227502 container init a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.059468694 +0000 UTC m=+0.307760755 container start a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.067140851 +0000 UTC m=+0.315432972 container attach a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:29:36 compute-0 compassionate_meitner[468091]: 167 167
Dec 03 02:29:36 compute-0 systemd[1]: libpod-a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee.scope: Deactivated successfully.
Dec 03 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.075220539 +0000 UTC m=+0.323512620 container died a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7eed382ad3b93035c50df1047118ef6062a15c548ea6388e0eeb69de7073c17-merged.mount: Deactivated successfully.
Dec 03 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.171963339 +0000 UTC m=+0.420255410 container remove a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:29:36 compute-0 systemd[1]: libpod-conmon-a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee.scope: Deactivated successfully.
Dec 03 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.323 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.341 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.341 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.342 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.342 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.343 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.446828215 +0000 UTC m=+0.076420297 container create 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.420482352 +0000 UTC m=+0.050074424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:29:36 compute-0 systemd[1]: Started libpod-conmon-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope.
Dec 03 02:29:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.599233596 +0000 UTC m=+0.228825738 container init 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.627287138 +0000 UTC m=+0.256879200 container start 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.632133105 +0000 UTC m=+0.261725167 container attach 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:29:36 compute-0 ceph-mon[192821]: pgmap v2261: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.745 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:37 compute-0 eloquent_nobel[468129]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:29:37 compute-0 eloquent_nobel[468129]: --> relative data size: 1.0
Dec 03 02:29:37 compute-0 eloquent_nobel[468129]: --> All data devices are unavailable
Dec 03 02:29:37 compute-0 systemd[1]: libpod-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope: Deactivated successfully.
Dec 03 02:29:37 compute-0 podman[468114]: 2025-12-03 02:29:37.787995163 +0000 UTC m=+1.417587315 container died 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:29:37 compute-0 systemd[1]: libpod-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope: Consumed 1.106s CPU time.
Dec 03 02:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf-merged.mount: Deactivated successfully.
Dec 03 02:29:37 compute-0 podman[468114]: 2025-12-03 02:29:37.890041163 +0000 UTC m=+1.519633225 container remove 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:29:37 compute-0 systemd[1]: libpod-conmon-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope: Deactivated successfully.
Dec 03 02:29:37 compute-0 sudo[468013]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:38 compute-0 sudo[468176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:38 compute-0 podman[468170]: 2025-12-03 02:29:38.079498029 +0000 UTC m=+0.132310475 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:29:38 compute-0 sudo[468176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:38 compute-0 sudo[468176]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:38 compute-0 sudo[468215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:29:38 compute-0 sudo[468215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:38 compute-0 sudo[468215]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:38 compute-0 sudo[468240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:38 compute-0 sudo[468240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:38 compute-0 sudo[468240]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:38 compute-0 sudo[468265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:29:38 compute-0 sudo[468265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:38 compute-0 ceph-mon[192821]: pgmap v2262: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521471275314189 of space, bias 1.0, pg target 0.45644138259425665 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.078524092 +0000 UTC m=+0.081543872 container create be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.048472694 +0000 UTC m=+0.051492484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:29:39 compute-0 nova_compute[351485]: 2025-12-03 02:29:39.145 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:39 compute-0 systemd[1]: Started libpod-conmon-be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18.scope.
Dec 03 02:29:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.236236842 +0000 UTC m=+0.239256612 container init be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.256200036 +0000 UTC m=+0.259219806 container start be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.265620712 +0000 UTC m=+0.268640542 container attach be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:29:39 compute-0 determined_mccarthy[468347]: 167 167
Dec 03 02:29:39 compute-0 systemd[1]: libpod-be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18.scope: Deactivated successfully.
Dec 03 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.271664782 +0000 UTC m=+0.274684612 container died be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 03 02:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c93609aeb7020dab022d3a5be9bc25d458af7f25d2b7a8b14759cede45820849-merged.mount: Deactivated successfully.
Dec 03 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.345037953 +0000 UTC m=+0.348057703 container remove be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:29:39 compute-0 systemd[1]: libpod-conmon-be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18.scope: Deactivated successfully.
Dec 03 02:29:39 compute-0 nova_compute[351485]: 2025-12-03 02:29:39.390 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:39 compute-0 nova_compute[351485]: 2025-12-03 02:29:39.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.655168474 +0000 UTC m=+0.097510223 container create e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.616664007 +0000 UTC m=+0.059005806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:29:39 compute-0 systemd[1]: Started libpod-conmon-e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634.scope.
Dec 03 02:29:39 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.864298725 +0000 UTC m=+0.306640504 container init e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.901966188 +0000 UTC m=+0.344307947 container start e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.908997577 +0000 UTC m=+0.351339326 container attach e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:29:40 compute-0 ceph-mon[192821]: pgmap v2263: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]: {
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:     "0": [
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:         {
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "devices": [
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "/dev/loop3"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             ],
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_name": "ceph_lv0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_size": "21470642176",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "name": "ceph_lv0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "tags": {
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cluster_name": "ceph",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.crush_device_class": "",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.encrypted": "0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osd_id": "0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.type": "block",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.vdo": "0"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             },
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "type": "block",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "vg_name": "ceph_vg0"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:         }
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:     ],
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:     "1": [
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:         {
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "devices": [
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "/dev/loop4"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             ],
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_name": "ceph_lv1",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_size": "21470642176",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "name": "ceph_lv1",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "tags": {
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cluster_name": "ceph",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.crush_device_class": "",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.encrypted": "0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osd_id": "1",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.type": "block",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.vdo": "0"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             },
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "type": "block",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "vg_name": "ceph_vg1"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:         }
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:     ],
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:     "2": [
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:         {
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "devices": [
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "/dev/loop5"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             ],
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_name": "ceph_lv2",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_size": "21470642176",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "name": "ceph_lv2",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "tags": {
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.cluster_name": "ceph",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.crush_device_class": "",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.encrypted": "0",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osd_id": "2",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.type": "block",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:                 "ceph.vdo": "0"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             },
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "type": "block",
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:             "vg_name": "ceph_vg2"
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:         }
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]:     ]
Dec 03 02:29:40 compute-0 inspiring_kalam[468387]: }
Dec 03 02:29:40 compute-0 systemd[1]: libpod-e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634.scope: Deactivated successfully.
Dec 03 02:29:40 compute-0 podman[468371]: 2025-12-03 02:29:40.790617386 +0000 UTC m=+1.232959145 container died e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:29:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a-merged.mount: Deactivated successfully.
Dec 03 02:29:40 compute-0 podman[468371]: 2025-12-03 02:29:40.888034745 +0000 UTC m=+1.330376504 container remove e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:29:40 compute-0 systemd[1]: libpod-conmon-e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634.scope: Deactivated successfully.
Dec 03 02:29:40 compute-0 sudo[468265]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:41 compute-0 sudo[468432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:41 compute-0 sudo[468432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:41 compute-0 sudo[468432]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:41 compute-0 podman[468412]: 2025-12-03 02:29:41.066864772 +0000 UTC m=+0.104153791 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 02:29:41 compute-0 podman[468411]: 2025-12-03 02:29:41.073586031 +0000 UTC m=+0.114550873 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:29:41 compute-0 podman[468413]: 2025-12-03 02:29:41.082295637 +0000 UTC m=+0.114552764 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:29:41 compute-0 podman[468410]: 2025-12-03 02:29:41.10897479 +0000 UTC m=+0.149310415 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Dec 03 02:29:41 compute-0 podman[468409]: 2025-12-03 02:29:41.119407874 +0000 UTC m=+0.161660183 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 03 02:29:41 compute-0 sudo[468524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:29:41 compute-0 sudo[468524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:41 compute-0 sudo[468524]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:41 compute-0 sudo[468563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:41 compute-0 sudo[468563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:41 compute-0 sudo[468563]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:41 compute-0 sudo[468588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:29:41 compute-0 sudo[468588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:41 compute-0 nova_compute[351485]: 2025-12-03 02:29:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:29:41 compute-0 nova_compute[351485]: 2025-12-03 02:29:41.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:29:41 compute-0 nova_compute[351485]: 2025-12-03 02:29:41.749 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:41 compute-0 podman[468651]: 2025-12-03 02:29:41.895621949 +0000 UTC m=+0.101835125 container create 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:29:41 compute-0 podman[468651]: 2025-12-03 02:29:41.852806241 +0000 UTC m=+0.059019457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:29:41 compute-0 systemd[1]: Started libpod-conmon-1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250.scope.
Dec 03 02:29:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.053811743 +0000 UTC m=+0.260024969 container init 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.070817593 +0000 UTC m=+0.277030759 container start 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 03 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.077815921 +0000 UTC m=+0.284029097 container attach 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:29:42 compute-0 sleepy_clarke[468667]: 167 167
Dec 03 02:29:42 compute-0 systemd[1]: libpod-1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250.scope: Deactivated successfully.
Dec 03 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.084848419 +0000 UTC m=+0.291061595 container died 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 02:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecbe0678e71fa0f774b2b1b61b85cd3f5e0fa4a2066ee1a76bc087399e41963d-merged.mount: Deactivated successfully.
Dec 03 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.182059202 +0000 UTC m=+0.388272368 container remove 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:29:42 compute-0 systemd[1]: libpod-conmon-1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250.scope: Deactivated successfully.
Dec 03 02:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2810 syncs, 3.67 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 738 writes, 2541 keys, 738 commit groups, 1.0 writes per commit group, ingest: 3.34 MB, 0.01 MB/s
                                            Interval WAL: 738 writes, 303 syncs, 2.44 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.507839576 +0000 UTC m=+0.113054892 container create c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.466730196 +0000 UTC m=+0.071945562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:29:42 compute-0 systemd[1]: Started libpod-conmon-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope.
Dec 03 02:29:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.675650371 +0000 UTC m=+0.280865657 container init c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.692653461 +0000 UTC m=+0.297868777 container start c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.699445673 +0000 UTC m=+0.304660959 container attach c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 02:29:42 compute-0 ceph-mon[192821]: pgmap v2264: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]: {
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "osd_id": 2,
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "type": "bluestore"
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:     },
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "osd_id": 1,
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "type": "bluestore"
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:     },
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "osd_id": 0,
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:         "type": "bluestore"
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]:     }
Dec 03 02:29:43 compute-0 cool_ptolemy[468703]: }
Dec 03 02:29:43 compute-0 systemd[1]: libpod-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope: Deactivated successfully.
Dec 03 02:29:43 compute-0 podman[468688]: 2025-12-03 02:29:43.912156645 +0000 UTC m=+1.517371941 container died c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 02:29:43 compute-0 systemd[1]: libpod-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope: Consumed 1.214s CPU time.
Dec 03 02:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035-merged.mount: Deactivated successfully.
Dec 03 02:29:43 compute-0 podman[468688]: 2025-12-03 02:29:43.988857249 +0000 UTC m=+1.594072545 container remove c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:29:44 compute-0 systemd[1]: libpod-conmon-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope: Deactivated successfully.
Dec 03 02:29:44 compute-0 sudo[468588]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:29:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:29:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:29:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:29:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f5ff9db9-92c7-4702-9550-def9cc191582 does not exist
Dec 03 02:29:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev af21abab-3b1d-499e-a80a-4c8452597db0 does not exist
Dec 03 02:29:44 compute-0 nova_compute[351485]: 2025-12-03 02:29:44.148 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:44 compute-0 sudo[468747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:29:44 compute-0 sudo[468747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:44 compute-0 sudo[468747]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:44 compute-0 sudo[468772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:29:44 compute-0 sudo[468772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:29:44 compute-0 sudo[468772]: pam_unix(sudo:session): session closed for user root
Dec 03 02:29:44 compute-0 ceph-mon[192821]: pgmap v2265: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:29:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:29:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:46 compute-0 nova_compute[351485]: 2025-12-03 02:29:46.753 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:46 compute-0 ceph-mon[192821]: pgmap v2266: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:29:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4103974287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:29:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:29:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4103974287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:29:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4103974287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:29:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/4103974287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:29:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3272 syncs, 3.60 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 751 writes, 2590 keys, 751 commit groups, 1.0 writes per commit group, ingest: 3.26 MB, 0.01 MB/s
                                            Interval WAL: 751 writes, 299 syncs, 2.51 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:29:48 compute-0 ceph-mon[192821]: pgmap v2267: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:49 compute-0 nova_compute[351485]: 2025-12-03 02:29:49.152 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:50 compute-0 ceph-mon[192821]: pgmap v2268: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:51 compute-0 nova_compute[351485]: 2025-12-03 02:29:51.756 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:52 compute-0 ceph-mon[192821]: pgmap v2269: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:54 compute-0 nova_compute[351485]: 2025-12-03 02:29:54.155 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:54 compute-0 ceph-mon[192821]: pgmap v2270: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9225 writes, 35K keys, 9225 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9225 writes, 2410 syncs, 3.83 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 311 writes, 768 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                            Interval WAL: 311 writes, 149 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:29:56 compute-0 nova_compute[351485]: 2025-12-03 02:29:56.759 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:56 compute-0 ceph-mon[192821]: pgmap v2271: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 02:29:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:29:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:29:58 compute-0 ceph-mon[192821]: pgmap v2272: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:59 compute-0 nova_compute[351485]: 2025-12-03 02:29:59.158 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:29:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:29:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:29:59.664 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:29:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:29:59.665 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:29:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:29:59.666 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:29:59 compute-0 podman[158098]: time="2025-12-03T02:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 03 02:30:00 compute-0 podman[468800]: 2025-12-03 02:30:00.881435529 +0000 UTC m=+0.114163573 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:30:00 compute-0 podman[468799]: 2025-12-03 02:30:00.88251712 +0000 UTC m=+0.118119895 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:30:00 compute-0 ceph-mon[192821]: pgmap v2273: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:00 compute-0 podman[468798]: 2025-12-03 02:30:00.906430424 +0000 UTC m=+0.145736793 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:30:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:30:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:30:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:01 compute-0 nova_compute[351485]: 2025-12-03 02:30:01.762 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:01 compute-0 ceph-mon[192821]: pgmap v2274: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:04 compute-0 nova_compute[351485]: 2025-12-03 02:30:04.161 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:04 compute-0 ceph-mon[192821]: pgmap v2275: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:06 compute-0 ceph-mon[192821]: pgmap v2276: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:06 compute-0 nova_compute[351485]: 2025-12-03 02:30:06.767 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:08 compute-0 ceph-mon[192821]: pgmap v2277: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:08 compute-0 podman[468858]: 2025-12-03 02:30:08.869042579 +0000 UTC m=+0.116623422 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:30:09 compute-0 nova_compute[351485]: 2025-12-03 02:30:09.164 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:10 compute-0 ceph-mon[192821]: pgmap v2278: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:11 compute-0 nova_compute[351485]: 2025-12-03 02:30:11.770 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:11 compute-0 podman[468877]: 2025-12-03 02:30:11.860368891 +0000 UTC m=+0.094467456 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 03 02:30:11 compute-0 podman[468878]: 2025-12-03 02:30:11.878936585 +0000 UTC m=+0.105550939 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 03 02:30:11 compute-0 podman[468879]: 2025-12-03 02:30:11.906368249 +0000 UTC m=+0.125476011 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, config_id=edpm, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30)
Dec 03 02:30:11 compute-0 podman[468876]: 2025-12-03 02:30:11.910851846 +0000 UTC m=+0.153549084 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:30:11 compute-0 podman[468885]: 2025-12-03 02:30:11.912644897 +0000 UTC m=+0.123329612 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:30:12 compute-0 ceph-mon[192821]: pgmap v2279: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:14 compute-0 nova_compute[351485]: 2025-12-03 02:30:14.167 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:14 compute-0 ceph-mon[192821]: pgmap v2280: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:16 compute-0 ceph-mon[192821]: pgmap v2281: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:16 compute-0 nova_compute[351485]: 2025-12-03 02:30:16.774 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:18 compute-0 ceph-mon[192821]: pgmap v2282: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:19 compute-0 nova_compute[351485]: 2025-12-03 02:30:19.170 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:20 compute-0 nova_compute[351485]: 2025-12-03 02:30:20.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:20 compute-0 ceph-mon[192821]: pgmap v2283: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:21 compute-0 nova_compute[351485]: 2025-12-03 02:30:21.777 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:22 compute-0 ceph-mon[192821]: pgmap v2284: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:24 compute-0 nova_compute[351485]: 2025-12-03 02:30:24.175 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:24 compute-0 ceph-mon[192821]: pgmap v2285: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:26 compute-0 ceph-mon[192821]: pgmap v2286: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:26 compute-0 nova_compute[351485]: 2025-12-03 02:30:26.781 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:30:28
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms']
Dec 03 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:30:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:28 compute-0 ceph-mon[192821]: pgmap v2287: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:29 compute-0 nova_compute[351485]: 2025-12-03 02:30:29.178 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:30:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:29 compute-0 nova_compute[351485]: 2025-12-03 02:30:29.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:29 compute-0 podman[158098]: time="2025-12-03T02:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8665 "" "Go-http-client/1.1"
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.085 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.086 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.086 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.087 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.088 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:30:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:30:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597456042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.622 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.725 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.725 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.732 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.733 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:30:30 compute-0 ceph-mon[192821]: pgmap v2288: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/597456042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.367 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.368 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3474MB free_disk=59.897010803222656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.369 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.369 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.463 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.464 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.464 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.464 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.478 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.497 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.498 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.510 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.528 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:30:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.583 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.786 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:31 compute-0 podman[469000]: 2025-12-03 02:30:31.86594991 +0000 UTC m=+0.099149219 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:30:31 compute-0 podman[468998]: 2025-12-03 02:30:31.869214382 +0000 UTC m=+0.108421790 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:30:31 compute-0 podman[468999]: 2025-12-03 02:30:31.878173635 +0000 UTC m=+0.124033361 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:30:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:30:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618414721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.232 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.257 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.258 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.259 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:30:32 compute-0 ceph-mon[192821]: pgmap v2289: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3618414721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:30:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:34 compute-0 nova_compute[351485]: 2025-12-03 02:30:34.182 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:34 compute-0 ceph-mon[192821]: pgmap v2290: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:36 compute-0 nova_compute[351485]: 2025-12-03 02:30:36.791 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:36 compute-0 ceph-mon[192821]: pgmap v2291: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:37 compute-0 nova_compute[351485]: 2025-12-03 02:30:37.258 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:37 compute-0 nova_compute[351485]: 2025-12-03 02:30:37.259 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:30:37 compute-0 nova_compute[351485]: 2025-12-03 02:30:37.260 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:30:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.004 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.005 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.005 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.005 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:30:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:38 compute-0 ceph-mon[192821]: pgmap v2292: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521471275314189 of space, bias 1.0, pg target 0.45644138259425665 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.185 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.589 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.607 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.608 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.610 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.611 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:39 compute-0 podman[469074]: 2025-12-03 02:30:39.869380084 +0000 UTC m=+0.122582950 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:30:40 compute-0 ceph-mon[192821]: pgmap v2293: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:40 compute-0 nova_compute[351485]: 2025-12-03 02:30:40.923 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:40 compute-0 nova_compute[351485]: 2025-12-03 02:30:40.925 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:41 compute-0 nova_compute[351485]: 2025-12-03 02:30:41.794 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:42 compute-0 podman[469095]: 2025-12-03 02:30:42.871654604 +0000 UTC m=+0.095512056 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:30:42 compute-0 podman[469094]: 2025-12-03 02:30:42.871558642 +0000 UTC m=+0.103513233 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec 03 02:30:42 compute-0 podman[469102]: 2025-12-03 02:30:42.893726937 +0000 UTC m=+0.104681995 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:30:42 compute-0 ceph-mon[192821]: pgmap v2294: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:42 compute-0 podman[469096]: 2025-12-03 02:30:42.924894567 +0000 UTC m=+0.142598415 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:30:42 compute-0 podman[469093]: 2025-12-03 02:30:42.92749828 +0000 UTC m=+0.166173600 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:30:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:43 compute-0 nova_compute[351485]: 2025-12-03 02:30:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:30:43 compute-0 nova_compute[351485]: 2025-12-03 02:30:43.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:30:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:44 compute-0 nova_compute[351485]: 2025-12-03 02:30:44.189 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:44 compute-0 sudo[469198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:44 compute-0 sudo[469198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:44 compute-0 sudo[469198]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:44 compute-0 sudo[469223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:30:44 compute-0 sudo[469223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:44 compute-0 sudo[469223]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:44 compute-0 sudo[469248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:44 compute-0 sudo[469248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:44 compute-0 sudo[469248]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:44 compute-0 ceph-mon[192821]: pgmap v2295: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:44 compute-0 sudo[469273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:30:44 compute-0 sudo[469273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:45 compute-0 sshd-session[469196]: Invalid user drcomadmin from 154.113.10.113 port 51562
Dec 03 02:30:45 compute-0 sshd-session[469196]: Received disconnect from 154.113.10.113 port 51562:11: Bye Bye [preauth]
Dec 03 02:30:45 compute-0 sshd-session[469196]: Disconnected from invalid user drcomadmin 154.113.10.113 port 51562 [preauth]
Dec 03 02:30:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:45 compute-0 sudo[469273]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:30:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d39ec02b-dd99-4c3d-b332-cdfa66d71aed does not exist
Dec 03 02:30:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 31747156-abaf-4f70-872b-e94e937d8f37 does not exist
Dec 03 02:30:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7034b79f-9a97-406e-b0c1-c60d0dd70949 does not exist
Dec 03 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:30:45 compute-0 sudo[469327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:30:45 compute-0 sudo[469327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:45 compute-0 sudo[469327]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:46 compute-0 sudo[469352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:30:46 compute-0 sudo[469352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:46 compute-0 sudo[469352]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:46 compute-0 sudo[469377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:46 compute-0 sudo[469377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:46 compute-0 sudo[469377]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:46 compute-0 sudo[469402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:30:46 compute-0 sudo[469402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:46 compute-0 nova_compute[351485]: 2025-12-03 02:30:46.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:46 compute-0 ceph-mon[192821]: pgmap v2296: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:46 compute-0 podman[469463]: 2025-12-03 02:30:46.957911936 +0000 UTC m=+0.082891110 container create 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:46.921851029 +0000 UTC m=+0.046830273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:30:47 compute-0 systemd[1]: Started libpod-conmon-8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f.scope.
Dec 03 02:30:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.094724447 +0000 UTC m=+0.219703621 container init 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.113017263 +0000 UTC m=+0.237996447 container start 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.119877917 +0000 UTC m=+0.244857071 container attach 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:30:47 compute-0 beautiful_nightingale[469477]: 167 167
Dec 03 02:30:47 compute-0 systemd[1]: libpod-8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f.scope: Deactivated successfully.
Dec 03 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.127256045 +0000 UTC m=+0.252235279 container died 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 02:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-28d92719bd5b18741c4ebccf17cbba998f02afda01b902580d048c6e9f812e91-merged.mount: Deactivated successfully.
Dec 03 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.202821258 +0000 UTC m=+0.327800412 container remove 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 03 02:30:47 compute-0 systemd[1]: libpod-conmon-8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f.scope: Deactivated successfully.
Dec 03 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.492150813 +0000 UTC m=+0.094096577 container create bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.454927252 +0000 UTC m=+0.056873086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:30:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:47 compute-0 systemd[1]: Started libpod-conmon-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope.
Dec 03 02:30:47 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.675910187 +0000 UTC m=+0.277855971 container init bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.693101682 +0000 UTC m=+0.295047456 container start bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.70011056 +0000 UTC m=+0.302056304 container attach bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:30:47 compute-0 ceph-mon[192821]: pgmap v2297: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:49 compute-0 blissful_satoshi[469517]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:30:49 compute-0 blissful_satoshi[469517]: --> relative data size: 1.0
Dec 03 02:30:49 compute-0 blissful_satoshi[469517]: --> All data devices are unavailable
Dec 03 02:30:49 compute-0 systemd[1]: libpod-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope: Deactivated successfully.
Dec 03 02:30:49 compute-0 podman[469503]: 2025-12-03 02:30:49.091457544 +0000 UTC m=+1.693403298 container died bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:30:49 compute-0 systemd[1]: libpod-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope: Consumed 1.332s CPU time.
Dec 03 02:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635-merged.mount: Deactivated successfully.
Dec 03 02:30:49 compute-0 nova_compute[351485]: 2025-12-03 02:30:49.193 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:49 compute-0 podman[469503]: 2025-12-03 02:30:49.206900262 +0000 UTC m=+1.808846016 container remove bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:30:49 compute-0 systemd[1]: libpod-conmon-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope: Deactivated successfully.
Dec 03 02:30:49 compute-0 sudo[469402]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:49 compute-0 sudo[469559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:49 compute-0 sudo[469559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:49 compute-0 sudo[469559]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:49 compute-0 sudo[469584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:30:49 compute-0 sudo[469584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:49 compute-0 sudo[469584]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:49 compute-0 sudo[469610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:49 compute-0 sudo[469610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:49 compute-0 sudo[469610]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:49 compute-0 sudo[469635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:30:49 compute-0 sudo[469635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.476679655 +0000 UTC m=+0.099409236 container create 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.436882802 +0000 UTC m=+0.059612453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:30:50 compute-0 systemd[1]: Started libpod-conmon-69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b.scope.
Dec 03 02:30:50 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.60972783 +0000 UTC m=+0.232457381 container init 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.626435031 +0000 UTC m=+0.249164612 container start 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.634040136 +0000 UTC m=+0.256769737 container attach 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:30:50 compute-0 ceph-mon[192821]: pgmap v2298: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:50 compute-0 recursing_bhabha[469712]: 167 167
Dec 03 02:30:50 compute-0 systemd[1]: libpod-69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b.scope: Deactivated successfully.
Dec 03 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.642088453 +0000 UTC m=+0.264818024 container died 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 02:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0be077ea56e511227f10cad255d9bd78b54229ba85dbb8d06b8683610c0101-merged.mount: Deactivated successfully.
Dec 03 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.715971838 +0000 UTC m=+0.338701379 container remove 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 02:30:50 compute-0 systemd[1]: libpod-conmon-69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b.scope: Deactivated successfully.
Dec 03 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.012875616 +0000 UTC m=+0.104719096 container create 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:50.969288666 +0000 UTC m=+0.061132186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:30:51 compute-0 systemd[1]: Started libpod-conmon-16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f.scope.
Dec 03 02:30:51 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.238248905 +0000 UTC m=+0.330092445 container init 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.273680885 +0000 UTC m=+0.365524355 container start 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.281319681 +0000 UTC m=+0.373163221 container attach 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:30:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:51 compute-0 nova_compute[351485]: 2025-12-03 02:30:51.801 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:52 compute-0 gracious_leakey[469750]: {
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:     "0": [
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:         {
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "devices": [
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "/dev/loop3"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             ],
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_name": "ceph_lv0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_size": "21470642176",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "name": "ceph_lv0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "tags": {
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cluster_name": "ceph",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.crush_device_class": "",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.encrypted": "0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osd_id": "0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.type": "block",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.vdo": "0"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             },
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "type": "block",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "vg_name": "ceph_vg0"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:         }
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:     ],
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:     "1": [
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:         {
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "devices": [
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "/dev/loop4"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             ],
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_name": "ceph_lv1",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_size": "21470642176",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "name": "ceph_lv1",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "tags": {
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cluster_name": "ceph",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.crush_device_class": "",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.encrypted": "0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osd_id": "1",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.type": "block",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.vdo": "0"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             },
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "type": "block",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "vg_name": "ceph_vg1"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:         }
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:     ],
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:     "2": [
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:         {
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "devices": [
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "/dev/loop5"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             ],
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_name": "ceph_lv2",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_size": "21470642176",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "name": "ceph_lv2",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "tags": {
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.cluster_name": "ceph",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.crush_device_class": "",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.encrypted": "0",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osd_id": "2",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.type": "block",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:                 "ceph.vdo": "0"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             },
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "type": "block",
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:             "vg_name": "ceph_vg2"
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:         }
Dec 03 02:30:52 compute-0 gracious_leakey[469750]:     ]
Dec 03 02:30:52 compute-0 gracious_leakey[469750]: }
Dec 03 02:30:52 compute-0 systemd[1]: libpod-16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f.scope: Deactivated successfully.
Dec 03 02:30:52 compute-0 podman[469759]: 2025-12-03 02:30:52.200688296 +0000 UTC m=+0.051722861 container died 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d-merged.mount: Deactivated successfully.
Dec 03 02:30:52 compute-0 podman[469759]: 2025-12-03 02:30:52.324900111 +0000 UTC m=+0.175934656 container remove 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 02:30:52 compute-0 systemd[1]: libpod-conmon-16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f.scope: Deactivated successfully.
Dec 03 02:30:52 compute-0 sudo[469635]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:52 compute-0 sudo[469773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:52 compute-0 sudo[469773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:52 compute-0 sudo[469773]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:52 compute-0 ceph-mon[192821]: pgmap v2299: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:52 compute-0 sudo[469798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:30:52 compute-0 sudo[469798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:52 compute-0 sudo[469798]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:52 compute-0 sudo[469823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:52 compute-0 sudo[469823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:52 compute-0 sudo[469823]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:52 compute-0 sudo[469848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:30:52 compute-0 sudo[469848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.620007399 +0000 UTC m=+0.089415815 container create f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:30:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.590005462 +0000 UTC m=+0.059413878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:30:53 compute-0 systemd[1]: Started libpod-conmon-f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2.scope.
Dec 03 02:30:53 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.723707495 +0000 UTC m=+0.193115961 container init f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.732173854 +0000 UTC m=+0.201582230 container start f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:30:53 compute-0 strange_hypatia[469923]: 167 167
Dec 03 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.736727332 +0000 UTC m=+0.206135798 container attach f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:30:53 compute-0 systemd[1]: libpod-f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2.scope: Deactivated successfully.
Dec 03 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.739240203 +0000 UTC m=+0.208648599 container died f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e5f4c6474010f34d79cfa96859b2a720b490bcf20d007cad1690a2963672fe4-merged.mount: Deactivated successfully.
Dec 03 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.775497276 +0000 UTC m=+0.244905652 container remove f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 03 02:30:53 compute-0 systemd[1]: libpod-conmon-f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2.scope: Deactivated successfully.
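
The burst above is one complete lifecycle for the short-lived container strange_hypatia: init, start, attach, died, and remove all land within about 50 ms, with the attached process printing only "167 167" (likely the ceph user and group IDs that cephadm probes for). A minimal sketch for watching such lifecycle events live, assuming podman is on PATH and that this podman version supports the Go-template fields used below:

    import subprocess

    # Stream podman lifecycle events like the ones journald recorded above.
    # Field names (.Time, .Type, .Status, .Name) are assumptions for this
    # podman version; adjust if `podman events --format json` differs.
    proc = subprocess.Popen(
        ["podman", "events", "--format",
         "{{.Time}} {{.Type}} {{.Status}} {{.Name}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        if any(s in line for s in ("init", "start", "died", "remove")):
            print(line.rstrip())
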
Dec 03 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.0226054 +0000 UTC m=+0.070561262 container create 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:30:54 compute-0 systemd[1]: Started libpod-conmon-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope.
Dec 03 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.002111422 +0000 UTC m=+0.050067264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:30:54 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.177363417 +0000 UTC m=+0.225319269 container init 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec 03 02:30:54 compute-0 nova_compute[351485]: 2025-12-03 02:30:54.195 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.204860763 +0000 UTC m=+0.252816605 container start 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 03 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.211247223 +0000 UTC m=+0.259203145 container attach 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 02:30:54 compute-0 ceph-mon[192821]: pgmap v2300: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:55 compute-0 stoic_nobel[469966]: {
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "osd_id": 2,
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "type": "bluestore"
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:     },
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "osd_id": 1,
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "type": "bluestore"
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:     },
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "osd_id": 0,
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:         "type": "bluestore"
Dec 03 02:30:55 compute-0 stoic_nobel[469966]:     }
Dec 03 02:30:55 compute-0 stoic_nobel[469966]: }
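
The JSON printed by stoic_nobel is the device inventory cephadm gathers from this node: three BlueStore OSDs (ids 0-2) on LVM devices, all belonging to cluster fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c. A minimal sketch for turning that output into an osd_id-to-device map, assuming the JSON has been captured to a string (the shape below is copied from the log, trimmed to one entry):

    import json

    # Shape copied from the stoic_nobel output above (keys are osd_uuid values).
    raw = """{
        "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
            "type": "bluestore"
        }
    }"""

    osds = json.loads(raw)
    by_id = {v["osd_id"]: v["device"] for v in osds.values()}
    print(by_id)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2'}
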
Dec 03 02:30:55 compute-0 systemd[1]: libpod-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope: Deactivated successfully.
Dec 03 02:30:55 compute-0 systemd[1]: libpod-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope: Consumed 1.201s CPU time.
Dec 03 02:30:55 compute-0 podman[469949]: 2025-12-03 02:30:55.407729957 +0000 UTC m=+1.455685889 container died 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41-merged.mount: Deactivated successfully.
Dec 03 02:30:55 compute-0 podman[469949]: 2025-12-03 02:30:55.508001167 +0000 UTC m=+1.555957009 container remove 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 02:30:55 compute-0 systemd[1]: libpod-conmon-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope: Deactivated successfully.
Dec 03 02:30:55 compute-0 sudo[469848]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:30:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:30:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:30:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:30:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8e322b93-3c41-470f-b258-fa87838e6e37 does not exist
Dec 03 02:30:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 46f85ef2-abac-45a6-aacb-62c78f17e54c does not exist
Dec 03 02:30:55 compute-0 sudo[470012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:30:55 compute-0 sudo[470012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:55 compute-0 sudo[470012]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:55 compute-0 sudo[470037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:30:55 compute-0 sudo[470037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:30:55 compute-0 sudo[470037]: pam_unix(sudo:session): session closed for user root
Dec 03 02:30:56 compute-0 ceph-mon[192821]: pgmap v2301: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:30:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:30:56 compute-0 nova_compute[351485]: 2025-12-03 02:30:56.806 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections...
Dec 03 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections...
Dec 03 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections...
Dec 03 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:30:58 compute-0 ceph-mon[192821]: pgmap v2302: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:30:59 compute-0 nova_compute[351485]: 2025-12-03 02:30:59.198 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:30:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:30:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:30:59.666 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:30:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:30:59.667 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:30:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:30:59.668 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:30:59 compute-0 podman[158098]: time="2025-12-03T02:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8675 "" "Go-http-client/1.1"
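
The two GET lines are the podman system service answering libpod REST calls (a full container listing, then a stats query) from a Go HTTP client, consistent with the prometheus-podman-exporter configured later in this log with CONTAINER_HOST=unix:///run/podman/podman.sock. A hedged sketch of issuing the same listing call yourself, assuming that socket path and that curl is installed:

    import json
    import subprocess

    # Same endpoint as the first GET above; the socket path is taken from
    # the podman_exporter CONTAINER_HOST setting recorded further down.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        check=True, capture_output=True,
    ).stdout
    for ctr in json.loads(out):
        print(ctr["Id"][:12], ctr["Names"][0], ctr["State"])
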
Dec 03 02:31:00 compute-0 ceph-mon[192821]: pgmap v2303: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:31:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:01 compute-0 nova_compute[351485]: 2025-12-03 02:31:01.809 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:02 compute-0 ceph-mon[192821]: pgmap v2304: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:02 compute-0 podman[470062]: 2025-12-03 02:31:02.851898128 +0000 UTC m=+0.100255140 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 03 02:31:02 compute-0 podman[470063]: 2025-12-03 02:31:02.924514227 +0000 UTC m=+0.171111919 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:31:02 compute-0 podman[470064]: 2025-12-03 02:31:02.9377155 +0000 UTC m=+0.172212311 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:31:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:04 compute-0 nova_compute[351485]: 2025-12-03 02:31:04.200 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:04 compute-0 ceph-mon[192821]: pgmap v2305: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:06 compute-0 ceph-mon[192821]: pgmap v2306: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:06 compute-0 nova_compute[351485]: 2025-12-03 02:31:06.813 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:08 compute-0 ceph-mon[192821]: pgmap v2307: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:09 compute-0 nova_compute[351485]: 2025-12-03 02:31:09.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:10 compute-0 ceph-mon[192821]: pgmap v2308: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:10 compute-0 podman[470121]: 2025-12-03 02:31:10.880122548 +0000 UTC m=+0.136245335 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Dec 03 02:31:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:11 compute-0 nova_compute[351485]: 2025-12-03 02:31:11.817 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:12 compute-0 ceph-mon[192821]: pgmap v2309: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:13 compute-0 podman[470143]: 2025-12-03 02:31:13.877697026 +0000 UTC m=+0.105181740 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:31:13 compute-0 podman[470142]: 2025-12-03 02:31:13.888442099 +0000 UTC m=+0.122320963 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git)
Dec 03 02:31:13 compute-0 podman[470150]: 2025-12-03 02:31:13.89841086 +0000 UTC m=+0.109117070 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:31:13 compute-0 podman[470141]: 2025-12-03 02:31:13.910979055 +0000 UTC m=+0.152517135 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 03 02:31:13 compute-0 podman[470144]: 2025-12-03 02:31:13.924470816 +0000 UTC m=+0.145293292 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
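
Each health_status=healthy line above is podman running the container's configured healthcheck on its timer: the 'test' command (e.g. /openstack/healthcheck) executed against the mount under /var/lib/openstack/healthchecks/<name>, as recorded in each config_data. A sketch for triggering one on demand and reading the verdict, using container names shown above; `podman healthcheck run` exits 0 when the test passes:

    import subprocess

    # Run the configured healthcheck for a few of the containers above.
    for name in ("ovn_controller", "node_exporter", "multipathd"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
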
Dec 03 02:31:14 compute-0 nova_compute[351485]: 2025-12-03 02:31:14.207 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:14 compute-0 ceph-mon[192821]: pgmap v2310: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:16 compute-0 ceph-mon[192821]: pgmap v2311: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:16 compute-0 nova_compute[351485]: 2025-12-03 02:31:16.821 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:18 compute-0 ceph-mon[192821]: pgmap v2312: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:19 compute-0 nova_compute[351485]: 2025-12-03 02:31:19.211 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.515 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.516 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.530 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.536 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.537 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.538 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:31:19.538161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.596 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 42.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.647 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 42.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.649 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:31:19.648994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.654 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.659 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.661 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:31:19.660900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.661 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:31:19.663301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.664 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.666 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:31:19.666073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.669 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:31:19.668654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.671 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:31:19.671218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.710 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.711 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.713 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:31:19.714489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.778 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.779 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.848 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.849 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.850 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.851 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.852 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:31:19.851336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.852 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:31:19.854717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.855 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3352022930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.856 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 250801539 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.856 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.857 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.858 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.859 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:31:19.859319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.860 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.860 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.861 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.861 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.862 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:31:19.863360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.864 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.864 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.866 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.866 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:31:19.866096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.870 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.870 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 73138176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.870 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:31:19.870000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.871 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.872 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.874 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.874 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 9097731540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.875 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.876 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10465171027 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.877 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.878 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.880 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.881 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.881 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:31:19.874418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:31:19.879740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.884 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.884 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:31:19.884059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.884 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.885 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 337520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:31:19.887194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.888 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 340540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:31:19.890607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.893 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:31:19.892667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.894 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.896 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:31:19.895314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.896 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:31:19.898152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:31:19.899428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:31:19.900674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
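
Every pollster in the cycle that just finished follows the same sequence visible in the messages: discovery via local_instances, a coordination check (no hashring is configured here, so this agent polls everything itself), one sample per instance, a heartbeat update, and a closing "Finished" record. A schematic, runnable sketch of that loop; the function names are illustrative and are not ceilometer's real API:

    # Schematic of the per-pollster cycle traced in the log above.
    def run_pollster(name, discover, get_sample):
        resources = discover()            # "Executing discovery process ..."
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return
        # Coordination check: with no hashring configured it is a no-op here.
        print(f"Pollster heartbeat update: {name}")
        for res in resources:
            print(f"{res}/{name} volume: {get_sample(res)}")
        print(f"Finished polling pollster {name}")

    run_pollster("cpu",
                 discover=lambda: ["4fb8fc07-d7b7-4be8-94da-155b040faf32"],
                 get_sample=lambda res: 337_520_000_000)
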
Dec 03 02:31:20 compute-0 ceph-mon[192821]: pgmap v2313: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 02:31:21 compute-0 nova_compute[351485]: 2025-12-03 02:31:21.824 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:22 compute-0 nova_compute[351485]: 2025-12-03 02:31:22.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:22 compute-0 ceph-mon[192821]: pgmap v2314: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 02:31:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 02:31:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:24 compute-0 nova_compute[351485]: 2025-12-03 02:31:24.214 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:24 compute-0 ceph-mon[192821]: pgmap v2315: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 02:31:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 03 02:31:26 compute-0 nova_compute[351485]: 2025-12-03 02:31:26.827 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:26 compute-0 ceph-mon[192821]: pgmap v2316: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 03 02:31:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:31:27 compute-0 ceph-mon[192821]: pgmap v2317: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:31:28
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'vms']
Dec 03 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
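
The balancer pass above runs in upmap mode with a 5% max-misplaced budget, walks the eleven listed pools, and prepares 0 of a possible 10 changes, meaning the PG distribution currently needs no upmap adjustments. The same state can be inspected by hand with the standard ceph CLI; a small wrapper, mirroring how nova shells out to ceph elsewhere in this log (the JSON field names assume the usual balancer status layout):

    import json, subprocess

    # `ceph balancer status` is a standard mgr command; --format json asks
    # for machine-readable output.
    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"])
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))  # e.g. "upmap" True
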
Dec 03 02:31:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:29 compute-0 nova_compute[351485]: 2025-12-03 02:31:29.216 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:31:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:31:29 compute-0 podman[158098]: time="2025-12-03T02:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec 03 02:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8679 "" "Go-http-client/1.1"
Dec 03 02:31:30 compute-0 ceph-mon[192821]: pgmap v2318: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
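
The exporter errors above are expected on this host: ovn-northd and a standalone ovsdb-server do not run on a compute node, so their control sockets are absent, and the dpif-netdev/* appctl calls fail because no userspace (netdev/DPDK) datapath exists. A quick way to see which control sockets are actually present; the glob patterns are the conventional default locations and may differ per deployment:

    import glob

    # Conventional control-socket locations; adjust for your deployment.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")
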
Dec 03 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.623 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.831 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:31:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489720266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.168 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
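
For the resource audit, nova shells out to ceph df as client.openstack, and the mon's audit channel logs the dispatch of exactly that command. A minimal reproduction of the call, parsing the cluster-wide totals; the command line is copied from the log, while the "stats"/"total_avail_bytes" field names follow ceph's usual JSON layout and should be verified against your release:

    import json, subprocess

    # Same command nova runs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB available")  # ~60 GiB here
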
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.287 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.288 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.298 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.299 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 03 02:31:32 compute-0 ceph-mon[192821]: pgmap v2319: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:31:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1489720266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.776 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.777 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3441MB free_disk=59.897010803222656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.777 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.778 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.904 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.905 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.905 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.906 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.959 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:31:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:31:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2522631771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.462 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.477 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.499 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.503 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.504 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
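
The audit's numbers are internally consistent: 8 usable vCPUs with 2 allocated leaves the logged free_vcpus=6, while the scheduler works against placement's capacity rule of (total - reserved) * allocation_ratio per resource class. Worked out from the inventory reported above (the formula is placement's standard capacity check, restated here for reference):

    # Schedulable capacity per the inventory nova reported to placement.
    vcpu = (8 - 0) * 4.0       # 32.0 schedulable vCPUs
    ram  = (7679 - 512) * 1.0  # 7167.0 MB
    disk = (59 - 1) * 0.9      # 52.2 GB
    print(vcpu, ram, disk)
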
Dec 03 02:31:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:31:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:33 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2522631771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:31:33 compute-0 podman[470294]: 2025-12-03 02:31:33.869601135 +0000 UTC m=+0.100405854 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:31:33 compute-0 podman[470292]: 2025-12-03 02:31:33.877446277 +0000 UTC m=+0.122364234 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:31:33 compute-0 podman[470293]: 2025-12-03 02:31:33.880173024 +0000 UTC m=+0.129041283 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
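
The three podman records above are periodic container health checks: podman_exporter, ovn_metadata_agent, and ceilometer_agent_compute all report health_status=healthy with a zero failing streak, and config_data echoes each container's definition, including its healthcheck test command. The same probes can be driven by hand with the standard podman CLI:

    import subprocess

    # `podman healthcheck run` executes the container's configured test;
    # exit status 0 means healthy.
    for name in ("podman_exporter", "ovn_metadata_agent",
                 "ceilometer_agent_compute"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
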
Dec 03 02:31:34 compute-0 nova_compute[351485]: 2025-12-03 02:31:34.218 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:34 compute-0 ceph-mon[192821]: pgmap v2320: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:31:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.506 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.508 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:31:36 compute-0 ceph-mon[192821]: pgmap v2321: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 03 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.835 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.947 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 03 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.948 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 03 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.949 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 03 02:31:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 03 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.286 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.310 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 03 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.311 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
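
The refreshed network_info shows an OVN-bound port ("bound_drivers": {"0": "ovn"}) on br-int whose tunneled network carries mtu 1442: a 1500-byte underlay minus the 58 bytes of Geneve encapsulation overhead OVN imposes on IPv4 (tunneled inner Ethernet 14 + outer IPv4 20 + UDP 8 + Geneve header 8 + OVN's Geneve options 8). The arithmetic, for reference:

    # Geneve-over-IPv4 overhead as used by OVN: 14 + 20 + 8 + 8 + 8 = 58.
    print(1500 - 58)  # 1442, matching the logged MTU
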
Dec 03 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.312 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.313 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.314 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:38 compute-0 ceph-mon[192821]: pgmap v2322: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521471275314189 of space, bias 1.0, pg target 0.45644138259425665 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
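The pg_autoscaler targets logged above are internally consistent: each pool's raw "pg target" equals its capacity ratio times its bias times a factor of 300, which would match the default mon_target_pg_per_osd of 100 across the 3 OSDs on this host (an assumption; neither constant appears in the log). A minimal Python check against the logged values:

    # Reproduce the raw "pg target" figures from the pg_autoscaler lines above.
    # Assumed (not logged): FACTOR = mon_target_pg_per_osd (100) * OSD count (3).
    FACTOR = 100 * 3
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.001521471275314189,  1.0),
        "images":             (0.00125203744627857,   1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (capacity_ratio, bias) in pools.items():
        print(name, capacity_ratio * bias * FACTOR)

Each printed value matches the "pg target" in the corresponding line. The autoscaler then quantizes to a power of two and, by default, only resizes a pool when the ideal differs from the current pg_num by more than a factor of three, which is why every pool here stays at its current count.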
Dec 03 02:31:39 compute-0 nova_compute[351485]: 2025-12-03 02:31:39.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 02:31:40 compute-0 nova_compute[351485]: 2025-12-03 02:31:40.378 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:40 compute-0 ceph-mon[192821]: pgmap v2323: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 03 02:31:41 compute-0 nova_compute[351485]: 2025-12-03 02:31:41.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:41 compute-0 nova_compute[351485]: 2025-12-03 02:31:41.839 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:41 compute-0 podman[470352]: 2025-12-03 02:31:41.880925477 +0000 UTC m=+0.135562786 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:31:42 compute-0 ceph-mon[192821]: pgmap v2324: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:43 compute-0 nova_compute[351485]: 2025-12-03 02:31:43.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:31:43 compute-0 nova_compute[351485]: 2025-12-03 02:31:43.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:31:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:44 compute-0 nova_compute[351485]: 2025-12-03 02:31:44.225 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:44 compute-0 ceph-mon[192821]: pgmap v2325: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:44 compute-0 podman[470370]: 2025-12-03 02:31:44.844262009 +0000 UTC m=+0.120795799 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Dec 03 02:31:44 compute-0 podman[470379]: 2025-12-03 02:31:44.861269969 +0000 UTC m=+0.117191977 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:31:44 compute-0 podman[470372]: 2025-12-03 02:31:44.872169106 +0000 UTC m=+0.112780422 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:31:44 compute-0 podman[470371]: 2025-12-03 02:31:44.874986406 +0000 UTC m=+0.144660313 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:31:44 compute-0 podman[470369]: 2025-12-03 02:31:44.889880446 +0000 UTC m=+0.173156376 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
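The burst of health_status entries above is podman running each container's configured healthcheck (the 'healthcheck' stanza inside config_data) and recording the result. A minimal sketch, assuming podman is on PATH and using the container names from these log lines, that reads back the same status field:

    import json
    import subprocess

    def health_status(name: str) -> str:
        # 'podman inspect' returns a JSON array; healthcheck-enabled
        # containers expose the last result under State.Health.Status.
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    for name in ("ceilometer_agent_ipmi", "openstack_network_exporter",
                 "multipathd", "kepler", "node_exporter", "ovn_controller"):
        print(name, health_status(name))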
Dec 03 02:31:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:46 compute-0 ceph-mon[192821]: pgmap v2326: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:46 compute-0 nova_compute[351485]: 2025-12-03 02:31:46.841 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:31:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103238257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:31:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:31:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103238257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:31:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1103238257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:31:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1103238257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:31:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.646571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108646609, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1364, "num_deletes": 256, "total_data_size": 2135109, "memory_usage": 2174240, "flush_reason": "Manual Compaction"}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108661852, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2103981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46298, "largest_seqno": 47661, "table_properties": {"data_size": 2097479, "index_size": 3702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13178, "raw_average_key_size": 19, "raw_value_size": 2084531, "raw_average_value_size": 3083, "num_data_blocks": 166, "num_entries": 676, "num_filter_entries": 676, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728965, "oldest_key_time": 1764728965, "file_creation_time": 1764729108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 15350 microseconds, and 7119 cpu microseconds.
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.661918) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2103981 bytes OK
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.661939) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.664011) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.664029) EVENT_LOG_v1 {"time_micros": 1764729108664022, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.664049) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2129025, prev total WAL file size 2129025, number of live WAL files 2.
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.665503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373537' seq:72057594037927935, type:22 .. '6C6F676D0032303039' seq:0, type:0; will stop at (end)
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2054KB)], [110(7733KB)]
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108665610, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 10023536, "oldest_snapshot_seqno": -1}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6241 keys, 9914551 bytes, temperature: kUnknown
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108747301, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9914551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9872915, "index_size": 24950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 162576, "raw_average_key_size": 26, "raw_value_size": 9760126, "raw_average_value_size": 1563, "num_data_blocks": 998, "num_entries": 6241, "num_filter_entries": 6241, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.747736) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9914551 bytes
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.750501) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.6 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.6 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 6765, records dropped: 524 output_compression: NoCompression
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.750586) EVENT_LOG_v1 {"time_micros": 1764729108750520, "job": 66, "event": "compaction_finished", "compaction_time_micros": 81782, "compaction_time_cpu_micros": 30875, "output_level": 6, "num_output_files": 1, "total_output_size": 9914551, "num_input_records": 6765, "num_output_records": 6241, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108751423, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108754738, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.665047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
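The JOB 66 compaction summary above can be reconstructed from its own EVENT_LOG numbers: throughput is bytes per microsecond (numerically MB/s), and both amplification figures are ratios against the 2,103,981-byte Level-0 input (table #112). A quick check in Python, using only values taken verbatim from the lines above:

    l0_input  = 2103981     # table #112, the Level-0 input file
    total_in  = 10023536    # input_data_size (L0 + L6 inputs)
    total_out = 9914551     # total_output_size (table #113)
    micros    = 81782       # compaction_time_micros

    print(total_in  / micros)                 # ~122.6 -> "MB/sec: 122.6 rd"
    print(total_out / micros)                 # ~121.2 -> "121.2 wr"
    print((total_in + total_out) / l0_input)  # ~9.5   -> read-write-amplify(9.5)
    print(total_out / l0_input)               # ~4.7   -> write-amplify(4.7)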
Dec 03 02:31:48 compute-0 ceph-mon[192821]: pgmap v2327: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:49 compute-0 nova_compute[351485]: 2025-12-03 02:31:49.228 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:50 compute-0 ceph-mon[192821]: pgmap v2328: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:51 compute-0 nova_compute[351485]: 2025-12-03 02:31:51.844 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:52 compute-0 ceph-mon[192821]: pgmap v2329: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:54 compute-0 nova_compute[351485]: 2025-12-03 02:31:54.231 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:54 compute-0 ceph-mon[192821]: pgmap v2330: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:56 compute-0 sudo[470475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:31:56 compute-0 sudo[470475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:56 compute-0 sudo[470475]: pam_unix(sudo:session): session closed for user root
Dec 03 02:31:56 compute-0 sudo[470500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:31:56 compute-0 sudo[470500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:56 compute-0 sudo[470500]: pam_unix(sudo:session): session closed for user root
Dec 03 02:31:56 compute-0 sudo[470525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:31:56 compute-0 sudo[470525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:56 compute-0 sudo[470525]: pam_unix(sudo:session): session closed for user root
Dec 03 02:31:56 compute-0 sudo[470550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:31:56 compute-0 sudo[470550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:56 compute-0 nova_compute[351485]: 2025-12-03 02:31:56.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:56 compute-0 ceph-mon[192821]: pgmap v2331: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:57 compute-0 sudo[470550]: pam_unix(sudo:session): session closed for user root
Dec 03 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:31:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 194e852c-c153-4376-bf73-c2bd8d55dcd6 does not exist
Dec 03 02:31:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cf8996a8-98f5-4786-a74b-72f570af7c2b does not exist
Dec 03 02:31:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 75610ed0-e05f-49e4-a5fd-e88ad258c152 does not exist
Dec 03 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:31:57 compute-0 sudo[470605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:31:57 compute-0 sudo[470605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:57 compute-0 sudo[470605]: pam_unix(sudo:session): session closed for user root
Dec 03 02:31:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:57 compute-0 sudo[470630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:31:57 compute-0 sudo[470630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:57 compute-0 sudo[470630]: pam_unix(sudo:session): session closed for user root
Dec 03 02:31:57 compute-0 sudo[470655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:31:57 compute-0 sudo[470655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:57 compute-0 sudo[470655]: pam_unix(sudo:session): session closed for user root
Dec 03 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:31:57 compute-0 sudo[470680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:31:57 compute-0 sudo[470680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.506269371 +0000 UTC m=+0.058905103 container create 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 02:31:58 compute-0 systemd[1]: Started libpod-conmon-7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4.scope.
Dec 03 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.484347692 +0000 UTC m=+0.036983454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:31:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:31:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.653776973 +0000 UTC m=+0.206412805 container init 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.672229814 +0000 UTC m=+0.224865556 container start 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.677730009 +0000 UTC m=+0.230365831 container attach 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 03 02:31:58 compute-0 stoic_hermann[470756]: 167 167
Dec 03 02:31:58 compute-0 systemd[1]: libpod-7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4.scope: Deactivated successfully.
Dec 03 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.685781607 +0000 UTC m=+0.238417419 container died 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5fd00b9ad52f5456a440e38b6d3f3b4a98b37be57ab0578c78f0cc149abe3a1-merged.mount: Deactivated successfully.
Dec 03 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.771915697 +0000 UTC m=+0.324551469 container remove 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:31:58 compute-0 systemd[1]: libpod-conmon-7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4.scope: Deactivated successfully.
Dec 03 02:31:58 compute-0 ceph-mon[192821]: pgmap v2332: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.115473112 +0000 UTC m=+0.092329496 container create b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.082868892 +0000 UTC m=+0.059725316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:31:59 compute-0 systemd[1]: Started libpod-conmon-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope.
Dec 03 02:31:59 compute-0 nova_compute[351485]: 2025-12-03 02:31:59.233 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:31:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.306967375 +0000 UTC m=+0.283823809 container init b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec 03 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.359901329 +0000 UTC m=+0.336757703 container start b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.367489473 +0000 UTC m=+0.344345867 container attach b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:31:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:31:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:31:59.667 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:31:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:31:59.668 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:31:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:31:59.671 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:31:59 compute-0 podman[158098]: time="2025-12-03T02:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45514 "" "Go-http-client/1.1"
Dec 03 02:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9090 "" "Go-http-client/1.1"
Dec 03 02:32:00 compute-0 angry_mclean[470795]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:32:00 compute-0 angry_mclean[470795]: --> relative data size: 1.0
Dec 03 02:32:00 compute-0 angry_mclean[470795]: --> All data devices are unavailable
Dec 03 02:32:00 compute-0 systemd[1]: libpod-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope: Deactivated successfully.
Dec 03 02:32:00 compute-0 systemd[1]: libpod-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope: Consumed 1.329s CPU time.
Dec 03 02:32:00 compute-0 podman[470779]: 2025-12-03 02:32:00.759391283 +0000 UTC m=+1.736247667 container died b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 02:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93-merged.mount: Deactivated successfully.
Dec 03 02:32:00 compute-0 podman[470779]: 2025-12-03 02:32:00.847138749 +0000 UTC m=+1.823995123 container remove b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 03 02:32:00 compute-0 systemd[1]: libpod-conmon-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope: Deactivated successfully.
Dec 03 02:32:00 compute-0 sudo[470680]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:00 compute-0 ceph-mon[192821]: pgmap v2333: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:01 compute-0 sudo[470838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:32:01 compute-0 sudo[470838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:01 compute-0 sudo[470838]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:01 compute-0 sudo[470863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:32:01 compute-0 sudo[470863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:01 compute-0 sudo[470863]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:01 compute-0 sudo[470888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:32:01 compute-0 sudo[470888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:01 compute-0 sudo[470888]: pam_unix(sudo:session): session closed for user root
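This sudo triad (/bin/true, /bin/which python3, /bin/true) repeats throughout the section: it is the cephadm orchestrator probing the host over SSH as ceph-admin, confirming passwordless sudo and locating python3 before shipping work to the cephadm script. The equivalent by hand, as a hedged sketch:

    $ sudo /bin/true && echo 'passwordless sudo ok'
    $ sudo /bin/which python3
    # cephadm bundles a broader version of these host probes
    $ sudo cephadm check-host --expect-hostname compute-0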
Dec 03 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
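These exporter errors are consistent with a compute node: ovn-northd runs on the control plane, so no control socket for it exists here; the ovsdb-server lookup presumably fails because the exporter checks a runtime path that does not match this (containerized) deployment; and the dpif-netdev/* appctl calls only answer when OVS runs the userspace (PMD) datapath rather than the kernel one. Quick local checks, assuming default runtime paths:

    # control sockets that actually exist on this host
    $ ls /var/run/openvswitch/ /var/run/ovn/ 2>/dev/null
    # kernel datapath in use => dpif-netdev has nothing to report
    $ sudo ovs-appctl dpif/show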
Dec 03 02:32:01 compute-0 sudo[470913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:32:01 compute-0 sudo[470913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:01 compute-0 nova_compute[351485]: 2025-12-03 02:32:01.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.112335783 +0000 UTC m=+0.101031822 container create 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.076648206 +0000 UTC m=+0.065344305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:32:02 compute-0 systemd[1]: Started libpod-conmon-09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397.scope.
Dec 03 02:32:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.290249784 +0000 UTC m=+0.278945863 container init 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.307376447 +0000 UTC m=+0.296072476 container start 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.314007754 +0000 UTC m=+0.302703833 container attach 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 03 02:32:02 compute-0 sad_hermann[470992]: 167 167
Dec 03 02:32:02 compute-0 systemd[1]: libpod-09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397.scope: Deactivated successfully.
Dec 03 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.321125785 +0000 UTC m=+0.309821854 container died 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee30bd922b8c21b4473dd98081935579241de0fd8b717abc0c450fc96b056d77-merged.mount: Deactivated successfully.
Dec 03 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.408000467 +0000 UTC m=+0.396696506 container remove 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:32:02 compute-0 systemd[1]: libpod-conmon-09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397.scope: Deactivated successfully.
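The single "167 167" line from sad_hermann (repeated by recursing_nobel below) looks like cephadm's uid/gid probe: 167 is the ceph user and group id inside the official images, and cephadm needs the pair to chown the bind-mounted data directories. A hedged equivalent against the same image digest:

    # print the owner uid/gid of /var/lib/ceph inside the image
    $ sudo podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph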
Dec 03 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.698666438 +0000 UTC m=+0.092057328 container create ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 03 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.662102857 +0000 UTC m=+0.055493827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:32:02 compute-0 systemd[1]: Started libpod-conmon-ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906.scope.
Dec 03 02:32:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
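The xfs notices are the kernel flagging that the filesystem under /var/lib/containers lacks the bigtime feature, so inode timestamps saturate in 2038; informational for now, and re-emitted at every overlay remount (the same four lines recur at 02:32:05). Checking, and upgrading offline with a recent xfsprogs, as a hedged sketch:

    # bigtime=0 means 2038-limited timestamps
    $ xfs_info /var/lib/containers | grep -o 'bigtime=[01]'
    # offline upgrade on an unmounted v5 filesystem (device is a placeholder)
    $ sudo xfs_admin -O bigtime=1 /dev/<device>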
Dec 03 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.870813686 +0000 UTC m=+0.264204606 container init ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.890178633 +0000 UTC m=+0.283569553 container start ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.896842871 +0000 UTC m=+0.290233791 container attach ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:32:02 compute-0 ceph-mon[192821]: pgmap v2334: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]: {
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:     "0": [
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:         {
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "devices": [
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "/dev/loop3"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             ],
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_name": "ceph_lv0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_size": "21470642176",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "name": "ceph_lv0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "tags": {
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cluster_name": "ceph",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.crush_device_class": "",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.encrypted": "0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osd_id": "0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.type": "block",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.vdo": "0"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             },
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "type": "block",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "vg_name": "ceph_vg0"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:         }
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:     ],
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:     "1": [
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:         {
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "devices": [
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "/dev/loop4"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             ],
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_name": "ceph_lv1",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_size": "21470642176",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "name": "ceph_lv1",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "tags": {
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cluster_name": "ceph",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.crush_device_class": "",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.encrypted": "0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osd_id": "1",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.type": "block",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.vdo": "0"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             },
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "type": "block",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "vg_name": "ceph_vg1"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:         }
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:     ],
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:     "2": [
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:         {
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "devices": [
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "/dev/loop5"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             ],
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_name": "ceph_lv2",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_size": "21470642176",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "name": "ceph_lv2",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "tags": {
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.cluster_name": "ceph",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.crush_device_class": "",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.encrypted": "0",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osd_id": "2",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.type": "block",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:                 "ceph.vdo": "0"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             },
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "type": "block",
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:             "vg_name": "ceph_vg2"
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:         }
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]:     ]
Dec 03 02:32:03 compute-0 intelligent_darwin[471030]: }
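This JSON is the answer to the lvm list --format json call issued at 02:32:01: one array per OSD id, each carrying the LV path, the backing loop device, and the ceph.* LV tags that bind the LV to cluster fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c. A compact summary, assuming jq is available on the host:

    # osd id, LV path and physical device, one line each
    $ sudo cephadm ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c \
        -- lvm list --format json |
      jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path) \(.value[0].devices[0])"'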
Dec 03 02:32:03 compute-0 systemd[1]: libpod-ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906.scope: Deactivated successfully.
Dec 03 02:32:03 compute-0 podman[471015]: 2025-12-03 02:32:03.750742818 +0000 UTC m=+1.144133798 container died ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783-merged.mount: Deactivated successfully.
Dec 03 02:32:03 compute-0 podman[471015]: 2025-12-03 02:32:03.844229376 +0000 UTC m=+1.237620256 container remove ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 03 02:32:03 compute-0 systemd[1]: libpod-conmon-ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906.scope: Deactivated successfully.
Dec 03 02:32:03 compute-0 sudo[470913]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:03 compute-0 sudo[471051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:32:04 compute-0 sudo[471051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:04 compute-0 sudo[471051]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:04 compute-0 sudo[471095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:32:04 compute-0 sudo[471095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:04 compute-0 sudo[471095]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:04 compute-0 podman[471075]: 2025-12-03 02:32:04.146219318 +0000 UTC m=+0.116205260 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 03 02:32:04 compute-0 podman[471077]: 2025-12-03 02:32:04.153120623 +0000 UTC m=+0.119226646 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:32:04 compute-0 podman[471076]: 2025-12-03 02:32:04.154292426 +0000 UTC m=+0.122260971 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
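The three health_status entries are podman's timer-driven healthchecks for the edpm-managed containers; the test command and its mount come from the config_data shown inline ('test': '/openstack/healthcheck'). The current state can be read back, or the check run on demand:

    # last recorded health state (field spelling varies by podman release:
    # .State.Health on newer versions, .State.Healthcheck on older ones)
    $ sudo podman inspect ovn_metadata_agent --format '{{.State.Health.Status}}'
    # run the configured check now; exit status 0 means healthy
    $ sudo podman healthcheck run ovn_metadata_agent && echo healthy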
Dec 03 02:32:04 compute-0 sudo[471156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:32:04 compute-0 sudo[471156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:04 compute-0 sudo[471156]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:04 compute-0 nova_compute[351485]: 2025-12-03 02:32:04.235 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:04 compute-0 sudo[471181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:32:04 compute-0 sudo[471181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.830070647 +0000 UTC m=+0.056119175 container create b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:32:04 compute-0 systemd[1]: Started libpod-conmon-b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def.scope.
Dec 03 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.810448993 +0000 UTC m=+0.036497521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:32:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:32:04 compute-0 ceph-mon[192821]: pgmap v2335: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.956735871 +0000 UTC m=+0.182784429 container init b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.972767353 +0000 UTC m=+0.198815891 container start b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.97937544 +0000 UTC m=+0.205423988 container attach b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:32:04 compute-0 recursing_nobel[471261]: 167 167
Dec 03 02:32:04 compute-0 systemd[1]: libpod-b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def.scope: Deactivated successfully.
Dec 03 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.985504833 +0000 UTC m=+0.211553341 container died b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 02:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4740b9722af89fb3f6ebee4710a70ea74a682a6fe81dc066ef75122332b02bb3-merged.mount: Deactivated successfully.
Dec 03 02:32:05 compute-0 podman[471245]: 2025-12-03 02:32:05.039487106 +0000 UTC m=+0.265535614 container remove b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:32:05 compute-0 systemd[1]: libpod-conmon-b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def.scope: Deactivated successfully.
Dec 03 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.345517092 +0000 UTC m=+0.100181558 container create 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.308004324 +0000 UTC m=+0.062668830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:32:05 compute-0 systemd[1]: Started libpod-conmon-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope.
Dec 03 02:32:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.53146807 +0000 UTC m=+0.286132566 container init 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.563828293 +0000 UTC m=+0.318492749 container start 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec 03 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.572407305 +0000 UTC m=+0.327071821 container attach 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:32:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]: {
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "osd_id": 2,
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "type": "bluestore"
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:     },
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "osd_id": 1,
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "type": "bluestore"
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:     },
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "osd_id": 0,
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:         "type": "bluestore"
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]:     }
Dec 03 02:32:06 compute-0 festive_ishizaka[471299]: }
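This second blob answers the raw list --format json call from 02:32:04. Unlike lvm list it is keyed by osd_uuid and reports the device-mapper node rather than LV tags, but it agrees with the earlier listing: osd.0 through osd.2 on ceph_vg0/1/2, all bluestore, same cluster fsid. Joining id to device, assuming jq again:

    $ sudo cephadm ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c \
        -- raw list --format json |
      jq -r 'to_entries[] | "osd.\(.value.osd_id) \(.value.device)"'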
Dec 03 02:32:06 compute-0 systemd[1]: libpod-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope: Deactivated successfully.
Dec 03 02:32:06 compute-0 podman[471283]: 2025-12-03 02:32:06.745126258 +0000 UTC m=+1.499790704 container died 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:32:06 compute-0 systemd[1]: libpod-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope: Consumed 1.183s CPU time.
Dec 03 02:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720-merged.mount: Deactivated successfully.
Dec 03 02:32:06 compute-0 podman[471283]: 2025-12-03 02:32:06.828394868 +0000 UTC m=+1.583059334 container remove 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:32:06 compute-0 systemd[1]: libpod-conmon-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope: Deactivated successfully.
Dec 03 02:32:06 compute-0 nova_compute[351485]: 2025-12-03 02:32:06.854 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:06 compute-0 sudo[471181]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:32:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:32:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:32:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
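The two config-key set commands are the cephadm mgr persisting the device inventory it just gathered (the ceph-volume runs above) into the mon config-key store under mgr/cephadm/host.compute-0*. The cached blobs can be inspected directly, a sketch assuming admin keyring access:

    $ sudo cephadm shell -- ceph config-key ls | grep host.compute-0
    $ sudo cephadm shell -- ceph config-key get mgr/cephadm/host.compute-0.devices.0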
Dec 03 02:32:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1bc9dd6a-b2a6-4506-9eeb-e91519edfdd0 does not exist
Dec 03 02:32:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cdc62077-a070-4d4c-b68d-5881f3072eb1 does not exist
Dec 03 02:32:06 compute-0 ceph-mon[192821]: pgmap v2336: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:32:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:32:07 compute-0 sudo[471342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:32:07 compute-0 sudo[471342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:07 compute-0 sudo[471342]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:07 compute-0 sudo[471367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:32:07 compute-0 sudo[471367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:32:07 compute-0 sudo[471367]: pam_unix(sudo:session): session closed for user root
Dec 03 02:32:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:07 compute-0 ceph-mon[192821]: pgmap v2337: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
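The recurring _set_new_cache_sizes line is the monitor's memory autotuner redividing its budget between the inc/full osdmap caches and the rocksdb KV cache; cache_size 1020054731 is roughly 0.95 GiB here. The governing options, should the defaults need checking, as a hedged sketch:

    $ sudo cephadm shell -- ceph config get mon mon_memory_autotune
    $ sudo cephadm shell -- ceph config get mon mon_memory_target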
Dec 03 02:32:09 compute-0 nova_compute[351485]: 2025-12-03 02:32:09.237 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:10 compute-0 ceph-mon[192821]: pgmap v2338: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.766 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.767 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.767 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.767 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.768 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.770 351492 INFO nova.compute.manager [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Terminating instance
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.772 351492 DEBUG nova.compute.manager [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:32:10 compute-0 kernel: tapf36a9f58-d7 (unregistering): left promiscuous mode
Dec 03 02:32:10 compute-0 NetworkManager[48912]: <info>  [1764729130.9145] device (tapf36a9f58-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:32:10 compute-0 ovn_controller[89134]: 2025-12-03T02:32:10Z|00196|binding|INFO|Releasing lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a from this chassis (sb_readonly=0)
Dec 03 02:32:10 compute-0 ovn_controller[89134]: 2025-12-03T02:32:10Z|00197|binding|INFO|Setting lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a down in Southbound
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.933 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:10 compute-0 ovn_controller[89134]: 2025-12-03T02:32:10Z|00198|binding|INFO|Removing iface tapf36a9f58-d7 ovn-installed in OVS
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.937 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec 03 02:32:11 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 7min 34.099s CPU time.
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.014 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ed:eb 10.100.0.239'], port_security=['fa:16:3e:dd:ed:eb 10.100.0.239'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.239/16', 'neutron:device_id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=f36a9f58-d7c9-4f05-942d-5a2c4cce705a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:32:11 compute-0 systemd-machined[138558]: Machine qemu-15-instance-0000000e terminated.
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.018 288528 INFO neutron.agent.ovn.metadata.agent [-] Port f36a9f58-d7c9-4f05-942d-5a2c4cce705a in datapath a7615b73-b987-4b91-b12c-2d7488085657 unbound from our chassis
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.021 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7615b73-b987-4b91-b12c-2d7488085657
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.050 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[685eadca-7a18-43ca-940a-f1542d87cd43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.102 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[6e090291-4a32-4bac-9acf-ba2525f40a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.107 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[43ca7411-32c4-4258-b43d-6190a82727e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.153 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[cde2e097-5c40-4469-9274-f242c236a555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.183 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9784b0-16d1-4eb7-80ca-32085c8e2c50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 41270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 471403, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.215 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.216 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6faf479f-e4a1-45dc-a15c-9d762d4eab0a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719227, 'tstamp': 719227}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 471405, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719234, 'tstamp': 719234}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 471405, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.219 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.227 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.238 351492 INFO nova.virt.libvirt.driver [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance destroyed successfully.
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.239 351492 DEBUG nova.objects.instance [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'resources' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.244 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.245 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7615b73-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.246 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.246 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7615b73-b0, col_values=(('external_ids', {'iface-id': '50c454e1-4a4b-4aad-b47b-dafc7b079018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.247 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.263 351492 DEBUG nova.virt.libvirt.vif [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',id=14,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:19:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-czfymphz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:19:13Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=2890ee5c-21c1-4e9d-9421-1a2df0f67f76,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.263 351492 DEBUG nova.network.os_vif_util [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.264 351492 DEBUG nova.network.os_vif_util [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.265 351492 DEBUG os_vif [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.268 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.268 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf36a9f58-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.272 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.273 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.274 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.280 351492 INFO os_vif [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7')
Dec 03 02:32:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.275 351492 INFO nova.virt.libvirt.driver [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deleting instance files /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_del
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.276 351492 INFO nova.virt.libvirt.driver [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deletion of /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_del complete
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.348 351492 INFO nova.compute.manager [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 1.58 seconds to destroy the instance on the hypervisor.
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.349 351492 DEBUG oslo.service.loopingcall [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.351 351492 DEBUG nova.compute.manager [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.351 351492 DEBUG nova.network.neutron [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.581 351492 DEBUG nova.compute.manager [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-unplugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.583 351492 DEBUG oslo_concurrency.lockutils [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.585 351492 DEBUG oslo_concurrency.lockutils [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.586 351492 DEBUG oslo_concurrency.lockutils [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.587 351492 DEBUG nova.compute.manager [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] No waiting events found dispatching network-vif-unplugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.588 351492 DEBUG nova.compute.manager [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-unplugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:32:12 compute-0 ceph-mon[192821]: pgmap v2339: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:12.797 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:32:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:12.798 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.798 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:12.802 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:32:12 compute-0 podman[471435]: 2025-12-03 02:32:12.887197306 +0000 UTC m=+0.132343856 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.264 351492 DEBUG nova.network.neutron [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.301 351492 INFO nova.compute.manager [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 0.95 seconds to deallocate network for instance.
Dec 03 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.369 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.370 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.479 351492 DEBUG oslo_concurrency.processutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:32:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:32:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:32:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936787387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.032 351492 DEBUG oslo_concurrency.processutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.042 351492 DEBUG nova.compute.provider_tree [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.059 351492 DEBUG nova.scheduler.client.report [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.079 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.101 351492 INFO nova.scheduler.client.report [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Deleted allocations for instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76
Dec 03 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.155 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.241 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:14 compute-0 ceph-mon[192821]: pgmap v2340: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1936787387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.405 351492 DEBUG nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.405 351492 DEBUG oslo_concurrency.lockutils [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.406 351492 DEBUG oslo_concurrency.lockutils [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.406 351492 DEBUG oslo_concurrency.lockutils [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.407 351492 DEBUG nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] No waiting events found dispatching network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.408 351492 WARNING nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received unexpected event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a for instance with vm_state deleted and task_state None.
Dec 03 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.408 351492 DEBUG nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-deleted-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:32:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 177 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 25 op/s
Dec 03 02:32:15 compute-0 podman[471479]: 2025-12-03 02:32:15.876822119 +0000 UTC m=+0.107467973 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:32:15 compute-0 podman[471478]: 2025-12-03 02:32:15.884197178 +0000 UTC m=+0.121979523 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 02:32:15 compute-0 podman[471484]: 2025-12-03 02:32:15.89137861 +0000 UTC m=+0.109166231 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:32:15 compute-0 podman[471480]: 2025-12-03 02:32:15.898075999 +0000 UTC m=+0.113685639 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, distribution-scope=public)
Dec 03 02:32:15 compute-0 podman[471477]: 2025-12-03 02:32:15.919172895 +0000 UTC m=+0.160667435 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 03 02:32:16 compute-0 nova_compute[351485]: 2025-12-03 02:32:16.272 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:16 compute-0 ceph-mon[192821]: pgmap v2341: 321 pgs: 321 active+clean; 177 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 25 op/s
Dec 03 02:32:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:32:18 compute-0 ceph-mon[192821]: pgmap v2342: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:19 compute-0 nova_compute[351485]: 2025-12-03 02:32:19.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:20 compute-0 ceph-mon[192821]: pgmap v2343: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.333 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.334 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.335 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.336 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.337 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.339 351492 INFO nova.compute.manager [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Terminating instance
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.341 351492 DEBUG nova.compute.manager [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 03 02:32:21 compute-0 kernel: tap94fdb5b9-66 (unregistering): left promiscuous mode
Dec 03 02:32:21 compute-0 NetworkManager[48912]: <info>  [1764729141.4886] device (tap94fdb5b9-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 03 02:32:21 compute-0 ovn_controller[89134]: 2025-12-03T02:32:21Z|00199|binding|INFO|Releasing lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c from this chassis (sb_readonly=0)
Dec 03 02:32:21 compute-0 ovn_controller[89134]: 2025-12-03T02:32:21Z|00200|binding|INFO|Setting lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c down in Southbound
Dec 03 02:32:21 compute-0 ovn_controller[89134]: 2025-12-03T02:32:21Z|00201|binding|INFO|Removing iface tap94fdb5b9-66 ovn-installed in OVS
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.519 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.523 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.529 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0c:ae 10.100.1.46'], port_security=['fa:16:3e:3f:0c:ae 10.100.1.46'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.46/16', 'neutron:device_id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=94fdb5b9-66bf-4e81-b411-064b08e4c71c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
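The matched PortBindingUpdatedEvent above comes from ovsdbapp's event framework: the metadata agent subscribes a RowEvent to the southbound Port_Binding table and reacts when a row flips from up=[True] to up=[False]. A stripped-down sketch of such an event class (registration with a running OVSDB connection is omitted, and the print is illustrative):

    # Stripped-down ovsdbapp RowEvent, patterned on the matched event
    # above; it must be registered with an OVSDB connection's notify
    # handler to actually fire.
    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # Watch only 'update' events on the Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of changed columns,
            # e.g. up=[True] before the port went down.
            print('Port_Binding changed:', row.logical_port)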
Dec 03 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.532 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 94fdb5b9-66bf-4e81-b411-064b08e4c71c in datapath a7615b73-b987-4b91-b12c-2d7488085657 unbound from our chassis
Dec 03 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.534 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7615b73-b987-4b91-b12c-2d7488085657, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 03 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.536 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2f627e57-f33a-4b7f-9bdb-b15cf0219708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.538 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 namespace which is not needed anymore
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.559 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:21 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec 03 02:32:21 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 7min 2.865s CPU time.
Dec 03 02:32:21 compute-0 systemd-machined[138558]: Machine qemu-16-instance-0000000f terminated.
Dec 03 02:32:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.742 351492 DEBUG nova.compute.manager [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-unplugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.744 351492 DEBUG oslo_concurrency.lockutils [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.745 351492 DEBUG oslo_concurrency.lockutils [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.747 351492 DEBUG oslo_concurrency.lockutils [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.748 351492 DEBUG nova.compute.manager [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] No waiting events found dispatching network-vif-unplugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.749 351492 DEBUG nova.compute.manager [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-unplugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 03 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : haproxy version is 2.8.14-c23fe91
Dec 03 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : path to executable is /usr/sbin/haproxy
Dec 03 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [WARNING]  (452993) : Exiting Master process...
Dec 03 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [ALERT]    (452993) : Current worker (452995) exited with code 143 (Terminated)
Dec 03 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [WARNING]  (452993) : All workers exited. Exiting... (0)
Dec 03 02:32:21 compute-0 systemd[1]: libpod-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6.scope: Deactivated successfully.
Dec 03 02:32:21 compute-0 podman[471601]: 2025-12-03 02:32:21.785624367 +0000 UTC m=+0.089889338 container died c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.807 351492 INFO nova.virt.libvirt.driver [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance destroyed successfully.
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.808 351492 DEBUG nova.objects.instance [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'resources' on Instance uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.827 351492 DEBUG nova.virt.libvirt.vif [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',id=15,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:22:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-xvixyek3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:22:24Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=4fb8fc07-d7b7-4be8-94da-155b040faf32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.828 351492 DEBUG nova.network.os_vif_util [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.829 351492 DEBUG nova.network.os_vif_util [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.830 351492 DEBUG os_vif [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.834 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.836 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94fdb5b9-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.846 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.850 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 03 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.853 351492 INFO os_vif [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66')
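os-vif's ovs plugin performs the unplug through ovsdbapp; the transaction at 02:32:21.836 is a single DelPortCommand against br-int. A rough standalone equivalent, assuming a local ovsdb-server at the default /run/openvswitch/db.sock socket:

    # Rough equivalent of the DelPortCommand transaction logged above.
    # The socket path is the usual default and is an assumption here.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes this a no-op if the port is already gone,
    # matching DelPortCommand(..., if_exists=True) in the log.
    api.del_port('tap94fdb5b9-66', bridge='br-int',
                 if_exists=True).execute(check_error=True)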
Dec 03 02:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6-userdata-shm.mount: Deactivated successfully.
Dec 03 02:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-88013123d4a753ad03452e7c5ee2f44c7a3cff6bfcbc4c86988a478219f1d093-merged.mount: Deactivated successfully.
Dec 03 02:32:21 compute-0 podman[471601]: 2025-12-03 02:32:21.884493167 +0000 UTC m=+0.188758118 container cleanup c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:32:21 compute-0 systemd[1]: libpod-conmon-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6.scope: Deactivated successfully.
Dec 03 02:32:22 compute-0 podman[471652]: 2025-12-03 02:32:22.033280266 +0000 UTC m=+0.098910333 container remove c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.053 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3fddfc-c223-465d-97cf-5da5197c7904]: (4, ('Wed Dec  3 02:32:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 (c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6)\nc800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6\nWed Dec  3 02:32:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 (c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6)\nc800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.057 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[72f6062c-d682-4ca5-8682-8eb4f28d00e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.059 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.062 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:22 compute-0 kernel: tapa7615b73-b0: left promiscuous mode
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.071 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9edeff-db22-4777-93ff-fe493ede465d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.097 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[87e4b66a-616e-4740-9128-31588855637d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.099 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2898e2-5ced-4ff3-a22a-275447eb1b3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.125 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[32c3dc7c-6b35-4b4f-9828-f85930eba1d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719201, 'reachable_time': 35807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 471669, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.129 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
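remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2, executed as root through the privsep daemon whose replies fill the surrounding DEBUG lines. Outside of privsep, the same teardown looks roughly like this (requires root):

    # Sketch of the namespace removal behind remove_netns; neutron runs
    # the equivalent under oslo.privsep rather than directly as root.
    from pyroute2 import netns

    NS = 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657'
    if NS in netns.listnetns():
        netns.remove(NS)  # unlinks /run/netns/<NS>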
Dec 03 02:32:22 compute-0 systemd[1]: run-netns-ovnmeta\x2da7615b73\x2db987\x2d4b91\x2db12c\x2d2d7488085657.mount: Deactivated successfully.
Dec 03 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.129 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ee25aa-3b4c-4c44-8f97-d3c9286210b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.740 351492 INFO nova.virt.libvirt.driver [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deleting instance files /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32_del
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.742 351492 INFO nova.virt.libvirt.driver [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deletion of /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32_del complete
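The _del suffix in the two lines above reflects the libvirt driver's delete strategy: the instance directory is first renamed to <uuid>_del and then removed, so an interrupted delete leaves an obviously dead directory rather than a half-deleted live one. Schematically (paths hard-coded for illustration; Nova adds retries and error handling):

    # Schematic rename-then-delete matching the "_del" log lines above.
    import os
    import shutil

    base = '/var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32'
    target = base + '_del'
    os.rename(base, target)  # atomic within the same filesystem
    shutil.rmtree(target)    # the step logged as "Deleting instance files"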
Dec 03 02:32:22 compute-0 ceph-mon[192821]: pgmap v2344: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.806 351492 INFO nova.compute.manager [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 1.46 seconds to destroy the instance on the hypervisor.
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.807 351492 DEBUG oslo.service.loopingcall [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.807 351492 DEBUG nova.compute.manager [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 03 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.808 351492 DEBUG nova.network.neutron [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.562 351492 DEBUG nova.network.neutron [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.581 351492 INFO nova.compute.manager [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 0.77 seconds to deallocate network for instance.
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.619 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.620 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.677 351492 DEBUG oslo_concurrency.processutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.822 351492 DEBUG nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.823 351492 DEBUG oslo_concurrency.lockutils [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.824 351492 DEBUG oslo_concurrency.lockutils [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.826 351492 DEBUG oslo_concurrency.lockutils [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.826 351492 DEBUG nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] No waiting events found dispatching network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.827 351492 WARNING nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received unexpected event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c for instance with vm_state deleted and task_state None.
Dec 03 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.828 351492 DEBUG nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-deleted-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 03 02:32:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:32:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384299428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.152 351492 DEBUG oslo_concurrency.processutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
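Because the node's disks live in Ceph, the resource tracker sizes its DISK_GB inventory by shelling out to "ceph df --format=json" through oslo.concurrency's processutils, which is what the Running cmd / CMD returned pair above records. A trimmed-down version of that probe (the JSON keys shown are the cluster-wide stats ceph df emits):

    # Trimmed-down version of the "ceph df" probe logged above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])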
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.167 351492 DEBUG nova.compute.provider_tree [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.187 351492 DEBUG nova.scheduler.client.report [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
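Placement turns that inventory into admissible capacity as (total - reserved) * allocation_ratio per resource class, so the unchanged inventory above admits up to 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk in allocations:

    # Capacity implied by the inventory dict in the log line above:
    # usable = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2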
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.224 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.249 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.269 351492 INFO nova.scheduler.client.report [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Deleted allocations for instance 4fb8fc07-d7b7-4be8-94da-155b040faf32
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.361 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:24 compute-0 ceph-mon[192821]: pgmap v2345: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/384299428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 115 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 49 op/s
Dec 03 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.232 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764729131.2301612, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.233 351492 INFO nova.compute.manager [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Stopped (Lifecycle Event)
Dec 03 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.273 351492 DEBUG nova.compute.manager [None req-2d9d8389-e8fb-46c7-830d-26333c38771f - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.840 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:27 compute-0 ceph-mon[192821]: pgmap v2346: 321 pgs: 321 active+clean; 115 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 49 op/s
Dec 03 02:32:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 03 02:32:28 compute-0 ceph-mon[192821]: pgmap v2347: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 03 02:32:28 compute-0 sshd-session[471693]: Received disconnect from 154.113.10.113 port 42132:11: Bye Bye [preauth]
Dec 03 02:32:28 compute-0 sshd-session[471693]: Disconnected from authenticating user root 154.113.10.113 port 42132 [preauth]
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:32:28
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Dec 03 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:32:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:32:29 compute-0 nova_compute[351485]: 2025-12-03 02:32:29.253 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:32:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:29 compute-0 podman[158098]: time="2025-12-03T02:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
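Those two GETs are the libpod REST API being polled over the podman socket; the /v4.9.3/libpod/containers/... paths match what the prometheus-podman-exporter container seen further down queries. The same container listing via the podman-py client, assuming the podman Python package is installed and the socket path matches the exporter's CONTAINER_HOST:

    # Same /libpod/containers/json query via podman-py; the socket path
    # is taken from the exporter's CONTAINER_HOST setting below.
    from podman import PodmanClient

    with PodmanClient(base_url='unix:///run/podman/podman.sock') as client:
        for ctr in client.containers.list(all=True):
            print(ctr.id[:12], ctr.status)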
Dec 03 02:32:30 compute-0 ceph-mon[192821]: pgmap v2348: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.604 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.605 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.607 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.608 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:32:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.844 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:32:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402290756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.144 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:32:32 compute-0 ceph-mon[192821]: pgmap v2349: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1402290756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.790 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.792 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3965MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.793 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.793 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.123 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.124 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.200 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:32:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 03 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 03 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 03 02:32:33 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 03 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:32:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173988335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.819 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.839 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.863 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.883 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.884 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:34 compute-0 nova_compute[351485]: 2025-12-03 02:32:34.257 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:34 compute-0 ceph-mon[192821]: pgmap v2350: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 03 02:32:34 compute-0 ceph-mon[192821]: osdmap e139: 3 total, 3 up, 3 in
Dec 03 02:32:34 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4173988335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:32:34 compute-0 podman[471742]: 2025-12-03 02:32:34.871851515 +0000 UTC m=+0.108239226 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:32:34 compute-0 podman[471740]: 2025-12-03 02:32:34.90008141 +0000 UTC m=+0.140942337 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 02:32:34 compute-0 podman[471741]: 2025-12-03 02:32:34.900252605 +0000 UTC m=+0.131657225 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 03 02:32:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 65 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.4 KiB/s wr, 10 op/s
Dec 03 02:32:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 03 02:32:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 03 02:32:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 03 02:32:36 compute-0 ceph-mon[192821]: pgmap v2352: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 65 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.4 KiB/s wr, 10 op/s
Dec 03 02:32:36 compute-0 ceph-mon[192821]: osdmap e140: 3 total, 3 up, 3 in
Dec 03 02:32:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 03 02:32:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 03 02:32:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.801 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764729141.7989619, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.802 351492 INFO nova.compute.manager [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Stopped (Lifecycle Event)
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.831 351492 DEBUG nova.compute.manager [None req-923b733c-bf8c-43d5-a698-d45e8fcf898d - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.885 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.885 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.885 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.917 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.917 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:37 compute-0 nova_compute[351485]: 2025-12-03 02:32:37.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:37 compute-0 nova_compute[351485]: 2025-12-03 02:32:37.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 69 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 761 KiB/s wr, 65 op/s
Dec 03 02:32:37 compute-0 ceph-mon[192821]: osdmap e141: 3 total, 3 up, 3 in
Dec 03 02:32:38 compute-0 nova_compute[351485]: 2025-12-03 02:32:38.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:38 compute-0 nova_compute[351485]: 2025-12-03 02:32:38.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:32:38 compute-0 nova_compute[351485]: 2025-12-03 02:32:38.615 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:32:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:32:38 compute-0 ceph-mon[192821]: pgmap v2355: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 69 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 761 KiB/s wr, 65 op/s
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0012519738555707494 of space, bias 1.0, pg target 0.3755921566712248 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:32:39 compute-0 nova_compute[351485]: 2025-12-03 02:32:39.261 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:39 compute-0 nova_compute[351485]: 2025-12-03 02:32:39.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:39 compute-0 nova_compute[351485]: 2025-12-03 02:32:39.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 61 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 763 KiB/s wr, 75 op/s
Dec 03 02:32:40 compute-0 nova_compute[351485]: 2025-12-03 02:32:40.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:40 compute-0 nova_compute[351485]: 2025-12-03 02:32:40.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:32:40 compute-0 nova_compute[351485]: 2025-12-03 02:32:40.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:40 compute-0 ceph-mon[192821]: pgmap v2356: 321 pgs: 321 active+clean; 61 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 763 KiB/s wr, 75 op/s
Dec 03 02:32:41 compute-0 nova_compute[351485]: 2025-12-03 02:32:41.603 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 2.6 MiB/s wr, 87 op/s
Dec 03 02:32:41 compute-0 nova_compute[351485]: 2025-12-03 02:32:41.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:42 compute-0 ceph-mon[192821]: pgmap v2357: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 2.6 MiB/s wr, 87 op/s
Dec 03 02:32:43 compute-0 nova_compute[351485]: 2025-12-03 02:32:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:32:43 compute-0 nova_compute[351485]: 2025-12-03 02:32:43.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:32:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.6 MiB/s wr, 82 op/s
Dec 03 02:32:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:32:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 03 02:32:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 03 02:32:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 03 02:32:43 compute-0 podman[471800]: 2025-12-03 02:32:43.866239564 +0000 UTC m=+0.119909175 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:32:44 compute-0 nova_compute[351485]: 2025-12-03 02:32:44.264 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:44 compute-0 ceph-mon[192821]: pgmap v2358: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.6 MiB/s wr, 82 op/s
Dec 03 02:32:44 compute-0 ceph-mon[192821]: osdmap e142: 3 total, 3 up, 3 in
Dec 03 02:32:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 03 02:32:46 compute-0 ceph-mon[192821]: pgmap v2360: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 03 02:32:46 compute-0 nova_compute[351485]: 2025-12-03 02:32:46.854 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:46 compute-0 podman[471822]: 2025-12-03 02:32:46.867911071 +0000 UTC m=+0.101830615 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, name=ubi9, version=9.4, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, release=1214.1726694543, vendor=Red Hat, Inc.)
Dec 03 02:32:46 compute-0 podman[471821]: 2025-12-03 02:32:46.880744333 +0000 UTC m=+0.119331468 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:32:46 compute-0 podman[471820]: 2025-12-03 02:32:46.892928417 +0000 UTC m=+0.136790571 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 03 02:32:46 compute-0 podman[471823]: 2025-12-03 02:32:46.901418676 +0000 UTC m=+0.125909564 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 03 02:32:46 compute-0 podman[471819]: 2025-12-03 02:32:46.901619172 +0000 UTC m=+0.148199343 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:32:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:32:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946065896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:32:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:32:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946065896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:32:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 MiB/s wr, 29 op/s
Dec 03 02:32:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/946065896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:32:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/946065896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:32:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:32:48 compute-0 ceph-mon[192821]: pgmap v2361: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 MiB/s wr, 29 op/s
Dec 03 02:32:49 compute-0 nova_compute[351485]: 2025-12-03 02:32:49.269 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Dec 03 02:32:50 compute-0 ceph-mon[192821]: pgmap v2362: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Dec 03 02:32:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:51 compute-0 nova_compute[351485]: 2025-12-03 02:32:51.857 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:52 compute-0 ceph-mon[192821]: pgmap v2363: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:32:54 compute-0 nova_compute[351485]: 2025-12-03 02:32:54.273 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:54 compute-0 ceph-mon[192821]: pgmap v2364: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:56 compute-0 ceph-mon[192821]: pgmap v2365: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:56 compute-0 nova_compute[351485]: 2025-12-03 02:32:56.860 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:32:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:32:58 compute-0 ceph-mon[192821]: pgmap v2366: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:59 compute-0 nova_compute[351485]: 2025-12-03 02:32:59.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:32:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:59.668 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:59.669 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:59.669 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:32:59 compute-0 podman[158098]: time="2025-12-03T02:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8199 "" "Go-http-client/1.1"
Dec 03 02:33:00 compute-0 ceph-mon[192821]: pgmap v2367: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:33:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:01 compute-0 nova_compute[351485]: 2025-12-03 02:33:01.863 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:02 compute-0 ceph-mon[192821]: pgmap v2368: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:04 compute-0 nova_compute[351485]: 2025-12-03 02:33:04.279 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:04 compute-0 ceph-mon[192821]: pgmap v2369: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:05 compute-0 podman[471920]: 2025-12-03 02:33:05.840972275 +0000 UTC m=+0.089381544 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:33:05 compute-0 podman[471922]: 2025-12-03 02:33:05.846036408 +0000 UTC m=+0.087088529 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:33:05 compute-0 podman[471921]: 2025-12-03 02:33:05.868491581 +0000 UTC m=+0.111764055 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:33:06 compute-0 ceph-mon[192821]: pgmap v2370: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:06 compute-0 nova_compute[351485]: 2025-12-03 02:33:06.866 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:07 compute-0 sudo[471980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:07 compute-0 sudo[471980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:07 compute-0 sudo[471980]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:07 compute-0 sudo[472005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:33:07 compute-0 sudo[472005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:07 compute-0 sudo[472005]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:07 compute-0 sudo[472030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:07 compute-0 sudo[472030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:07 compute-0 sudo[472030]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:07 compute-0 sudo[472055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:33:07 compute-0 sudo[472055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:08 compute-0 sudo[472055]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:33:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aba42754-2fa6-4d55-9898-57826b6f4e86 does not exist
Dec 03 02:33:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 65241505-fc7e-4c43-9303-7e9296cedbca does not exist
Dec 03 02:33:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 13cfc2a0-8921-4f1d-8e1b-624d71ce29a2 does not exist
Dec 03 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:33:08 compute-0 sudo[472110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:08 compute-0 sudo[472110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:08 compute-0 sudo[472110]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:08 compute-0 sudo[472135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:33:08 compute-0 sudo[472135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:08 compute-0 sudo[472135]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:08 compute-0 ceph-mon[192821]: pgmap v2371: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:33:08 compute-0 sudo[472160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:08 compute-0 sudo[472160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:08 compute-0 sudo[472160]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:09 compute-0 sudo[472185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:33:09 compute-0 sudo[472185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:09 compute-0 nova_compute[351485]: 2025-12-03 02:33:09.281 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.66329551 +0000 UTC m=+0.085795472 container create 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.628259641 +0000 UTC m=+0.050759663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:33:09 compute-0 systemd[1]: Started libpod-conmon-621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8.scope.
Dec 03 02:33:09 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.822239525 +0000 UTC m=+0.244739567 container init 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.840157691 +0000 UTC m=+0.262657673 container start 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.847825227 +0000 UTC m=+0.270325199 container attach 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 02:33:09 compute-0 sleepy_benz[472265]: 167 167
Dec 03 02:33:09 compute-0 systemd[1]: libpod-621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8.scope: Deactivated successfully.
Dec 03 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.853863328 +0000 UTC m=+0.276363300 container died 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-000cf0f57ff64c9ebb7553f14f9b3182d5039588ba3083513c6ac578b604ca90-merged.mount: Deactivated successfully.
Dec 03 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.945182955 +0000 UTC m=+0.367682917 container remove 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:33:09 compute-0 systemd[1]: libpod-conmon-621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8.scope: Deactivated successfully.
Dec 03 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.243313778 +0000 UTC m=+0.095101335 container create 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.208369502 +0000 UTC m=+0.060157119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:33:10 compute-0 systemd[1]: Started libpod-conmon-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope.
Dec 03 02:33:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.462339099 +0000 UTC m=+0.314126676 container init 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.479075181 +0000 UTC m=+0.330862728 container start 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.48469176 +0000 UTC m=+0.336479337 container attach 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:33:10 compute-0 nova_compute[351485]: 2025-12-03 02:33:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:10 compute-0 ceph-mon[192821]: pgmap v2372: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:11 compute-0 great_ellis[472303]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:33:11 compute-0 great_ellis[472303]: --> relative data size: 1.0
Dec 03 02:33:11 compute-0 great_ellis[472303]: --> All data devices are unavailable
Dec 03 02:33:11 compute-0 systemd[1]: libpod-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope: Deactivated successfully.
Dec 03 02:33:11 compute-0 podman[472287]: 2025-12-03 02:33:11.783965044 +0000 UTC m=+1.635752621 container died 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:33:11 compute-0 systemd[1]: libpod-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope: Consumed 1.240s CPU time.
Dec 03 02:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8-merged.mount: Deactivated successfully.
Dec 03 02:33:11 compute-0 nova_compute[351485]: 2025-12-03 02:33:11.869 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:11 compute-0 podman[472287]: 2025-12-03 02:33:11.888488584 +0000 UTC m=+1.740276131 container remove 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.891504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729191891579, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 963, "num_deletes": 253, "total_data_size": 1315641, "memory_usage": 1343312, "flush_reason": "Manual Compaction"}
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729191902479, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1302551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47662, "largest_seqno": 48624, "table_properties": {"data_size": 1297713, "index_size": 2426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10645, "raw_average_key_size": 19, "raw_value_size": 1287907, "raw_average_value_size": 2416, "num_data_blocks": 108, "num_entries": 533, "num_filter_entries": 533, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729108, "oldest_key_time": 1764729108, "file_creation_time": 1764729191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 11074 microseconds, and 5154 cpu microseconds.
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.902574) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1302551 bytes OK
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.902595) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.905718) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.905735) EVENT_LOG_v1 {"time_micros": 1764729191905730, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.905753) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1310999, prev total WAL file size 1310999, number of live WAL files 2.
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.907018) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1272KB)], [113(9682KB)]
Dec 03 02:33:11 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729191907132, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11217102, "oldest_snapshot_seqno": -1}
Dec 03 02:33:11 compute-0 systemd[1]: libpod-conmon-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope: Deactivated successfully.
Dec 03 02:33:11 compute-0 sudo[472185]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6253 keys, 9455418 bytes, temperature: kUnknown
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729192002722, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9455418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9413965, "index_size": 24703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 163545, "raw_average_key_size": 26, "raw_value_size": 9301228, "raw_average_value_size": 1487, "num_data_blocks": 981, "num_entries": 6253, "num_filter_entries": 6253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.003034) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9455418 bytes
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.005877) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.2 rd, 98.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(15.9) write-amplify(7.3) OK, records in: 6774, records dropped: 521 output_compression: NoCompression
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.005905) EVENT_LOG_v1 {"time_micros": 1764729192005892, "job": 68, "event": "compaction_finished", "compaction_time_micros": 95675, "compaction_time_cpu_micros": 44462, "output_level": 6, "num_output_files": 1, "total_output_size": 9455418, "num_input_records": 6774, "num_output_records": 6253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729192006495, "job": 68, "event": "table_file_deletion", "file_number": 115}
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729192010339, "job": 68, "event": "table_file_deletion", "file_number": 113}
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.906473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:33:12 compute-0 sudo[472344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:12 compute-0 sudo[472344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:12 compute-0 sudo[472344]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:12 compute-0 sudo[472369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:33:12 compute-0 sudo[472369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:12 compute-0 sudo[472369]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:12 compute-0 sudo[472394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:12 compute-0 sudo[472394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:12 compute-0 sudo[472394]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:12 compute-0 sudo[472419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:33:12 compute-0 sudo[472419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:12 compute-0 ceph-mon[192821]: pgmap v2373: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.043708424 +0000 UTC m=+0.086431180 container create 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.009025095 +0000 UTC m=+0.051747911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:33:13 compute-0 systemd[1]: Started libpod-conmon-9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2.scope.
Dec 03 02:33:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.204505082 +0000 UTC m=+0.247227898 container init 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.220870694 +0000 UTC m=+0.263593440 container start 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 03 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.227609084 +0000 UTC m=+0.270331850 container attach 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:33:13 compute-0 priceless_herschel[472498]: 167 167
Dec 03 02:33:13 compute-0 systemd[1]: libpod-9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2.scope: Deactivated successfully.
Dec 03 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.233388047 +0000 UTC m=+0.276110853 container died 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 02:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d2a7a77478c0b1774309140c42bfc34f031a216b95798c91a0a363b75d3f1a8-merged.mount: Deactivated successfully.
Dec 03 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.308395264 +0000 UTC m=+0.351117990 container remove 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:33:13 compute-0 systemd[1]: libpod-conmon-9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2.scope: Deactivated successfully.
Dec 03 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.615692295 +0000 UTC m=+0.092582853 container create e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:33:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.577192639 +0000 UTC m=+0.054083237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:33:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:13 compute-0 systemd[1]: Started libpod-conmon-e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63.scope.
Dec 03 02:33:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.812462538 +0000 UTC m=+0.289353086 container init e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.832826013 +0000 UTC m=+0.309716551 container start e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.840341885 +0000 UTC m=+0.317232413 container attach e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:33:14 compute-0 nova_compute[351485]: 2025-12-03 02:33:14.284 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]: {
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:     "0": [
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:         {
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "devices": [
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "/dev/loop3"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             ],
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_name": "ceph_lv0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_size": "21470642176",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "name": "ceph_lv0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "tags": {
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cluster_name": "ceph",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.crush_device_class": "",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.encrypted": "0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osd_id": "0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.type": "block",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.vdo": "0"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             },
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "type": "block",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "vg_name": "ceph_vg0"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:         }
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:     ],
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:     "1": [
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:         {
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "devices": [
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "/dev/loop4"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             ],
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_name": "ceph_lv1",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_size": "21470642176",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "name": "ceph_lv1",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "tags": {
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cluster_name": "ceph",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.crush_device_class": "",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.encrypted": "0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osd_id": "1",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.type": "block",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.vdo": "0"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             },
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "type": "block",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "vg_name": "ceph_vg1"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:         }
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:     ],
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:     "2": [
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:         {
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "devices": [
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "/dev/loop5"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             ],
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_name": "ceph_lv2",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_size": "21470642176",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "name": "ceph_lv2",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "tags": {
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.cluster_name": "ceph",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.crush_device_class": "",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.encrypted": "0",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osd_id": "2",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.type": "block",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:                 "ceph.vdo": "0"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             },
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "type": "block",
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:             "vg_name": "ceph_vg2"
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:         }
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]:     ]
Dec 03 02:33:14 compute-0 unruffled_brattain[472536]: }
Dec 03 02:33:14 compute-0 systemd[1]: libpod-e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63.scope: Deactivated successfully.
Dec 03 02:33:14 compute-0 podman[472521]: 2025-12-03 02:33:14.717281381 +0000 UTC m=+1.194171939 container died e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3-merged.mount: Deactivated successfully.
Dec 03 02:33:14 compute-0 podman[472521]: 2025-12-03 02:33:14.824286731 +0000 UTC m=+1.301177249 container remove e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:33:14 compute-0 systemd[1]: libpod-conmon-e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63.scope: Deactivated successfully.
Dec 03 02:33:14 compute-0 sudo[472419]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:14 compute-0 podman[472545]: 2025-12-03 02:33:14.88059602 +0000 UTC m=+0.165649266 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:33:14 compute-0 ceph-mon[192821]: pgmap v2374: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:14 compute-0 sudo[472572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:14 compute-0 sudo[472572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:14 compute-0 sudo[472572]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:15 compute-0 sudo[472598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:33:15 compute-0 sudo[472598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:15 compute-0 sudo[472598]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:15 compute-0 sudo[472623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:15 compute-0 sudo[472623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:15 compute-0 sudo[472623]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:15 compute-0 sudo[472648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:33:15 compute-0 sudo[472648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:15 compute-0 podman[472714]: 2025-12-03 02:33:15.914752364 +0000 UTC m=+0.087964394 container create 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:33:15 compute-0 podman[472714]: 2025-12-03 02:33:15.88237255 +0000 UTC m=+0.055584620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:33:15 compute-0 systemd[1]: Started libpod-conmon-5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145.scope.
Dec 03 02:33:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.059113808 +0000 UTC m=+0.232325878 container init 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.078015861 +0000 UTC m=+0.251227881 container start 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.084846134 +0000 UTC m=+0.258058194 container attach 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:33:16 compute-0 relaxed_cori[472728]: 167 167
Dec 03 02:33:16 compute-0 systemd[1]: libpod-5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145.scope: Deactivated successfully.
Dec 03 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.092912922 +0000 UTC m=+0.266124952 container died 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f531e3fafb8265f1e65607d04b278b1f9ed3ee759fb64a86bcd8f9fc24ecb9c-merged.mount: Deactivated successfully.
Dec 03 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.179016331 +0000 UTC m=+0.352228351 container remove 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:33:16 compute-0 systemd[1]: libpod-conmon-5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145.scope: Deactivated successfully.
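The podman lines above trace one complete lifecycle of a short-lived cephadm helper container (relaxed_cori): image pull, create, init, start, attach, died, remove, with systemd opening and closing the matching libpod/conmon scopes around it. A minimal sketch, assuming podman is on PATH, of streaming that same event sequence live:

    import json
    import subprocess

    # Stream live podman events; each stdout line is one JSON object.
    # Narrow with e.g. --filter container=relaxed_cori if desired.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        event = json.loads(line)
        # "Status"/"Name" are the field names podman 4.x emits
        # (an assumption; adjust for other versions).
        print(event.get("Status"), event.get("Name"))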
Dec 03 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.495284386 +0000 UTC m=+0.121628863 container create 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.438873835 +0000 UTC m=+0.065218372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:33:16 compute-0 systemd[1]: Started libpod-conmon-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope.
Dec 03 02:33:16 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.70909159 +0000 UTC m=+0.335436087 container init 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 03 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.728490427 +0000 UTC m=+0.354834914 container start 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.736693309 +0000 UTC m=+0.363037826 container attach 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:33:16 compute-0 nova_compute[351485]: 2025-12-03 02:33:16.874 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:16 compute-0 ceph-mon[192821]: pgmap v2375: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:17 compute-0 ovn_controller[89134]: 2025-12-03T02:33:17Z|00202|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Dec 03 02:33:17 compute-0 podman[472794]: 2025-12-03 02:33:17.872335646 +0000 UTC m=+0.096846733 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 03 02:33:17 compute-0 podman[472793]: 2025-12-03 02:33:17.877690437 +0000 UTC m=+0.106190357 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release-0.7.12=)
Dec 03 02:33:17 compute-0 podman[472791]: 2025-12-03 02:33:17.880120666 +0000 UTC m=+0.119844142 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]: {
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "osd_id": 2,
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "type": "bluestore"
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:     },
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "osd_id": 1,
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "type": "bluestore"
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:     },
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "osd_id": 0,
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:         "type": "bluestore"
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]:     }
Dec 03 02:33:17 compute-0 nostalgic_shtern[472768]: }
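The JSON block printed by nostalgic_shtern is the output of the `ceph-volume ... raw list --format json` run that cephadm launched via sudo a few lines earlier: an inventory of the three BlueStore OSDs on this host, keyed by osd_uuid. This is the data cephadm then persists through the `config-key set mgr/cephadm/host.compute-0.devices.0` mon command seen below. A minimal sketch of consuming that inventory, with the JSON embedded verbatim from the log:

    import json

    # Inventory exactly as printed by the helper container above.
    inventory = json.loads("""
    {
        "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
            "type": "bluestore"
        },
        "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
            "type": "bluestore"
        },
        "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
            "type": "bluestore"
        }
    }
    """)

    # Map osd_id -> backing LVM device.
    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']} -> {osd['device']} ({osd['type']})")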
Dec 03 02:33:17 compute-0 podman[472792]: 2025-12-03 02:33:17.898673239 +0000 UTC m=+0.137345596 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:33:17 compute-0 systemd[1]: libpod-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope: Deactivated successfully.
Dec 03 02:33:17 compute-0 systemd[1]: libpod-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope: Consumed 1.188s CPU time.
Dec 03 02:33:17 compute-0 podman[472752]: 2025-12-03 02:33:17.924409815 +0000 UTC m=+1.550754262 container died 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:33:17 compute-0 podman[472789]: 2025-12-03 02:33:17.936121886 +0000 UTC m=+0.177139899 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920-merged.mount: Deactivated successfully.
Dec 03 02:33:17 compute-0 podman[472752]: 2025-12-03 02:33:17.990572492 +0000 UTC m=+1.616916929 container remove 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:33:18 compute-0 systemd[1]: libpod-conmon-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope: Deactivated successfully.
Dec 03 02:33:18 compute-0 sudo[472648]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:33:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:33:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:33:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:33:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fad24750-0aab-4270-83d8-4edff869152a does not exist
Dec 03 02:33:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 50e1f2ae-6872-4a64-a957-5b452d07e7eb does not exist
Dec 03 02:33:18 compute-0 sudo[472916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:33:18 compute-0 sudo[472916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:18 compute-0 sudo[472916]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:18 compute-0 sudo[472941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:33:18 compute-0 sudo[472941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:33:18 compute-0 sudo[472941]: pam_unix(sudo:session): session closed for user root
Dec 03 02:33:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:19 compute-0 ceph-mon[192821]: pgmap v2376: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:33:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:33:19 compute-0 nova_compute[351485]: 2025-12-03 02:33:19.289 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.516 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.517 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.531 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.531 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
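The run above is one complete ceilometer polling cycle: for each pollster the agent re-runs the local_instances discovery, finds no instances on this host, logs the skip at manager.py:321, then marks every polling task finished at manager.py:272. A minimal sketch of that control flow, using hypothetical names (Pollster, discover_local_instances); the real ceilometer.polling.manager API differs:

    # Illustrative reconstruction of the discover-then-poll loop in the log;
    # not the actual ceilometer code.
    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            return [(self.name, r) for r in resources]

    def discover_local_instances():
        # Libvirt reports no running instances on this host, so every
        # pollster below takes the "skip" branch.
        return []

    def run_cycle(pollsters):
        for pollster in pollsters:
            resources = discover_local_instances()
            if not resources:
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
                continue
            for sample in pollster.get_samples(resources):
                print("publish", sample)
        for pollster in pollsters:
            print(f"Finished processing pollster [{pollster.name}].")

    run_cycle([Pollster("power.state"), Pollster("cpu"), Pollster("disk.root.size")])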
Dec 03 02:33:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:20 compute-0 ceph-mon[192821]: pgmap v2377: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:21 compute-0 nova_compute[351485]: 2025-12-03 02:33:21.879 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:22 compute-0 ceph-mon[192821]: pgmap v2378: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:24 compute-0 nova_compute[351485]: 2025-12-03 02:33:24.292 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:24 compute-0 nova_compute[351485]: 2025-12-03 02:33:24.607 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:24 compute-0 ceph-mon[192821]: pgmap v2379: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:26 compute-0 ceph-mon[192821]: pgmap v2380: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:26 compute-0 nova_compute[351485]: 2025-12-03 02:33:26.882 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:33:28
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'backups', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta']
Dec 03 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:33:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:28 compute-0 ceph-mon[192821]: pgmap v2381: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:29 compute-0 nova_compute[351485]: 2025-12-03 02:33:29.296 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:33:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:29 compute-0 podman[158098]: time="2025-12-03T02:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8202 "" "Go-http-client/1.1"
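The two GET requests above are the podman service answering libpod REST calls over its Unix socket (the socket path, unix:///run/podman/podman.sock, also appears in the podman_exporter config_data later in this log). A small sketch of the same query using only Python's standard library, assuming the caller can read that socket:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a local Unix socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint and query string as the first request in the log.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])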
Dec 03 02:33:30 compute-0 ceph-mon[192821]: pgmap v2382: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
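These appctl.go errors mean the exporter could not locate the daemons' control sockets, so every appctl-style call fails before it is issued. A quick way to confirm which sockets actually exist; the paths under /run/openvswitch and /run/ovn are the conventional defaults and are an assumption here, not taken from the log:

    import glob

    # Conventional control-socket locations (assumed, not from the log).
    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "missing (consistent with the errors above)")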
Dec 03 02:33:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:31 compute-0 nova_compute[351485]: 2025-12-03 02:33:31.886 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:32 compute-0 ceph-mon[192821]: pgmap v2383: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.848 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.850 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
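The three lockutils lines above form one acquire/run/release cycle on the "compute_resources" semaphore around ResourceTracker.clean_compute_node_cache. A minimal sketch of the same pattern with oslo.concurrency; the lock name is taken from the log, the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Held only briefly, hence ':: held 0.000s' in the log.
        pass

    clean_compute_node_cache()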
Dec 03 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.850 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.851 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:33:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:33:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1581721207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:33:33 compute-0 nova_compute[351485]: 2025-12-03 02:33:33.385 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
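To size the RBD-backed disk pool, the resource tracker shells out to ceph. The command below is copied verbatim from the log; the JSON keys ("stats", "total_bytes", "total_avail_bytes") reflect ceph df's usual schema and should be treated as an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    # Feeds the free_disk figure in the hypervisor resource view below.
    print(stats["total_bytes"], stats["total_avail_bytes"])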
Dec 03 02:33:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:33 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1581721207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.017 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.019 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3953MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.019 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.020 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.117 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.118 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.298 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.322 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:33:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:33:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406111295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.829 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:33:34 compute-0 ceph-mon[192821]: pgmap v2384: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:34 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/406111295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.841 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.854 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
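The inventory dict above is what Placement uses to derive schedulable capacity. The usual rule, (total - reserved) * allocation_ratio, is an assumption here, but it yields consistent figures for this host:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2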
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.856 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.856 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:33:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:36 compute-0 ceph-mon[192821]: pgmap v2385: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:36 compute-0 podman[473015]: 2025-12-03 02:33:36.879193515 +0000 UTC m=+0.115561113 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:33:36 compute-0 nova_compute[351485]: 2025-12-03 02:33:36.889 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:36 compute-0 podman[473013]: 2025-12-03 02:33:36.892435848 +0000 UTC m=+0.138275963 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:33:36 compute-0 podman[473014]: 2025-12-03 02:33:36.899607861 +0000 UTC m=+0.143338676 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
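The health_status=healthy events above come from podman's scheduled healthchecks for each EDPM-managed container. The same check can be triggered on demand; the container name is taken from the log, and a zero exit status means healthy:

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        check=False,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")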
Dec 03 02:33:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.857 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.857 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.858 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.897 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.897 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.899 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:38 compute-0 ceph-mon[192821]: pgmap v2386: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
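Each pg_autoscaler line pairs a usage ratio and a bias with a pg target, and the logged targets are reproduced exactly by usage_ratio * bias * 300, which suggests (as an inference from the numbers, not a statement in the log) mon_target_pg_per_osd=100 across 3 OSDs. The current pg_nums (one pool at 1, ten pools at 32) also add up to the 321 PGs reported in the surrounding pgmap lines:

    # Factor of 300 (100 PGs per OSD x 3 OSDs) is inferred, not logged.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0009191400908380543, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * 300)
    # .mgr               -> 0.0021557249951162337 (quantized to 1)
    # images             -> 0.2757420272514163    (quantized to 32)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (quantized to 16)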
Dec 03 02:33:39 compute-0 nova_compute[351485]: 2025-12-03 02:33:39.300 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:39 compute-0 nova_compute[351485]: 2025-12-03 02:33:39.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:39 compute-0 nova_compute[351485]: 2025-12-03 02:33:39.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:40 compute-0 ceph-mon[192821]: pgmap v2387: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:41 compute-0 nova_compute[351485]: 2025-12-03 02:33:41.893 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:42 compute-0 ceph-mon[192821]: pgmap v2388: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:43 compute-0 sshd-session[473071]: Accepted publickey for zuul from 192.168.122.10 port 40372 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 02:33:43 compute-0 systemd-logind[800]: New session 63 of user zuul.
Dec 03 02:33:43 compute-0 systemd[1]: Started Session 63 of User zuul.
Dec 03 02:33:43 compute-0 sshd-session[473071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 02:33:43 compute-0 sudo[473075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 03 02:33:43 compute-0 sudo[473075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:33:43 compute-0 nova_compute[351485]: 2025-12-03 02:33:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:44 compute-0 nova_compute[351485]: 2025-12-03 02:33:44.302 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:44 compute-0 nova_compute[351485]: 2025-12-03 02:33:44.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:44 compute-0 nova_compute[351485]: 2025-12-03 02:33:44.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:33:44 compute-0 ceph-mon[192821]: pgmap v2389: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:45 compute-0 podman[473174]: 2025-12-03 02:33:45.867003538 +0000 UTC m=+0.119250906 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 03 02:33:46 compute-0 nova_compute[351485]: 2025-12-03 02:33:46.897 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:46 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15543 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:46 compute-0 ceph-mon[192821]: pgmap v2390: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:33:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809714596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:33:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:33:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809714596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:33:47 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15549 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:47 compute-0 ceph-mon[192821]: from='client.15543 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/809714596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:33:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/809714596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:33:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 03 02:33:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1295810968' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:33:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:48 compute-0 podman[473343]: 2025-12-03 02:33:48.887870876 +0000 UTC m=+0.111147698 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:33:48 compute-0 podman[473342]: 2025-12-03 02:33:48.904694341 +0000 UTC m=+0.138293934 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc.)
Dec 03 02:33:48 compute-0 podman[473350]: 2025-12-03 02:33:48.919685844 +0000 UTC m=+0.124796343 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4)
Dec 03 02:33:48 compute-0 podman[473362]: 2025-12-03 02:33:48.923186623 +0000 UTC m=+0.118670610 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:33:48 compute-0 podman[473341]: 2025-12-03 02:33:48.923348727 +0000 UTC m=+0.165907493 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:33:48 compute-0 ceph-mon[192821]: from='client.15549 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:48 compute-0 ceph-mon[192821]: pgmap v2391: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1295810968' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:33:49 compute-0 nova_compute[351485]: 2025-12-03 02:33:49.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:50 compute-0 ceph-mon[192821]: pgmap v2392: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:51 compute-0 ovs-vsctl[473476]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 03 02:33:51 compute-0 nova_compute[351485]: 2025-12-03 02:33:51.900 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:52 compute-0 ceph-mon[192821]: pgmap v2393: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:53 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 03 02:33:53 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 03 02:33:53 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 03 02:33:53 compute-0 nova_compute[351485]: 2025-12-03 02:33:53.325 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:33:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:54 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: cache status {prefix=cache status} (starting...)
Dec 03 02:33:54 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: client ls {prefix=client ls} (starting...)
Dec 03 02:33:54 compute-0 lvm[473802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 03 02:33:54 compute-0 lvm[473802]: VG ceph_vg0 finished
Dec 03 02:33:54 compute-0 nova_compute[351485]: 2025-12-03 02:33:54.307 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:54 compute-0 lvm[473814]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 03 02:33:54 compute-0 lvm[473814]: VG ceph_vg1 finished
Dec 03 02:33:54 compute-0 lvm[473878]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 03 02:33:54 compute-0 lvm[473878]: VG ceph_vg2 finished
Dec 03 02:33:54 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: damage ls {prefix=damage ls} (starting...)
Dec 03 02:33:54 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15553 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:54 compute-0 ceph-mon[192821]: pgmap v2394: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump loads {prefix=dump loads} (starting...)
Dec 03 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 03 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 03 02:33:55 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 03 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 03 02:33:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 03 02:33:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187494540' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 03 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 03 02:33:56 compute-0 ceph-mon[192821]: from='client.15553 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:56 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1187494540' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 03 02:33:56 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 03 02:33:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:33:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028655004' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:33:56 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:56 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 03 02:33:56 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:33:56.339+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 03 02:33:56 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: ops {prefix=ops} (starting...)
Dec 03 02:33:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 03 02:33:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369402920' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 03 02:33:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 03 02:33:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099872981' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 03 02:33:56 compute-0 nova_compute[351485]: 2025-12-03 02:33:56.902 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:57 compute-0 ceph-mon[192821]: from='client.15555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:57 compute-0 ceph-mon[192821]: pgmap v2395: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2028655004' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:33:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3369402920' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 03 02:33:57 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1099872981' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 03 02:33:57 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: session ls {prefix=session ls} (starting...)
Dec 03 02:33:57 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: status {prefix=status} (starting...)
Dec 03 02:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 03 02:33:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172274940' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 03 02:33:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550125421' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 03 02:33:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 03 02:33:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693103312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:33:57 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15575 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mon[192821]: from='client.15561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3172274940' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/550125421' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3693103312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 03 02:33:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577305353' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15579 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 03 02:33:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510990327' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 03 02:33:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676029562' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 03 02:33:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233607358' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mon[192821]: pgmap v2396: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:59 compute-0 ceph-mon[192821]: from='client.15575 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1577305353' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3510990327' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/676029562' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 03 02:33:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3492666941' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 03 02:33:59 compute-0 nova_compute[351485]: 2025-12-03 02:33:59.314 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:33:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 03 02:33:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325094571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15591 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:33:59 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 03 02:33:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:33:59.640+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 03 02:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:33:59.670 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:33:59.670 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:33:59.670 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:33:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:33:59 compute-0 podman[158098]: time="2025-12-03T02:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8200 "" "Go-http-client/1.1"
Dec 03 02:33:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mon[192821]: from='client.15579 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2233607358' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3492666941' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2325094571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 03 02:34:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1313203538' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15597 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 03 02:34:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936925648' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 03 02:34:00 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15601 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:01 compute-0 ceph-mon[192821]: from='client.15591 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:01 compute-0 ceph-mon[192821]: pgmap v2397: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:01 compute-0 ceph-mon[192821]: from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:01 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1313203538' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 03 02:34:01 compute-0 ceph-mon[192821]: from='client.15597 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:01 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2936925648' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 03 02:34:01 compute-0 ceph-mon[192821]: from='client.15601 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 03 02:34:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2704523019' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:09.774085+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:10.774474+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:11.775003+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:12.775367+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:13.775707+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:14.775889+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:15.776286+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:16.776669+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:17.776999+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:18.777329+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:19.777735+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:20.777904+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:21.778193+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:22.778418+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:23.778687+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:24.779001+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:25.779348+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:26.779729+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:27.779985+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:28.780346+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:29.780744+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:30.781079+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:31.781444+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:32.781633+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:33.781894+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:34.782239+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:35.782675+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:36.783146+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:37.783362+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:38.783654+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:39.783830+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:40.784091+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:41.784416+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:42.784762+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:43.785149+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:44.785405+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:45.786777+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:46.787141+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:47.787644+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:48.788070+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:49.788351+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:50.788693+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:51.789024+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:52.789395+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:53.789806+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec9400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.043724060s of 64.707359314s, submitted: 90
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec9400 session 0x558b8521f680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8c00 session 0x558b8634b2c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b8634a5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a93c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:54.790176+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b84a93c00 session 0x558b85fc3a40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b85fc2f00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b84a2cd20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8c00 session 0x558b84a2cf00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec9400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec9400 session 0x558b83a950e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa225000/0x0/0x4ffc00000, data 0x293276a/0x29f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:55.790463+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b84a92000 session 0x558b8625ab40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b86260000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b85bbcb40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8c00 session 0x558b85bbcd20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:56.790852+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:57.792638+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec9400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec9400 session 0x558b85bbda40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:58.792962+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359205 data_alloc: 234881024 data_used: 24854528
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b84a92400 session 0x558b83ca5c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:59.793303+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 9396224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b8624a5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b8624be00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:00.793685+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105472000 unmapped: 9379840 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec9400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:01.794155+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 9355264 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:02.794516+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 9355264 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:03.794907+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6537216 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1383123 data_alloc: 251658240 data_used: 28000256
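
Each tuning cycle prints two commit_cache_size ratios, one per RocksDB block cache being resized. The printed values sit very close to small integer fractions, which suggests they are derived from small integer counts rather than tuned continuously; that is an inference from the numbers alone, checked with a short sketch:

    from fractions import Fraction

    for ratio in ("0.285714", "0.0555556", "0.056338"):
        approx = Fraction(ratio).limit_denominator(100)
        print(ratio, "~=", approx)
    # 0.285714 ~= 2/7, 0.0555556 ~= 1/18, 0.056338 ~= 4/71
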
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:04.795162+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5996544 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:05.795397+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5996544 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:06.795727+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108863488 unmapped: 5988352 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:07.795945+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:08.796240+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:09.796877+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:10.797135+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:11.797510+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:12.797916+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:13.798135+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:14.798393+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:15.798597+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:16.799102+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:17.799718+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:18.800028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:19.800300+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:20.800656+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:21.800966+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 5914624 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:22.801199+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 5914624 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:23.801607+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:24.802101+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:25.802518+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:26.802885+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:27.803165+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:28.803409+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 5873664 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:29.803709+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:30.804079+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:31.804619+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:32.805946+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:33.806359+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:34.806872+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:35.807950+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:36.808144+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:37.808508+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.798915863s of 43.856277466s, submitted: 3
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:38.808855+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 4931584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1401303 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:39.809202+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 4743168 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:40.809629+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 2359296 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:41.810031+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 2359296 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:42.810315+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 2359296 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec6000/0x0/0x4ffc00000, data 0x2af077a/0x2bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:43.810520+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 2326528 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414313 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:44.810932+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 2326528 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:45.811199+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec6000/0x0/0x4ffc00000, data 0x2af077a/0x2bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:46.811765+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:47.812123+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:48.812477+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:49.812691+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:50.813016+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:51.828157+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:52.828406+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:53.828738+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:54.828906+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:55.829434+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:56.829796+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 2211840 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:57.829989+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:58.830329+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:59.830661+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
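
The exporter is failing to find the appctl control sockets it needs to query ovn-northd and ovsdb-server; on a compute node that hosts neither daemon, that is expected noise rather than a storage problem. A hypothetical reproduction of the lookup (the socket paths are assumptions based on the usual OVS/OVN run directories, not taken from the exporter's source):

    import glob

    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")
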
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:00.831063+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:01.831469+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:02.831845+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:03.832135+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832

Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
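
pmd-rxq-show and pmd-perf-show report only on the userspace (dpif-netdev) datapath; on a host whose bridges use the kernel datapath there is no such datapath to query, which matches the "please specify an existing datapath" errors. Reproducing the calls manually, assuming ovs-appctl is installed and ovs-vswitchd is running:

    import subprocess

    for cmd in ("dpif-netdev/pmd-rxq-show", "dpif-netdev/pmd-perf-show"):
        result = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        # On a kernel-datapath host this prints the same datapath error as the log.
        print(cmd, "->", (result.stdout or result.stderr).strip())
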
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:04.832620+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:05.832988+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:06.833418+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 2310144 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:07.833917+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 2310144 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:08.834265+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 2310144 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:09.834720+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:10.835112+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:11.835642+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:12.836036+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:13.836403+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:14.836742+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:15.837060+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:16.837653+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:17.837890+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:18.838250+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:19.838759+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:20.839111+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:21.839479+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:22.839885+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:23.840332+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:24.840804+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:25.841215+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:26.841647+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:27.842002+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:28.842381+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:29.842709+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:30.842986+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:31.843792+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:32.844258+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:33.844894+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:34.845309+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:35.845508+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:36.845848+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:37.846163+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:38.846414+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:39.846971+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:40.847371+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:41.847858+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:42.848288+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:43.848717+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:44.849089+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:45.849470+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:46.850021+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:47.850450+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:48.850825+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:49.851168+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:50.851653+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:51.852123+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:52.852423+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:53.852762+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:54.853047+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:55.853387+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:56.853723+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:57.853983+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:58.854318+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:59.854813+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:00.855172+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:01.855684+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:02.856438+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:03.856785+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:04.857196+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:05.857614+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:06.857963+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:07.858280+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:08.858611+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:09.858916+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:10.859303+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:11.859758+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:12.860113+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:13.860463+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:14.860825+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:15.861200+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:16.861692+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:17.862032+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:18.862393+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:19.862720+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:20.863042+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:21.864191+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:22.864632+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:23.865024+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:24.865390+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:25.865814+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets getting new tickets!
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:26.866192+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _finish_auth 0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:26.867917+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:27.866680+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:28.867075+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:29.867508+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:30.867830+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:31.868213+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:32.868454+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:33.868863+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:34.869318+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:35.869671+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:36.870016+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:37.870299+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:38.870731+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:39.871109+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:40.871430+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:41.871882+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:42.872212+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:43.872724+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:44.873151+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:45.873502+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:46.873935+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:47.874338+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:48.874772+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:49.875073+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:50.875306+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:51.875753+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:52.876130+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:53.876680+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:54.877018+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:55.877336+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:56.877618+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:57.878062+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:58.878430+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:59.878745+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:00.879066+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:01.879326+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:02.879799+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:03.880151+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:04.880594+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:05.880882+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:06.881176+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:07.881505+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:08.881885+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:09.882130+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:10.882514+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:11.882970+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:12.883302+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:13.883708+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:14.884065+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:15.884461+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:16.884840+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:17.885016+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:18.885372+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:19.885758+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:20.886090+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:21.886441+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:22.887056+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:23.887522+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:24.887966+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:25.888352+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:26.888627+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:27.889077+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:28.889591+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:29.889989+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:30.890284+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:31.890811+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:32.891227+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:33.891700+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:34.892127+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:35.892710+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:36.893009+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:37.893366+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 2228224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:38.893751+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 2228224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:39.894064+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 2228224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c47000 session 0x558b83d4ad20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 182.060043335s of 182.239807129s, submitted: 27
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c49800 session 0x558b84a2c3c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85e1c400 session 0x558b8624af00
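
Note: the burst just above — ms_handle_reset on several sessions, a handle_auth_request challenge, and mapped memory dropping from ~112.6 MB to ~107.5 MB on the following tune_memory lines — reads as peers reconnecting with the heap shrinking afterwards. The "_kv_sync_thread utilization" line, meanwhile, quantifies how idle this OSD is:

    # Restate the _kv_sync_thread utilization line as arithmetic.
    idle, window, submitted = 182.060043335, 182.239807129, 27
    busy = window - idle
    print(f"busy {busy:.3f}s of {window:.1f}s "
          f"({busy / window:.2%}), {submitted} commits")  # ~0.10% busy
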
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:40.894390+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 7315456 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:41.894826+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b8634b2c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:42.895306+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85e1c000 session 0x558b85bac780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:43.895857+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:44.896132+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
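
Note: from this point kv_alloc grows from 71 to 72 chunks of 16 MiB (1191182336 -> 1207959552) and the second "High Pri Pool Ratio" value shifts from 0.056338 to 0.0555556 — exactly 4/71 and 4/72. Whether the ratio really is 4 divided by the kv chunk count is an assumption from the numbers alone; the correlation itself is easy to check:

    # Observed correlation only: ratio == 4 / (kv_alloc / 16 MiB)?
    CHUNK = 16 * 1024 * 1024
    for kv_alloc, logged in ((1191182336, 0.056338),
                             (1207959552, 0.0555556)):
        chunks = kv_alloc // CHUNK
        print(chunks, f"{4 / chunks:.6g}", "logged:", logged)
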
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:45.896803+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:46.897150+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:47.897454+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:48.898025+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:49.898349+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:50.898690+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:51.899015+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:52.899318+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:53.899776+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:54.900039+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:55.900398+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:56.900767+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:57.901174+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:58.901665+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:59.902053+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:00.902318+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:01.902695+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:02.903131+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:03.903460+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:04.903841+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:05.904210+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:06.904715+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:07.905105+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:08.905466+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:09.905863+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:10.906209+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:11.906761+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:12.907095+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:13.907451+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:14.908269+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:15.908753+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:16.909102+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:17.909505+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:18.909910+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:19.910115+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:20.910445+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:21.910988+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:22.911219+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:23.911601+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:24.912070+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:25.912421+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:26.912752+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:27.913106+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:28.913342+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:29.913704+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:30.914040+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:31.914333+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:32.914661+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:33.914940+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:34.915249+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:35.915448+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:36.915750+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:37.916019+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:38.916484+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:39.916836+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:40.917115+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:41.917586+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:42.917848+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:43.918269+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:44.918729+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:45.919096+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:46.919432+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:47.919862+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:48.920214+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:49.920453+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:50.920784+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:51.921278+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:52.921730+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:53.922050+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:54.922292+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:55.922757+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:56.923140+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:57.923455+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:58.923835+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:59.924371+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:00.924787+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:01.925362+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:02.925781+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:03.926168+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:04.926640+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:05.926876+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:06.927242+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:07.928161+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:08.928614+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:09.928808+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:10.929145+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:11.929637+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:12.930035+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:13.930409+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:14.930727+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:15.931074+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:16.931382+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:17.931809+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:18.932246+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:19.932676+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:20.933035+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:21.933453+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:22.933767+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:23.934121+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:24.934350+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:25.934731+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:26.935159+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:27.935389+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:28.935816+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:29.936151+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:30.936463+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:31.936990+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:32.937258+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:33.937727+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:34.938160+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:35.938615+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:36.939688+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:37.940013+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:38.940319+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:39.940803+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:40.941017+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c46c00 session 0x558b8625af00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 121.117050171s of 121.235237122s, submitted: 20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85707000 session 0x558b849b9860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8800 session 0x558b83ca30e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:41.941475+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c49800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 7503872 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:42.941862+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 10043392 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c49800 session 0x558b85bbc5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:43.942266+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:44.942707+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:45.943361+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:46.943607+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:47.943914+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:48.944277+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:49.944745+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:50.945170+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:51.945766+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:52.946165+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:53.946816+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:54.947140+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:55.947638+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:56.948143+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:57.948612+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:58.949060+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:59.949349+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:00.949833+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:01.950122+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:02.950614+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:03.951130+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:04.951624+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:05.952032+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:06.952439+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:07.952919+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:08.953318+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:09.953867+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:10.954319+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:11.954775+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:12.955175+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:13.955616+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:14.956028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:15.956229+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:16.956670+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:17.956962+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:18.957279+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:19.957856+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:20.958199+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.548728943s of 39.757991791s, submitted: 30
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104562688 unmapped: 10289152 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182035 data_alloc: 218103808 data_used: 16781312
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:21.958667+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104562688 unmapped: 10289152 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:22.959166+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104587264 unmapped: 10264576 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:23.959651+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104587264 unmapped: 10264576 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:24.959990+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76c9/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 18161664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:25.960331+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b83a941e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238520 data_alloc: 218103808 data_used: 16789504
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:26.960679+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:27.961048+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:28.961408+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9931000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:29.961716+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:30.962064+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238520 data_alloc: 218103808 data_used: 16789504
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:31.962415+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:32.962794+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:33.963083+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b844081e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85707000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b85fc2960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b85fc25a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104628224 unmapped: 26058752 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:34.963457+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9931000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8000 session 0x558b85fc21e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 24879104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b8624a5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:35.963762+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.587652206s of 14.710074425s, submitted: 13
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b8624be00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85707000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b84974780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b862fa5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 24707072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320516 data_alloc: 234881024 data_used: 16769024
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b84a92c00 session 0x558b862fb0e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b862faf00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:36.964072+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b862fab40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85707000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b8634b0e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 25133056 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b8634a5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a93000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b84a93000 session 0x558b83c523c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:37.964440+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 25272320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:38.964964+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 25264128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:39.965212+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 25264128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:40.965501+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b83ca43c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105070592 unmapped: 25616384 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324327 data_alloc: 234881024 data_used: 16769024
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:41.965898+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85707000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105086976 unmapped: 25600000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:42.966248+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105086976 unmapped: 25600000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:43.966588+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:44.966773+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:45.967064+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403339 data_alloc: 234881024 data_used: 27774976
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:46.967518+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:47.967930+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:48.969653+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:49.970060+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:50.970263+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403339 data_alloc: 234881024 data_used: 27774976
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:51.970689+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:52.971054+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:53.971403+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:54.971703+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:55.971961+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403339 data_alloc: 234881024 data_used: 27774976
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:56.972223+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:57.972621+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:58.972786+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:59.973072+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:00.973402+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403499 data_alloc: 234881024 data_used: 27779072
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:01.973848+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 19955712 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:02.974042+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b83ca2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b862fb4a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.913837433s of 27.059389114s, submitted: 26
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b85de2d20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a93400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 26017792 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:03.974353+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b84a93400 session 0x558b84976000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 26001408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:04.974787+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9932000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9932000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 26001408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:05.975248+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 26001408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248005 data_alloc: 234881024 data_used: 16769024
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:06.975818+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 25993216 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:07.979679+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 ms_handle_reset con 0x558b83a73000 session 0x558b85278f00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:08.980056+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:09.980419+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:10.980818+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x18fadf4/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181099 data_alloc: 218103808 data_used: 9981952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:11.981260+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:12.981690+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:13.982063+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x18fadf4/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:14.982390+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:15.982704+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:16.982918+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181099 data_alloc: 218103808 data_used: 9981952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 ms_handle_reset con 0x558b85e1d400 session 0x558b8624ab40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x18fadf4/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:17.983161+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85707000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 ms_handle_reset con 0x558b85707000 session 0x558b862603c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.032296181s of 15.317586899s, submitted: 49
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:18.983672+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:19.984028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 32022528 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x18fc857/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:20.984448+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 32022528 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x18fc857/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:21.984965+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x18fc857/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 30973952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184738 data_alloc: 218103808 data_used: 9981952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:22.985303+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 30973952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:23.985718+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 32022528 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 ms_handle_reset con 0x558b85ec8800 session 0x558b86261e00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:24.986046+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9929000/0x0/0x4ffc00000, data 0x2089dd4/0x2154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7002 writes, 28K keys, 7002 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7002 writes, 1484 syncs, 4.72 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 615 writes, 1901 keys, 615 commit groups, 1.0 writes per commit group, ingest: 1.36 MB, 0.00 MB/s
                                            Interval WAL: 615 writes, 283 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:25.986408+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:26.986777+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240968 data_alloc: 218103808 data_used: 9990144
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:27.987180+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:28.987659+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9929000/0x0/0x4ffc00000, data 0x2089dd4/0x2154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:29.988072+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:30.988482+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.118351936s of 13.264735222s, submitted: 27
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:31.989020+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 31932416 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240312 data_alloc: 218103808 data_used: 9990144
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 ms_handle_reset con 0x558b83c48000 session 0x558b84a2cd20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:32.989364+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f992a000/0x0/0x4ffc00000, data 0x2089dd4/0x2154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:33.989811+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:34.990226+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:35.990637+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:36.991043+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192646 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x18fffa5/0x19cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:37.991669+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:38.992211+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:39.992711+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:41.005817+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:42.006250+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:43.006754+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:44.007014+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:45.007322+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:46.007820+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:47.008139+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:48.008524+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:49.009008+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:50.009888+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:51.010248+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:52.010715+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:53.011028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:54.011251+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:55.011751+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:56.012080+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:57.012434+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:58.012748+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:59.013089+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:00.013299+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:01.013736+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:02.014040+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:03.014441+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:04.014835+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:05.015204+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:06.015761+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:07.016218+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:08.016824+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:09.017349+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:10.017811+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:11.018208+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:12.018721+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:13.019129+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:14.019694+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:15.020111+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:16.020484+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:17.020911+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:18.021234+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:19.021674+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:20.021935+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:21.022312+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:22.022749+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:23.023178+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:24.023644+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:25.024113+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:26.024488+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:27.024858+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:28.025213+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:29.025786+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:30.026171+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:31.026611+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:32.027007+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:33.027458+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:34.027860+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:35.028178+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:36.028645+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:37.029138+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:38.029495+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:39.029836+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:40.030267+0000)
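One oddity worth flagging in the monclient tick/auth cycle: every journald stamp in this stretch reads 02:34:01, while the _check_auth_rotating lines carry internal times that advance exactly one second per tick, climbing from 02:10:33 past 02:12:00. The likely explanation is that the daemon's buffered output was flushed into the journal in one burst around 02:34, so the journal stamps mark the flush rather than the events; that is an inference from the timestamps, not something the log states. A sketch that measures the gap, assuming the quoted expiry cutoff tracks the daemon's clock at a fixed offset:

    from datetime import datetime, timezone

    journal_stamp = datetime(2025, 12, 3, 2, 34, 1, tzinfo=timezone.utc)
    # Internal cutoff quoted by the _check_auth_rotating line above.
    internal = datetime.fromisoformat("2025-12-03T02:10:40.030267+00:00")

    lag = journal_stamp - internal
    print(f"journal stamp leads the internal clock by {lag}")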
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:41.030993+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:42.031492+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:43.031769+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:44.032180+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:45.032501+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:46.032850+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:47.033262+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:48.033798+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:49.034381+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.878738403s of 78.146438599s, submitted: 53
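The _kv_sync_thread utilization line is the only workload signal in this window: the BlueStore key-value sync thread was idle for 77.88 s of the last 78.15 s, about 99.7%, and submitted just 53 transactions, which squares with an essentially quiescent OSD. The arithmetic:

    idle, period, submitted = 77.878738403, 78.146438599, 53
    print(f"idle {idle / period:.2%}, "
          f"~{submitted / period:.2f} txns/s submitted")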
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:50.034778+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:51.035135+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 31236096 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:52.035519+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 31907840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:53.035898+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:54.036314+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:55.036637+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:56.037051+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:57.037440+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:58.037827+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:59.038239+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:00.038779+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:01.039089+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:02.039495+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:03.039807+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:04.040170+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:05.040644+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:06.041272+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:07.041710+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:08.042094+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:09.042462+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:10.042814+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:11.043250+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:12.043728+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:13.044143+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:14.044767+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:15.045124+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:16.045381+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:17.045791+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:18.046199+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:19.046685+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:20.047037+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:21.047420+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:22.047842+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:23.048110+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:24.048443+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:25.048767+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:26.049463+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:27.049942+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:28.050311+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:29.050760+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:30.051046+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:31.051340+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:32.051792+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:33.052146+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:34.052480+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:35.052803+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:36.053298+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:37.053772+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:38.054147+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:39.054436+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:40.054763+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:41.055030+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:42.055649+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:43.055964+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:44.056351+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:45.056708+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:46.056920+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:47.057317+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:48.057790+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:49.058210+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:50.058656+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:51.058949+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:52.059310+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:53.059767+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:54.060142+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:55.060404+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:56.060728+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:57.061142+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:58.061367+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:59.061753+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:00.062155+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:01.062667+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:02.063126+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:03.063479+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:04.063869+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:05.064272+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:06.064678+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:07.065028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:08.065301+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:09.065714+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:10.066136+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:11.066691+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:12.067090+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:13.067670+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:14.067965+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:15.068325+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:16.068778+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:17.069127+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:18.069516+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:19.069971+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:20.070414+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:21.070854+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:22.071238+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:23.071725+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:24.072180+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:25.072718+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:26.073037+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:27.073379+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:28.073672+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:29.074025+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec8c00 session 0x558b8624b0e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec9400 session 0x558b85fc3c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b84a92800 session 0x558b86376d20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:30.075064+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.205551147s of 100.784317017s, submitted: 90
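The _kv_sync_thread line is a utilization report: of a ~100.8 s window the kv sync thread was idle for ~100.2 s, so roughly 0.58 s (≈0.6%) of work covered the 90 submitted transaction batches, about 6 ms each, consistent with a near-idle OSD. Worked out:

    # Worked example: utilization from the _kv_sync_thread line above.
    idle, window, submitted = 100.205551147, 100.784317017, 90
    busy = window - idle
    print(f"busy {busy:.2f}s ({busy / window:.2%}), "
          f"~{busy / submitted * 1e3:.1f} ms per submitted batch")
    # -> busy 0.58s (0.57%), ~6.4 ms per submitted batch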
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:31.075468+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b83a94f00
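The challenge/reset pair traces one short-lived inbound connection: handle_auth_request added a challenge on connection 0x558b83a73000 a few lines earlier, and the same pointer is now torn down by ms_handle_reset. The con/session values are in-process heap addresses, so pairing them is the practical way to follow a single peer connection through a burst like this; at this debug level a reset typically just means the peer closed the session, and nothing in this slice flags it as an error. A pairing sketch over the same hypothetical osd.log capture:

    # Sketch: pair "added challenge on <ptr>" with the later
    # "ms_handle_reset con <ptr>" to follow one inbound connection through the
    # burst (osd.log is the same hypothetical capture as above).
    import re

    challenged, reset = set(), set()
    with open("osd.log") as f:
        for line in f:
            if m := re.search(r"added challenge on (0x[0-9a-f]+)", line):
                challenged.add(m.group(1))
            if m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line):
                reset.add(m.group(1))
    print("challenged then reset:", sorted(challenged & reset))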
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:32.075938+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:33.076166+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:34.076522+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
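Compared with the heartbeats earlier in the burst (available 0x4fa0b0000, data 0x1901a08/0x19ce000), this one shows available space up and object data down; the freed allocation matches the gain in available space exactly at 0x540000 bytes (5.25 MiB), consistent with objects having been deleted or rewritten between bursts and with the drop in cached data_used reported by the _resize_shards line a few lines above (9998336 down to 6328320). The check:

    # Check: freed allocation equals the gain in available space, exactly.
    freed = 0x19ce000 - 0x148e000       # data allocation, earlier - now
    gained = 0x4fa5f0000 - 0x4fa0b0000  # available bytes, now - earlier
    assert freed == gained == 0x540000
    print(f"released {freed / 2**20:.2f} MiB")  # -> released 5.25 MiB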
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:35.076943+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:36.077361+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:37.077772+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:38.078038+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:39.078409+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:40.078710+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:41.079179+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:42.079726+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:43.080126+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:44.080457+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:45.080821+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:46.081149+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:47.081671+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:48.082055+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:49.082439+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.514204025s of 18.552843094s, submitted: 8
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c47400 session 0x558b85fc23c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b843b6000 session 0x558b85bbdc20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c46c00 session 0x558b862aa3c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 33923072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:50.082766+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b8634be00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:51.082983+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:52.083356+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:53.083659+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:54.084075+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:55.084368+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:56.084742+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:57.084981+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:58.085341+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:59.085674+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:00.086031+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:01.086463+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:02.087082+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:03.087480+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:04.087835+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:05.088166+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:06.088725+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:07.089081+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:08.089446+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:09.089874+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:10.090286+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:11.090818+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:12.091444+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:13.091796+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:14.092291+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:15.092731+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:16.093052+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:17.093429+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:18.093800+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:19.094203+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:20.094690+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:21.095038+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:22.095484+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:23.095825+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:24.096223+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:25.096692+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:26.097105+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:27.097617+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:28.097977+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:29.098316+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:30.098801+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:31.099212+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:32.099604+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:33.099904+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:34.100217+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:35.100465+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:36.100709+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:37.101080+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:38.101454+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:39.102004+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:40.102991+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:41.103357+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:42.103636+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:43.104075+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:44.104420+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:45.104749+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:46.105092+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:47.105462+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:48.105816+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:49.106221+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:50.106447+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:51.106832+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:52.107253+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:53.107521+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:54.107962+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:55.108315+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:56.108769+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:57.109114+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:58.109643+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:59.109987+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:00.110345+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:01.110820+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:02.111264+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:03.111733+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:04.112092+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:05.112324+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:06.112795+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:07.113197+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:08.113433+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:09.113767+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:10.116948+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:11.117390+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:12.117791+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:13.118198+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:14.118640+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.285675049s of 85.578956604s, submitted: 49
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:15.119031+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49e8/0x98f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:16.119466+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 38486016 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 134 ms_handle_reset con 0x558b83c46c00 session 0x558b85fc3860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:17.119826+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92233728 unmapped: 38453248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:18.120256+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 38936576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86376960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:19.120739+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:20.121026+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:21.121426+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:22.121841+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:23.122190+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:24.122429+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:25.122797+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:26.123112+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:27.123634+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:28.123979+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:29.124309+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:30.124760+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:31.125160+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:32.125835+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:33.126196+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:34.126685+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:35.127028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:36.127407+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:37.127860+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:38.128184+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:39.128576+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:40.129052+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:41.129457+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:42.129891+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:43.130267+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:44.130675+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:45.131021+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:46.131684+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:47.131920+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:48.132623+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:49.133083+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:50.133385+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:51.133765+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:52.134172+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:53.134367+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:54.134732+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:55.135111+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:56.136084+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:57.136681+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:58.136994+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:59.137423+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:00.137876+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:01.138401+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:02.139094+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:03.139502+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:04.140001+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:05.140508+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b843b6000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b84927860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8311e3c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8625a960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:06.140971+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:07.141336+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85278000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:08.141794+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862841e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b843b6000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 53.474491119s of 53.753597260s, submitted: 32
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b86284f00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b852443c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:09.142211+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85244960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862fa780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b849b8f00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166167 data_alloc: 218103808 data_used: 4714496
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 34283520 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:10.142711+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86376d20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86377e00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8625a960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b84927860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:11.143116+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:12.143714+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:13.144057+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b85fc3c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:14.144588+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175957 data_alloc: 218103808 data_used: 4714496
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b85fc23c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:15.144842+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634be00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbdc20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b83a94f00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624b0e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b84a2cd20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b862841e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:16.145264+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86284f00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:17.145625+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9592000/0x0/0x4ffc00000, data 0x20081dd/0x20dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b86376000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96509952 unmapped: 34177024 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:18.146014+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec9c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec9c00 session 0x558b86376b40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 34193408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:19.146424+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b863763c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.407323837s of 10.792876244s, submitted: 47
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231863 data_alloc: 218103808 data_used: 4722688
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b862fbe00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b86261e00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b844090e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138000 session 0x558b844225a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b8625b4a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849b92c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:20.146852+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:21.147203+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x229020f/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:22.147710+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:23.147929+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634b860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 33857536 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:24.148153+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262169 data_alloc: 218103808 data_used: 8060928
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:25.148477+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:26.148651+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:27.149044+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b862faf00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b862fab40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:28.152940+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97951744 unmapped: 32735232 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:29.153175+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83ca2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b83ca43c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291687 data_alloc: 234881024 data_used: 12001280
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:30.153379+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 32292864 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:31.153563+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:32.153877+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:33.154164+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9306000/0x0/0x4ffc00000, data 0x2290242/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:34.154601+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317927 data_alloc: 234881024 data_used: 15679488
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:35.154956+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:36.155473+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:37.155664+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624a780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b862fb0e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 29679616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:38.155913+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.547910690s of 18.724098206s, submitted: 24
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b83dbb0e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 28319744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:39.156090+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318572 data_alloc: 234881024 data_used: 17334272
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:40.156378+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:41.156697+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:42.157076+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:43.157502+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634a5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:44.157818+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b867461e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:45.158010+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:46.158195+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:47.158390+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:48.158588+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:49.158796+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:50.159001+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:51.159204+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:52.159462+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:53.159676+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:54.159870+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:55.160085+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:56.160295+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:57.160501+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.113079071s of 19.223537445s, submitted: 17
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 21757952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:58.160664+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f85a5000/0x0/0x4ffc00000, data 0x2ff4200/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 21749760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:59.160823+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8546000/0x0/0x4ffc00000, data 0x3053200/0x3128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415883 data_alloc: 234881024 data_used: 16392192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:00.161245+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:01.161507+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:02.161973+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:03.162378+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:04.162903+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b867465a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b86746780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86746960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b86746b40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7c5f000/0x0/0x4ffc00000, data 0x393a200/0x3a0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522536 data_alloc: 234881024 data_used: 16547840
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b86746d20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b86747a40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b84410d20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:05.163240+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 20561920 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b84408b40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b83c53680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:06.163428+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:07.163736+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:08.164108+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:09.164511+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530322 data_alloc: 234881024 data_used: 16613376
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:10.164779+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:11.165186+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:12.165522+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b83c52960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.731391907s of 15.542462349s, submitted: 209
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:13.165844+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:14.166173+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530454 data_alloc: 234881024 data_used: 16613376
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:15.166859+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:16.167052+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:17.167465+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:18.168178+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:19.168643+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528914 data_alloc: 234881024 data_used: 16613376
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:20.169059+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:21.169498+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:22.169991+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:23.170286+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 19881984 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:24.170496+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 18202624 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:25.170716+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:26.170945+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:27.171171+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:28.171387+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:29.171673+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:30.171914+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.591274261s of 17.624994278s, submitted: 3
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:31.172193+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:32.172432+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:33.172678+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:34.172914+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:35.173305+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:36.173652+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:37.173975+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:38.174408+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:39.174834+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 16539648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:40.175224+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.202155113s of 10.243181229s, submitted: 7
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:41.175671+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:42.176117+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:43.176718+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:44.177020+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:45.177403+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567254 data_alloc: 234881024 data_used: 21929984
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:46.177789+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 16515072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86285680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b862852c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:47.177999+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849272c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b86377e00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624be00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:48.178227+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8521f680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:49.178445+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8ebc000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:50.178638+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336665 data_alloc: 234881024 data_used: 13844480
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:51.178968+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:52.179406+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:53.179669+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:54.180169+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:55.180618+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336445 data_alloc: 234881024 data_used: 13844480
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:56.181007+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.562313080s of 15.728222847s, submitted: 35
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 19046400 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:57.181166+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113491968 unmapped: 17195008 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:58.181438+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2af716b/0x2bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 16687104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:59.181629+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:00.181951+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:01.182282+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:02.182743+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:03.183153+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:04.183459+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:05.183752+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:06.184015+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:07.184322+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:08.184516+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:09.184846+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:10.185065+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386715 data_alloc: 234881024 data_used: 14835712
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.230500221s of 14.482069969s, submitted: 64
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:11.185417+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:12.185820+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:13.186042+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:14.186206+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:15.186490+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:16.186894+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:17.187245+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:18.187630+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:19.187861+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:20.188281+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:21.188690+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85e1c800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85e1c800 session 0x558b8624af00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b85bbc3c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:22.188932+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbcf00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.310770035s of 11.319570541s, submitted: 1
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b85bbda40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b85de6000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634b4a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b867461e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b867465a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:23.189140+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:24.189336+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b86746960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fa780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fb0e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624b680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:25.189510+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8624a3c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497051 data_alloc: 234881024 data_used: 14835712
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b8624bc20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624a000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18735104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b83ca3a40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:26.189796+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 18677760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:27.190009+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112025600 unmapped: 18661376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:28.190320+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 18612224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b83ca2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:29.190757+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:30.191079+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502491 data_alloc: 234881024 data_used: 15368192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:31.191669+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:32.192023+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:33.192483+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.251416206s of 11.426207542s, submitted: 28
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 18563072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2400 session 0x558b862abe00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:34.192725+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b861f5c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:35.193284+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504272 data_alloc: 234881024 data_used: 15368192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:36.193797+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:37.194290+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:38.194738+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:39.195079+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 18366464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:40.195305+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521392 data_alloc: 234881024 data_used: 17547264
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 15892480 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:41.195759+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 15409152 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:42.196487+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:43.197039+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:44.197258+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.764178276s of 10.793314934s, submitted: 4
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:45.197805+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1552128 data_alloc: 234881024 data_used: 21549056
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:46.198050+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:47.198427+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 13492224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:48.198687+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 12271616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:49.199050+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b861f5c00 session 0x558b8311e1e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b849774a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:50.199244+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86285c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:51.199468+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:52.199793+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f833a000/0x0/0x4ffc00000, data 0x326017b/0x3333000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:53.200055+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:54.200430+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:55.200762+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:56.201000+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b849b9860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.645172119s of 11.731092453s, submitted: 27
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de3a40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 18604032 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:57.201355+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85fc25a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:58.201707+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:59.201941+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:00.202301+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:01.202588+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:02.203032+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:03.203332+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:04.203685+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:05.204058+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:06.204361+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:07.204780+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:08.205078+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:09.207215+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:10.207663+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:11.207933+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:12.208398+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.237525940s of 16.362010956s, submitted: 31
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8eab000/0x0/0x4ffc00000, data 0x26f1158/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,2])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 11460608 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:13.208874+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:14.209093+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 11444224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f850d000/0x0/0x4ffc00000, data 0x3081158/0x3153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:15.209434+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 11288576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484722 data_alloc: 234881024 data_used: 17747968
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:16.209627+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:17.210040+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:18.210299+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:19.210717+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:20.210937+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b84974000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea3400 session 0x558b863774a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478306 data_alloc: 234881024 data_used: 17756160
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:21.211256+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114073600 unmapped: 16613376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f845f000/0x0/0x4ffc00000, data 0x313d158/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b8521e960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:22.211742+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 33947648 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.629374504s of 10.309167862s, submitted: 151
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:23.211943+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b83d11680
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88138400 session 0x558b84410d20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b84927e00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8634a780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b84926b40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b85244960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b852785a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:24.212131+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b83c52780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8624bc20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624a000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624b4a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8c1c000/0x0/0x4ffc00000, data 0x297b13e/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:25.212335+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 33832960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377459 data_alloc: 234881024 data_used: 11022336
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:26.212709+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 33841152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139800 session 0x558b849274a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b862fa960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b8624af00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:27.213133+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85a8bc00 session 0x558b849774a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:28.213457+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:29.213785+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98b9/0x20ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:30.214105+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253805 data_alloc: 218103808 data_used: 4730880
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:31.214437+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b85de61e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:32.214791+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:33.215032+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:34.215384+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:35.215751+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269378 data_alloc: 218103808 data_used: 6901760
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b86284d20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:36.216144+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:37.216415+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.745358467s of 14.380681038s, submitted: 113
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b83d4a1e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea3400 session 0x558b84a2cf00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2400 session 0x558b84927e00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b844101e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b84408000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:38.216753+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:39.217185+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:40.217641+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294828 data_alloc: 218103808 data_used: 8527872
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:41.217931+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3a1/0x2135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:42.218197+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b8521ed20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec6c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:43.218664+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:44.219010+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:45.219265+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:46.219569+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:47.219813+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:48.220089+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:49.220369+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:50.220676+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:51.221007+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:52.221342+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:53.221774+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:54.222146+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:55.222357+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:56.222605+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:57.222939+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:58.223343+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:59.223791+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:00.224124+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:01.224506+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:02.224977+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:03.225369+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:04.225631+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:05.225968+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:06.226337+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:07.226735+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:08.227093+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.471420288s of 31.809257507s, submitted: 56
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:09.227460+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 32382976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fdd000/0x0/0x4ffc00000, data 0x25b83c4/0x2691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:10.227682+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 32808960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359190 data_alloc: 234881024 data_used: 9846784
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:11.227905+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 32759808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:12.228255+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:13.228984+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:14.229433+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:15.229807+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:16.230185+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369410 data_alloc: 234881024 data_used: 9990144
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:17.230670+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:18.231065+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 32505856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:19.231883+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:20.232266+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b19000/0x0/0x4ffc00000, data 0x2a7c3c4/0x2b55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.458605766s of 11.824682236s, submitted: 72
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:21.232797+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404216 data_alloc: 234881024 data_used: 10186752
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:22.233352+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:23.233708+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:24.233949+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:25.234312+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 8914 writes, 35K keys, 8914 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8914 writes, 2261 syncs, 3.94 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1912 writes, 7094 keys, 1912 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s
                                            Interval WAL: 1912 writes, 777 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:26.234728+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:27.235016+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:28.235308+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:29.235695+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:30.236176+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:31.236469+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:32.236936+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 32964608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:33.237336+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: mgrc ms_handle_reset ms_handle_reset con 0x558b85ea3000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec 03 02:34:01 compute-0 ceph-osd[208731]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: get_auth_request con 0x558b85ea2400 auth_method 0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: mgrc handle_mgr_configure stats_period=5
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:34.237508+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:35.237690+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:36.237917+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:37.238843+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:38.239208+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:39.240106+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.394577026s of 18.451017380s, submitted: 14
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:40.240941+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:41.241449+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411668 data_alloc: 234881024 data_used: 10194944
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:42.241903+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:43.242351+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:44.242812+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:45.243128+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:46.243463+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411304 data_alloc: 234881024 data_used: 10215424
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b85de74a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:47.244140+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b862852c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:48.244698+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:49.245007+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:50.245517+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:51.245990+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:52.246404+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:53.246853+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:54.247190+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:55.247638+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:56.248047+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:57.248410+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:58.248778+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:59.249061+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:00.249424+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:01.249848+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:02.250255+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:03.250701+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:04.251057+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:05.251443+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:06.251793+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:07.252074+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:08.252929+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:09.253282+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:10.253683+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:11.253982+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:12.254250+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:13.254639+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:14.254949+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:15.255276+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:16.255707+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:17.256009+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:18.256434+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:19.256748+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:20.257388+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:21.257740+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:22.257990+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:23.258502+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:24.258931+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:25.259364+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:26.259772+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:27.260163+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:28.260887+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:29.261280+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:30.261706+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:31.262124+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:32.262924+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:33.263348+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:34.263744+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:35.264028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:36.264426+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:37.264857+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:38.265204+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:39.265609+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:40.266016+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:41.266319+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:42.266620+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b8634af00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:43.266862+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:44.267148+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:45.267658+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:46.268049+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:47.268299+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:48.268928+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:49.269522+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 69.777755737s of 69.905036926s, submitted: 21
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:50.269787+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 33890304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:51.270194+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 33865728 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:52.270495+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113647616 unmapped: 33824768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:53.270828+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 33783808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:54.271186+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:55.271906+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:56.272309+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:57.272777+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:58.273098+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:59.273507+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:00.273887+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:01.274308+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:02.274630+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:03.275015+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:04.275294+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:05.275742+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:06.276128+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:07.276509+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:08.276855+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:09.277175+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:10.277453+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:11.277866+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:12.278172+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:13.278517+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:14.279018+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:15.279483+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:16.279827+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:17.280053+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:18.280481+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:19.280894+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:20.281204+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:21.281750+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:22.282170+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:23.282473+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:24.282903+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:25.283263+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:26.283642+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:27.283990+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:28.284308+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:29.284683+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:30.284987+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:31.285225+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:32.285651+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:33.286025+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:34.286244+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:35.286642+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:36.287011+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:37.287375+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:38.287729+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:39.288063+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:40.288413+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:41.288746+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:42.289131+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 33742848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:43.289466+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86746780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b86284b40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b849774a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 33734656 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86747a40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.146961212s of 54.832401276s, submitted: 108
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:44.289777+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b84a2cf00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b83df8b40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85fc2b40
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b86285860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b8521e960
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:45.290166+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:46.290662+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:47.291043+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:48.291597+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:49.292035+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:50.292304+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b83d4a1e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:51.292743+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b85278780
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:52.293009+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85279e00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b84409860
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:53.293510+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b879ce000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:54.293813+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:55.294059+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:56.294312+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 33095680 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:57.294591+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446880 data_alloc: 234881024 data_used: 15056896
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 29958144 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:58.294740+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:59.295096+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:00.295321+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:01.295641+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:02.295882+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:03.296136+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:04.296341+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:05.296574+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:06.296999+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:07.297252+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:08.297687+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:09.298061+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:10.298400+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:11.298827+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:12.299259+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:13.299713+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 29736960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:14.300097+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:15.300305+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:16.300787+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:17.301195+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:18.301514+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:19.301904+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:20.302307+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:21.302520+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:22.302946+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:23.303155+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:24.303500+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:25.303867+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:26.304290+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:27.304723+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:28.304998+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.778945923s of 44.865009308s, submitted: 6
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 26681344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:29.305445+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8588000/0x0/0x4ffc00000, data 0x300e3a1/0x30e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:30.305675+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:31.305922+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:32.306344+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:33.306725+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:34.307201+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:35.307666+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:36.308048+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 28499968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:37.308396+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:38.308751+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:39.309152+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:40.309521+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:41.309936+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:42.310441+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:43.310706+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:44.311058+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:45.311386+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:46.311728+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:47.312119+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:48.312462+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:49.312830+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:50.313069+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:51.313250+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:52.313752+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:53.314038+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:54.314402+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:55.314641+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:56.314894+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:57.315238+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:58.315521+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:59.315935+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:00.316136+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:01.316409+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:02.316790+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:03.317164+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:04.317607+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:05.317887+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:06.318146+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:07.318687+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:08.318946+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:09.319178+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:10.319621+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:11.319885+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:12.320264+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:13.320597+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:14.320905+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:15.321097+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:16.321405+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:17.321639+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:18.321947+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:19.322346+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:20.322729+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:21.323074+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:22.323457+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3095444552' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:23.323769+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:24.324015+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:25.324355+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:26.324844+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:27.325175+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:28.325504+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:29.325868+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:30.326250+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:31.326612+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:32.327040+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:33.327409+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:34.327654+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:35.328092+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:36.328593+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:37.329258+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:38.329644+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:39.329955+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:40.330361+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:41.330848+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:42.331364+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:43.331679+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:44.332058+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:45.332405+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:46.332842+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:47.333231+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:48.333515+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:49.333913+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:50.334256+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:51.334645+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:52.334836+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:53.335045+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:54.335183+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:55.335369+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:56.335596+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:57.335963+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:58.336152+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:59.336423+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:00.336776+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:01.337813+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:02.338276+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:03.339055+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:04.339617+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:05.340112+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:06.340715+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:07.341397+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:08.341874+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:09.342125+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:10.342622+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:11.343145+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:12.343675+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:13.344657+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:14.344838+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:15.346793+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:16.347430+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:17.348618+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:18.349010+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:19.349737+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:20.349910+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:21.350309+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 112.571006775s of 112.683082581s, submitted: 22
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:22.350574+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:23.350996+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:24.351306+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:25.351598+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:26.352018+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:27.352381+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:28.352842+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:29.353656+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:30.353982+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:31.354698+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:32.354997+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:33.355613+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:34.355887+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:35.357140+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:36.357456+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:37.358699+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:38.359036+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:39.360057+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:40.360423+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:41.361577+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:42.361834+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:43.363870+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:44.366451+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 28418048 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:45.366738+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:46.367122+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:47.367666+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:48.367962+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:49.368189+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:50.368459+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:51.369962+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:52.370280+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:53.370515+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:54.370761+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:55.371807+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:56.372147+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:57.373144+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.938446045s of 35.963088989s, submitted: 3
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:58.373430+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:59.374265+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:00.374712+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:01.375627+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:02.376112+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:03.376614+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:04.376857+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:05.377455+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:06.377775+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:07.378829+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:08.379215+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:09.380198+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:10.381118+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:11.382891+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:12.383293+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:13.384254+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:14.384717+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:15.385000+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:16.385308+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:17.387169+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:18.387641+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:19.387839+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:20.388060+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:21.388278+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:22.388685+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:23.388975+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:24.389347+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:25.389745+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:26.390044+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:27.390416+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:28.390614+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:29.391746+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:30.392009+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:31.393268+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:32.393558+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:33.393949+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:34.394115+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:35.397240+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:36.398120+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:37.398603+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:38.398956+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:39.399375+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:40.399585+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:41.400586+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:42.400931+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:43.401295+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:44.401716+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:45.402688+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:46.403102+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:47.403370+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:48.403670+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:49.404320+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:50.404741+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:51.405857+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:52.406251+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:53.406853+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:54.407224+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:55.409396+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:56.409750+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:57.410036+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:58.410325+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:59.410741+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:00.410958+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:01.411173+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:02.411591+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:03.411836+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:04.413245+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:05.413645+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:06.413976+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:07.414190+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:08.414684+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:09.415610+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:10.416508+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:11.416926+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:12.417393+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:13.417680+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:14.418005+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:15.419129+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:16.419622+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:17.420260+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:18.420680+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:19.420912+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:20.421277+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:21.422725+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:22.423127+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:23.423348+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:24.423585+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:25.424403+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:26.424727+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:27.425863+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:28.426111+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:29.427218+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:30.427581+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:31.427878+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:32.428294+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:33.428741+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:34.429126+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:35.430288+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:36.430709+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:37.431732+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:38.431964+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:39.432476+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:40.432716+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:41.433406+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:42.433657+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:43.433919+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:44.434297+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:45.434451+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:46.437358+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:47.437575+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:48.437850+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:49.438412+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:50.438630+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:51.438790+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:52.439201+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:53.439393+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:54.439614+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:55.439997+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:56.440263+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:57.441244+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:58.441441+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:59.441946+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:00.442298+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:01.442746+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:02.443224+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:03.443672+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:04.443968+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:05.444366+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:06.444673+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:07.445024+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:08.445371+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:09.445639+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:10.445890+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:11.446202+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:12.446664+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:13.446910+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:14.447380+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:15.447624+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:16.448173+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:17.448435+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:18.448698+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:19.449035+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:20.449522+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:21.449942+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:22.450278+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:23.450745+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:24.451063+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:25.451343+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:26.451820+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:27.452022+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:28.452248+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:29.452752+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:30.453081+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:31.453301+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:32.453655+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:33.453989+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:34.454347+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:35.454814+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:36.455220+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:37.455682+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:38.456045+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:39.456479+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:40.456841+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:41.457270+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:42.457788+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:43.458257+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:44.458663+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:45.458994+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:46.459352+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:47.459787+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:48.460089+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:49.460420+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:50.460798+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:51.461164+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:52.461655+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:53.461999+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:54.462219+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:55.462619+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:56.462941+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:57.463171+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:58.463504+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 28311552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:59.463706+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:00.464000+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:01.464205+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:02.464646+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:03.464983+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:04.465261+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:05.465756+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:06.466014+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 189.647811890s of 189.655746460s, submitted: 1
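
[annotation] The one-off _kv_sync_thread line above quantifies how idle this OSD is: one transaction submitted in ~190 seconds. The duty cycle from those two figures:

```python
# Duty cycle of the BlueStore kv sync thread, from the utilization line above.
idle, total, submitted = 189.647811890, 189.655746460, 1
busy = total - idle
print(f"busy {busy*1000:.2f} ms over {total:.0f} s "
      f"({busy/total:.6%}) for {submitted} submitted txn")   # ~7.93 ms, ~0.004184%
```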
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:07.466413+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:08.466821+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:09.467180+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:10.467756+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
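
[annotation] The osd_stat payload changed here for the first time in this window (available 0x4f857b000 to 0x4f8579000, data 0x301a3a1/0x30f2000 to 0x301b3a1/0x30f3000), which lines up with the single transaction the kv sync thread reported. The byte deltas, using the field reading assumed earlier:

```python
# Byte deltas between the two statfs snapshots seen in this window.
before = {"avail": 0x4f857b000, "stored": 0x301a3a1, "alloc": 0x30f2000}
after  = {"avail": 0x4f8579000, "stored": 0x301b3a1, "alloc": 0x30f3000}
for k in before:
    print(k, after[k] - before[k])   # avail -8192, stored +4096, alloc +4096
```

One 4 KiB write landed; available space dropped by 8 KiB, plausibly the extra 4 KiB going to metadata or allocator overhead, though the log does not say.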
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:11.468163+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:12.468652+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:13.468921+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:14.469353+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:15.469800+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:16.470138+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:17.470479+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:18.470892+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:19.471266+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:20.471768+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:21.472186+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:22.472670+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:23.473100+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:24.473485+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:25.473909+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:26.474298+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:27.474809+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:28.475226+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:29.475661+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:30.476038+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:31.476460+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:32.477000+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:33.477367+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:34.477803+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:35.478161+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:36.478871+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:37.479234+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:38.479705+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:39.480118+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:40.480703+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:41.481069+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:42.481520+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:43.481891+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:44.482356+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:45.482734+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:46.482982+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:47.483387+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:48.483715+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:49.484093+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:50.484478+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:51.484885+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:52.485360+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:53.485814+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:54.486212+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:55.486618+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:56.486935+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:57.487320+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:58.487579+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:59.487993+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:00.488319+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
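
[annotation] Across this whole window the tune_memory "mapped" figure creeps from 119144448 to 119218176 in exact 8 KiB steps while "unmapped" falls by the same amount and the heap total never moves, i.e. tcmalloc is re-committing previously released pages rather than growing the heap. Verifying that from the ten distinct samples above:

```python
# mapped/unmapped pairs as they appear in this window, oldest first.
samples = [(119144448, 28327936), (119152640, 28319744),
           (119160832, 28311552), (119169024, 28303360),
           (119177216, 28295168), (119185408, 28286976),
           (119193600, 28278784), (119201792, 28270592),
           (119209984, 28262400), (119218176, 28254208)]
assert all(m + u == 147472384 for m, u in samples)           # heap constant
steps = {b[0] - a[0] for a, b in zip(samples, samples[1:])}
print(steps)   # {8192}: every step is exactly two 4 KiB pages
```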
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:01.488622+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:02.488923+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:03.489201+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:04.489518+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:05.490005+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:06.490318+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:07.490789+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:08.491215+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:09.491460+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:10.491775+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:11.492020+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:12.492398+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:13.492766+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:14.493083+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:15.493418+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:16.493761+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:17.494071+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:18.494410+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:19.494853+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:20.495214+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:21.495729+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:22.496091+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:23.496996+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:24.497509+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:25.498083+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9225 writes, 35K keys, 9225 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9225 writes, 2410 syncs, 3.83 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 311 writes, 768 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                            Interval WAL: 311 writes, 149 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
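
The stats dump is the only place in this stretch where actual write traffic appears: ~9.2 K WAL writes over the 4200 s uptime, zero stall time, and a light last interval (311 writes in 600 s). The dump's own derived per-sync figures check out:

    # Re-deriving the summary figures quoted in the DB Stats dump above.
    cum_writes, cum_syncs = 9225, 2410
    int_writes, int_syncs = 311, 149
    uptime, interval = 4200.1, 600.0
    print(round(cum_writes / cum_syncs, 2))  # 3.83 writes per sync (matches)
    print(round(int_writes / int_syncs, 2))  # 2.09 writes per sync (matches)
    print(round(cum_writes / uptime, 2))     # ~2.2 writes/s over the uptime
    print(round(int_writes / interval, 2))   # ~0.52 writes/s in the interval
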
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:26.498449+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:27.498871+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:28.499175+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:29.499518+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:30.499948+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:31.500313+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:32.500762+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:33.501070+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:34.501407+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:35.501756+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:36.502742+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:37.503032+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:38.503232+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:39.503676+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:40.504016+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:41.504387+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:42.504976+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:43.505405+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:44.505910+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:45.506401+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:46.506668+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:47.507035+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:48.507448+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:49.507764+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:50.508137+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:51.508480+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:52.509095+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:53.509457+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
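
One _send_mon_message breaks the tick pattern: the OSD reports to mon.compute-0 over msgr2 (the v2: prefix and the default monitor port 3300; the trailing /0 is the connection nonce). Pulling the address apart, with the parsing being mine rather than Ceph's entity_addr_t handling:

    # Splitting a v2 entity address into protocol, host, port and nonce.
    addr = "v2:192.168.122.100:3300/0"
    proto, host, rest = addr.split(":")
    port, nonce = rest.split("/")
    print(proto, host, int(port), int(nonce))   # v2 192.168.122.100 3300 0
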
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:54.509803+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:55.510168+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:56.510498+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:57.510891+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:58.511279+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:59.511772+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:00.512060+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:01.512360+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:02.512851+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:03.513149+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:04.513486+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:05.513871+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:06.514246+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:07.514727+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:08.515118+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:09.515428+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:10.515777+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:11.516087+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:12.516701+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:13.517096+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:14.517455+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:15.517745+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:16.518013+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:17.518347+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:18.518857+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:19.519177+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:20.519507+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:21.519825+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 28196864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:22.520161+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:23.520479+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:24.520885+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:25.521204+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:26.521660+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:27.522028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:28.522411+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:29.522748+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:30.523029+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:31.523511+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:32.524128+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:33.524485+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:34.524766+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:35.525089+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:36.525516+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:37.525920+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:38.526247+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:39.526638+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:40.526921+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:41.527270+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:42.527622+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:43.528035+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:44.528388+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:45.528704+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:46.529046+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:47.529765+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:48.530153+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:49.531152+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:50.531465+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 163.277404785s of 163.285751343s, submitted: 1
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119308288 unmapped: 28164096 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:51.531810+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 28147712 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:52.532231+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 28123136 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:53.532825+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 28057600 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:54.533140+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:55.533503+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:56.534005+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:57.534292+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:58.534773+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:59.535185+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:00.535492+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:01.535966+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:02.536268+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:03.536607+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:04.536929+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:05.537331+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:06.537782+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:07.538123+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:08.538454+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:09.538776+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:10.539164+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:11.539675+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:12.540174+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:13.540520+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:14.541223+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:15.541727+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:16.542118+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:17.542619+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:18.542910+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:19.543323+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:20.543804+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:21.544225+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:22.544682+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:23.544927+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:24.545246+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:25.545672+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:26.546043+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:27.546664+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:28.547075+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:29.547840+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:30.548318+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:31.548724+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:32.549130+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:33.549336+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:34.549810+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:35.550279+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:36.550718+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:37.551183+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:38.551833+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:39.552238+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:40.552446+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.532619476s of 50.152313232s, submitted: 90
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139c00 session 0x558b83ca5c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ec6c00 session 0x558b849b8000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:41.552785+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 28000256 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501242 data_alloc: 234881024 data_used: 18178048
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86260f00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:42.553180+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:43.553996+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:44.554417+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:45.554794+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:46.555150+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:47.555781+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:48.556153+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:49.556521+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:50.556981+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86284000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b879ce000 session 0x558b862aa5a0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:51.557217+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970065117s of 11.374399185s, submitted: 57
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:52.557670+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b844112c0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:53.558045+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:54.558725+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:55.559114+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:56.559677+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:57.560104+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:58.560656+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:59.561064+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:00.561410+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:01.561969+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:02.562433+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.220892906s of 11.268563271s, submitted: 8
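The _kv_sync_thread utilization line gives idle time over the sampling window; the busy fraction is the useful number:

idle, window, submitted = 11.220892906, 11.268563271, 8
busy = window - idle
print(f"busy {busy:.3f}s of {window:.3f}s ({100 * busy / window:.2f}%), "
      f"{submitted} txns submitted")
# ~0.42% busy: the kv sync thread is nearly idle, matching the
# once-per-second housekeeping chatter around it.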
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:03.562820+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
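After renewing its monitor subscriptions (_renew_subs followed by _send_mon_message), the OSD is handed new osdmaps. The bracketed line reads: this message carries epochs 139..139, I currently have 138, and the sender holds the full range 1..139. A simplified sketch of the catch-up rule (real OSDs also fetch gaps and distinguish full from incremental maps):

def apply_osd_maps(have: int, first: int, last: int) -> int:
    for epoch in range(first, last + 1):
        if epoch == have + 1:      # next consecutive epoch: apply it
            have = epoch
        elif epoch <= have:        # already have it: skip
            continue
        else:                      # gap: would need to request older maps
            break
    return have

print(apply_osd_maps(138, 139, 139))  # -> 139, matching the lines that follow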
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 34070528 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 139 ms_handle_reset con 0x558b83c47000 session 0x558b863761e0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x153eeba/0x1614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:04.563270+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec6c00
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 41213952 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:05.564153+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 140 ms_handle_reset con 0x558b85ec6c00 session 0x558b86285c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 41205760 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b879ce000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:06.564659+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 41156608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b879ce000 session 0x558b85fc3c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:07.565041+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:08.565470+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:09.565840+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:10.566200+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:11.566975+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:12.567300+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:13.567692+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:14.567899+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:15.568204+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.120633125s of 12.398234367s, submitted: 53
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b88139000 session 0x558b83ca3c20
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:16.568640+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:17.568957+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:18.569396+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:19.569770+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:20.570125+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:21.570515+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:22.571042+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:23.571410+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:24.571771+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:25.572251+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:26.572777+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:27.573220+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:28.573507+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:29.574028+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:30.574366+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:31.575031+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:32.575674+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:33.576143+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:34.576626+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:35.577058+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:36.577460+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:37.577850+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:38.578102+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:39.578480+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:40.578638+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:41.579018+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:42.579501+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:43.579839+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:44.580099+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:45.580426+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:46.580827+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:47.581222+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:48.581789+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:49.582205+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:50.582716+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:51.583173+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:52.583689+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:53.584223+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:54.584612+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:55.585066+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:56.585429+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:57.585792+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:58.586383+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:59.586725+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:00.587041+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:01.587378+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:02.587804+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:03.588191+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:04.588645+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:05.589026+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:06.589396+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:07.589704+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:08.590103+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:09.590621+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:10.591058+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:11.591454+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:12.591903+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:13.592103+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:14.592272+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:15.593010+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:16.593393+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:17.593767+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:18.594107+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:19.594448+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:20.594764+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:21.595151+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:22.595679+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:23.596006+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:24.596188+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:25.596507+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:26.596786+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:27.596971+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 40976384 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}'
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:28.597331+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:29.597569+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 41009152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:34:01 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:30.597814+0000)
Dec 03 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:34:01 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}'
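The do_command pairs above ('config diff', 'config show', 'counter dump', 'counter schema', 'log dump') are requests arriving on the OSD's admin socket, each logged once on receipt and once with the size of the result. An operator can issue the same queries by hand, for example `ceph daemon osd.2 config show` or `ceph daemon osd.2 counter dump`, or by pointing `ceph --admin-daemon` at the daemon's .asok file (under cephadm the socket typically lives in /var/run/ceph/<fsid>/ on the host). The burst of several queries in quick succession looks like an automated metrics or support-data collector rather than interactive use, though the log does not identify the caller.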
Dec 03 02:34:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:01 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15609 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:01 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:34:01 compute-0 nova_compute[351485]: 2025-12-03 02:34:01.904 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 03 02:34:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739375453' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2704523019' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mon[192821]: from='client.15605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3095444552' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mon[192821]: pgmap v2398: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:02 compute-0 ceph-mon[192821]: from='client.15609 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1739375453' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15613 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 03 02:34:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101401782' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15617 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 03 02:34:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745656350' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:34:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15621 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mon[192821]: from='client.15613 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1101401782' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mon[192821]: from='client.15617 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/745656350' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mon[192821]: from='client.15621 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 03 02:34:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405273975' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1405273975' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 03 02:34:04 compute-0 ceph-mon[192821]: from='client.15625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:04 compute-0 ceph-mon[192821]: pgmap v2399: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:04 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:04 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:34:04.296+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 03 02:34:04 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
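Here the mgr rejects `healthcheck history ls` with error 95 (Operation not supported) because that command is implemented by the prometheus mgr module, which is not loaded; the message itself carries the remedy, `ceph mgr module enable prometheus`. The event appears twice because journald captures it once under the cephadm container unit (ceph-<fsid>-mgr-compute-0-rysove) and once under the plain ceph-mgr identifier.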
Dec 03 02:34:04 compute-0 nova_compute[351485]: 2025-12-03 02:34:04.315 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 03 02:34:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247554239' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 03 02:34:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 03 02:34:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630472333' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 03 02:34:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 03 02:34:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1699661219' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mon[192821]: from='client.15631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2247554239' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1630472333' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1699661219' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 03 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50831080' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 03 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460830282' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 03 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331314447' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 03 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 03 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3311555939' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 03 02:34:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 03 02:34:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914881831' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 03 02:34:06 compute-0 crontab[475496]: (root) LIST (root)
Dec 03 02:34:06 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/50831080' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:06 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2460830282' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 03 02:34:06 compute-0 ceph-mon[192821]: pgmap v2400: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:06 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3331314447' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 03 02:34:06 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3311555939' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 03 02:34:06 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/914881831' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 03 02:34:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 03 02:34:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908467597' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
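The volley of mon_command/dispatch lines from 02:34:02 through 02:34:06 looks like a client (or script) on 192.168.122.100 walking through a cluster inventory: `ceph orch ps`, `ceph orch ls --export`, `ceph mgr module ls`, `ceph balancer status`, `ceph osd crush dump`, and so on. Each command apparently shows up twice because it is audited at dispatch and then relayed again when the mon folds the audit channel into the cluster log. The mon's own _set_new_cache_sizes line at 02:34:03 is its periodic RocksDB cache re-split: a 1020054731-byte (~973 MiB) budget divided into 348127232-byte (332 MiB) inc and full caches plus a 318767104-byte (304 MiB) kv allocation.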
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:12.761256+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:13.761704+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:14.762129+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:15.762498+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:16.762826+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:17.763094+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:18.763352+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:19.763702+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:20.763957+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:21.764378+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:22.764801+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:23.765234+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:24.765665+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:25.766118+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:26.766389+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:27.766660+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:28.767081+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:29.767456+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:30.767855+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:31.768211+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:32.768628+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:33.769108+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:34.769317+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:35.769692+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:36.769890+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:37.770137+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:38.771317+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:39.772654+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:40.774250+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:41.775600+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:42.777157+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:43.778817+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:44.780715+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:45.782729+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:46.784753+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:47.786450+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:48.788231+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:49.789442+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:50.790366+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:51.791812+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:52.793005+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:53.794848+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e73000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a7eb7e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858dc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a858dc00 session 0x55f0a7eb74a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6b800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6b800 session 0x55f0a562fe00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75b7c20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.961769104s of 51.003173828s, submitted: 11
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a75b70e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:54.796000+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e73000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a8020b40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858dc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a858dc00 session 0x55f0a6b8fa40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3b800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3b800 session 0x55f0a57e3680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 10035200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:55.828247+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a57e23c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a8545680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e73000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a4d7d4a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 10002432 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a8593800 session 0x55f0a85710e0
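The handle_auth_request / ms_handle_reset pairs here are short-lived inbound connections to osd.1: each new session receives a cephx challenge, exchanges whatever it came for, and is reset. At this debug level that is routine connection turnover, consistent with the mgr/CLI activity above rather than an authentication problem, and the rotating secrets are reported up to date throughout. If the resets looked suspicious, `ceph daemon osd.1 status` against the admin socket would be one quick way to confirm the daemon itself is healthy.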
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:56.828673+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 9994240 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:57.828932+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520933 data_alloc: 234881024 data_used: 20029440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 9994240 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7c20000/0x0/0x4ffc00000, data 0x3988be6/0x3a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:58.829154+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 9977856 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:59.829367+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3fc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 9969664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3fc00 session 0x55f0a96b8960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:00.829679+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3a800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 10469376 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be4000/0x0/0x4ffc00000, data 0x39c4be6/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:01.830066+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 10461184 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:02.830474+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529030 data_alloc: 234881024 data_used: 20213760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 10067968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be4000/0x0/0x4ffc00000, data 0x39c4be6/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:03.830842+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 9052160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:04.831237+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:05.831670+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:06.832042+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.774980545s of 13.003511429s, submitted: 38
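The two _kv_sync_thread utilization reports frame how idle this OSD is: 50.96 s idle out of 51.00 s (~99.9%) across 11 submitted transactions earlier, and 12.77 s of 13.00 s (~98.2%) across 38 here. The slight uptick matches the small write burst visible in the heartbeat statfs, where the data field grows from 0x312bb74 to 0x39c7be6, roughly 8.6 MiB written.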
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:07.832446+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:08.832815+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:09.833186+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:10.833455+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:11.833883+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:12.834163+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:13.834701+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 7340032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:14.835112+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 7340032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:15.835489+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 7340032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:16.835666+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 7307264 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:17.835986+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
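Note: _resize_shards shows how the 2845415832-byte budget chosen by the tuner is carved into BlueStore's sub-caches: kv (the RocksDB block cache), kv_onode, meta, and data. The *_used figures are a few KiB to a few MiB against allocations of hundreds of MiB, i.e. this OSD is essentially idle. The split as fractions of the budget (field meanings are my reading of this line, not a documented interface):

    cache_size = 2845415832            # total budget from the tuner
    alloc = {                          # *_alloc fields from the line above
        "kv": 1207959552,
        "kv_onode": 234881024,
        "meta": 1140850688,
        "data": 234881024,
    }
    for name, n in alloc.items():
        print(f"{name:9s} {n / 2**20:6.0f} MiB  {n / cache_size:5.1%}")
    # kv        1152 MiB  42.5%
    # kv_onode   224 MiB   8.3%
    # meta      1088 MiB  40.1%
    # data       224 MiB   8.3%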
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 7299072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:18.836233+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 7299072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:19.836512+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 7290880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:20.836872+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 7290880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:21.837210+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 7290880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:22.837593+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
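Note: the paired commit_cache_size messages repeat on every tuning cycle with the same two values, so the high-priority pool split being committed (apparently to the RocksDB block cache) never actually changes in this window. The rounded ratios are simple fractions, which a one-liner recovers:

    from fractions import Fraction

    for r in (0.285714, 0.0555556):
        # limit_denominator recovers the simple fraction behind the rounded value
        print(r, "≈", Fraction(r).limit_denominator(100))
    # 0.285714 ≈ 2/7
    # 0.0555556 ≈ 1/18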
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:23.838027+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:24.838290+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:25.838678+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:26.839020+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:27.839441+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:28.839907+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:29.840345+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:30.840858+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:31.841335+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:32.841801+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:33.842351+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:34.842584+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:35.842812+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 7241728 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:36.845642+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 7241728 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.193262100s of 30.209300995s, submitted: 3
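Note: this _kv_sync_thread utilization line is the one direct workload measurement in this stretch: over a ~30 s window the BlueStore key-value sync thread was idle for all but ~16 ms and committed just 3 transactions, matching the near-zero *_used cache figures above. The arithmetic:

    idle, total, submitted = 30.193262100, 30.209300995, 3
    busy = total - idle
    print(f"busy {busy * 1000:.1f} ms of {total:.2f} s "
          f"({busy / total:.3%} utilization), {submitted} txns submitted")
    # busy 16.0 ms of 30.21 s (0.053% utilization), 3 txns submitted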
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:37.845900+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575144 data_alloc: 234881024 data_used: 23506944
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 4890624 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:38.846243+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76ec000/0x0/0x4ffc00000, data 0x3ebcbe6/0x3f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:39.846925+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:40.847238+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:41.847630+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:42.847963+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:43.848318+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:44.848706+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:45.849329+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:46.849780+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:47.850260+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:48.850676+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:49.850948+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:50.851134+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:51.851501+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:52.851760+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:53.852179+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:54.852617+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:55.853028+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:56.853454+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:57.853696+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:58.853914+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:59.854272+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:00.854715+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 5242880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:01.855116+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:02.855684+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:03.856244+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:04.856701+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:05.856960+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:06.857163+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:07.857425+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:08.857855+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:09.858283+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:10.858682+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:11.859141+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:12.859616+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:13.860064+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:14.860377+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:15.860784+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 6021120 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:16.861019+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 6021120 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:17.861861+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:18.862245+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:19.862704+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:20.862966+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:21.863360+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:22.863784+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:23.864403+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:24.864727+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:25.865108+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:26.865582+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:27.865819+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:28.866079+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:29.866481+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:30.866761+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:31.867108+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:32.867609+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:33.868057+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:34.868444+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:35.868802+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:36.869189+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:37.869647+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:38.869914+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:39.870393+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:40.870803+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:41.871215+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:42.871666+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:43.872096+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:44.872483+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:45.872810+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:46.873118+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:47.873431+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:48.873895+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:49.874268+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:50.874672+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:51.875013+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:52.875289+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:53.875727+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:54.875964+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:55.876309+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:56.876586+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:57.876737+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:58.877009+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:59.877302+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:00.877593+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:01.878056+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:02.878311+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:03.878758+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:04.879152+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:05.879750+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:06.880148+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:07.880503+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:08.880894+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:09.881256+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:10.881830+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:11.882230+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:12.882773+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:13.883299+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:14.883767+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:15.884215+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:16.884866+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:17.885123+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets getting new tickets!
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:18.885729+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _finish_auth 0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:18.887677+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:19.886020+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:20.886460+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:21.886804+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:22.887093+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:23.887701+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:24.887926+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4d41400 session 0x55f0a58863c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7c73c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:25.888227+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3ec00 session 0x55f0a75c0960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d41c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:26.888657+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3f400 session 0x55f0a4d7de00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab1400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:27.889013+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:28.889346+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:29.889842+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:30.890121+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:31.890665+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:32.891022+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:33.891289+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:34.891743+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:35.891949+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:36.892588+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:37.892780+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:38.893197+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:39.893922+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:40.894272+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:41.894711+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:42.895052+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:43.895480+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:44.895864+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:45.896095+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:46.896358+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:47.896733+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:48.897108+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:49.897459+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:50.897679+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:51.897949+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:52.898272+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:53.898804+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:54.899189+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:55.899424+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:56.899818+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:57.900192+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:58.900512+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:59.900862+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:00.901215+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:01.901441+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:02.901725+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 5947392 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:03.902141+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:04.902482+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:05.902758+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:06.903155+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:07.903490+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:08.903701+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:09.903960+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:10.904298+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:11.904692+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:12.904961+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:13.905774+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:14.906716+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:15.907111+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:16.907775+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:17.907966+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:18.908376+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:19.908761+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:20.909005+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:21.909229+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:22.909590+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:23.910330+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:24.910831+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:25.911137+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:26.911572+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:27.911948+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:28.912324+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:29.912754+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:30.913014+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:31.913314+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:32.913653+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:33.914065+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:34.914375+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:35.914840+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:36.915268+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:37.915657+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:38.916056+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:39.916627+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 182.688186646s of 182.889434814s, submitted: 45
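[Editor's note] The one-off _kv_sync_thread utilization line is the most informative datapoint in the burst: over a ~183 s window the kv sync thread was essentially idle and submitted only 45 transactions, so RocksDB commit traffic on this OSD is negligible. Worked out from the logged figures:

```python
# Hedged sketch: turn the _kv_sync_thread utilization line into rates.
idle, total, submitted = 182.688186646, 182.889434814, 45
print(f"{idle / total:.2%} idle, {submitted / total:.2f} commits/s")
# -> 99.89% idle, 0.25 commits/s
```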
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b40800 session 0x55f0a8570f00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a8b46f00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7450c00 session 0x55f0a8df7680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115359744 unmapped: 4866048 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:40.918299+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c7400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:41.918769+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a60c7400 session 0x55f0a7db6b40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7cfe000/0x0/0x4ffc00000, data 0x38acbc6/0x3970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:42.919220+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a785b000 session 0x55f0a8173e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b40800
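[Editor's note] The ms_handle_reset and handle_auth_request lines in this stretch mark peer connections dropping and re-authenticating. A handful of one-off resets like these is routine churn; the toy tally below (events copied from the lines above) confirms no connection pointer resets more than once, whereas repeated resets on the same con pointer would suggest network flapping:

```python
# Hedged sketch: tally ms_handle_reset events by connection pointer.
# Event lines copied verbatim from the log above.
import re
from collections import Counter

events = """\
osd.1 127 ms_handle_reset con 0x55f0a4b40800 session 0x55f0a8570f00
osd.1 127 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a8b46f00
osd.1 127 ms_handle_reset con 0x55f0a7450c00 session 0x55f0a8df7680
osd.1 127 ms_handle_reset con 0x55f0a60c7400 session 0x55f0a7db6b40
"""
cons = Counter(re.findall(r"ms_handle_reset con (0x[0-9a-f]+)", events))
print(cons.most_common())
# -> four distinct connections, each reset exactly once
```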
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:43.919709+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:44.920015+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:45.920352+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:46.920758+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:47.921082+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:48.921426+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:49.921889+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:50.922196+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:51.922751+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:52.923002+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:53.923434+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:54.923788+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:55.924126+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:56.924485+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:57.924789+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:58.925214+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:59.925728+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:00.926073+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:01.926434+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:02.926775+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:03.927211+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:04.927477+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:05.927929+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:06.928336+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:07.928654+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:08.929043+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:09.929322+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:10.929788+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:11.930041+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:12.930500+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:13.930947+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:14.931352+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:15.931711+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:16.932146+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:17.932514+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:18.932975+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:19.933333+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:20.933696+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:21.934107+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:22.934381+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:23.935276+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:24.935707+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:25.936109+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:26.936362+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:27.936628+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:28.936992+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:29.941644+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:30.942035+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:31.942322+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:32.942950+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:33.943394+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:34.943664+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:35.943954+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:36.944356+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:37.944771+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:38.945206+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:39.945493+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:40.945738+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 4816896 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:41.946106+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 4816896 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:42.946476+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:43.947051+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:44.947390+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:45.947866+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:46.948278+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:47.948733+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:48.949119+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:49.949454+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:50.949819+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:51.950221+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:52.950699+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:53.951029+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:54.951393+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:55.951818+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:56.952281+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:57.952768+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:58.952937+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:59.953338+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:00.953704+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:01.954081+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:02.954403+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:03.954951+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:04.955285+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:05.955591+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:06.955971+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:07.956291+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:08.956723+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:09.957261+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:10.957646+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:11.958030+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:12.958359+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:13.958823+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:14.959331+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:15.959797+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:16.960169+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:17.960686+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:18.961021+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:19.961321+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:20.961728+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:21.962046+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:22.962488+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:23.962945+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:24.963236+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:25.963686+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:26.964026+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:27.964318+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:28.964949+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:29.965305+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:30.965687+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:31.966029+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:32.966364+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:33.966776+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:34.967028+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:35.967387+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:36.967732+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:37.967959+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:38.968269+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:39.968688+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:40.968906+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 120.761978149s of 121.249809265s, submitted: 73
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a810a780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6f800 session 0x55f0a57e2f00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6d800 session 0x55f0a97ec780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:41.969144+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6a000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 4759552 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:42.969431+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6a000 session 0x55f0a7e365a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:43.969858+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:44.970282+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:45.970774+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:46.971065+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:47.971642+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:48.971919+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:49.972256+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:50.972714+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:51.973109+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:52.973415+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:53.973862+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:54.974095+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:55.974495+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:56.974974+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:57.975359+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:58.975779+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:59.976142+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:00.976697+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:01.977060+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:02.977459+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:03.977867+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:04.978096+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:05.978504+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:06.978912+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:07.979273+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:08.979790+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:09.980181+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:10.980734+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:11.981137+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:12.981760+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:13.982239+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:14.982683+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:15.982889+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:16.983173+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:17.983837+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:18.984202+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:19.984726+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:20.985078+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4ba3c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.468624115s of 39.815692902s, submitted: 54
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 8601600 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:21.985798+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 8601600 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:22.986220+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 25346048 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902f000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:23.986749+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 25346048 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:24.987038+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391308 data_alloc: 218103808 data_used: 16011264
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 25321472 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:25.987373+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4ba3c00 session 0x55f0a7eb7e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:26.988106+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:27.988446+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:28.988856+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:29.989123+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395258 data_alloc: 218103808 data_used: 16019456
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:30.989363+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:31.989736+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:32.990164+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:33.990640+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a6b8fa40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a694b800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a694b800 session 0x55f0a75bc780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a56ac1e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6e400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053081512s of 13.177683830s, submitted: 13
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a7e6e400 session 0x55f0a85aab40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858dc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 17645568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:34.990934+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a858dc00 session 0x55f0a58870e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414858 data_alloc: 234881024 data_used: 22835200
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a756c780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 17645568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:35.991150+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882c000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1,5])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a85ab2c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8590000 session 0x55f0a85445a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a8544780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b39000 session 0x55f0a85abc20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a8b47680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a8544d20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:36.991523+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 17580032 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a6b8e780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8590000 session 0x55f0a5897860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8595800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8595800 session 0x55f0a5538000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a96b81e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a562f4a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a96b90e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:37.991745+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 17530880 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8590000 session 0x55f0a56acd20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:38.992131+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 17530880 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8595000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8595000 session 0x55f0a54dc000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:39.992483+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 17530880 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a5542960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467161 data_alloc: 234881024 data_used: 22835200
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:40.992931+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 17227776 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a7e36f00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82e0000/0x0/0x4ffc00000, data 0x32c5826/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:41.993460+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 17227776 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a5f18400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8595400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:42.993788+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 17211392 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:43.994003+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 17211392 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:44.994372+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119832576 unmapped: 17178624 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480806 data_alloc: 234881024 data_used: 24236032
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:45.994952+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119889920 unmapped: 17121280 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:46.995195+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 16646144 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:47.995393+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 16646144 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:48.995737+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 16646144 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:49.996031+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503206 data_alloc: 234881024 data_used: 27332608
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:50.996268+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:51.996444+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:52.996627+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:53.997445+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:54.997785+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503206 data_alloc: 234881024 data_used: 27332608
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:55.998166+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:56.998513+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:57.998854+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:58.999197+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:59.999597+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503206 data_alloc: 234881024 data_used: 27332608
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:00.999930+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:02.000145+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:03.000521+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 16834560 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a7db50e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.228794098s of 28.534566879s, submitted: 48
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a60c9400 session 0x55f0a6acef00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8595400 session 0x55f0a75bbc20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3d800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:04.000987+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 16809984 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a85701e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:05.001343+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 17809408 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425718 data_alloc: 234881024 data_used: 22835200
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:06.001658+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 17809408 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:07.002006+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8828000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 17809408 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a694b000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:08.002383+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 17801216 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f8828000/0x0/0x4ffc00000, data 0x2d802f1/0x2e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 ms_handle_reset con 0x55f0a694b000 session 0x55f0a75bad20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:09.002761+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:10.003140+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354716 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:11.003413+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:12.003732+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:13.004098+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9028000/0x0/0x4ffc00000, data 0x25802f1/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:14.004426+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:15.004866+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:16.005215+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354716 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:17.005648+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:18.006187+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.647365570s of 15.350210190s, submitted: 100
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8945 writes, 34K keys, 8945 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8945 writes, 2107 syncs, 4.25 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1124 writes, 3389 keys, 1124 commit groups, 1.0 writes per commit group, ingest: 2.51 MB, 0.00 MB/s
                                            Interval WAL: 1124 writes, 495 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9028000/0x0/0x4ffc00000, data 0x25802f1/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:19.006638+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:20.007038+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:21.007448+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357690 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120954880 unmapped: 24453120 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:22.007813+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f8825000/0x0/0x4ffc00000, data 0x2d81d54/0x2e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:23.008213+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:24.008687+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 ms_handle_reset con 0x55f0a56d1400 session 0x55f0a75c34a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f8825000/0x0/0x4ffc00000, data 0x2d81d54/0x2e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:25.009146+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:26.009512+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417328 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:27.009989+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:28.010433+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x2d838f4/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:29.010851+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:30.011264+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:31.011692+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417328 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x2d838f4/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab1800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.317202568s of 13.401175499s, submitted: 16
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:32.012017+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x2d838f4/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a75bba40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:33.012405+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:34.013006+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:35.013413+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:36.013750+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364910 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f901f000/0x0/0x4ffc00000, data 0x25854a2/0x264e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:37.014131+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:38.014490+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:39.014954+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:40.015399+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:41.015853+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:42.016213+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:43.016512+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:44.016994+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:45.017370+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:46.017792+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:47.018211+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:48.018682+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:49.019043+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:50.019522+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:51.020001+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:52.020301+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:53.020624+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:54.021035+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:55.021357+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:56.021737+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:57.021927+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:58.022248+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:59.022781+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:00.023120+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:01.023513+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:02.023776+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:03.024157+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:04.024772+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:05.025095+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:06.025438+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:07.025833+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:08.026213+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:09.026663+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:10.026899+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:11.027259+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:12.027731+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:13.027949+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:14.028401+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:15.028871+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:16.029315+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:17.029767+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:18.030091+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:19.030452+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:20.030852+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:21.031255+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:22.031837+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:23.032112+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:24.032609+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:25.032960+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:26.033348+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:27.033952+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:28.034285+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:29.034767+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:30.034995+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:31.035366+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:32.035773+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:33.036228+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:34.036763+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:35.037163+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:36.037521+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:37.038092+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:38.038442+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:39.038681+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:40.038993+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:41.039195+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:42.039502+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:43.039916+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:44.040389+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:45.040726+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:46.040972+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:47.041312+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:48.041714+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:49.041982+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 77.925277710s of 78.037933350s, submitted: 34
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:50.042391+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:51.042658+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 32784384 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:52.042917+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 32727040 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:53.043262+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:54.043793+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:55.044201+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:56.044641+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:57.044975+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:58.045355+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:59.045746+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:00.046088+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:01.046479+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:02.046823+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:03.047294+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:04.047784+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:05.048176+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:06.048511+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:07.048855+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:08.049239+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:09.049783+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:10.050119+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:11.050458+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:12.050790+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:13.051151+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:14.051617+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:15.052028+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:16.052364+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:17.052695+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:18.052872+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:19.053233+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:20.053650+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:21.054001+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:22.054255+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:23.054652+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:24.055069+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:25.055361+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:26.055585+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:27.055923+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:28.056235+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:29.056752+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:30.057066+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:31.057396+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:32.057954+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:33.058307+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:34.058775+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:35.059070+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:36.059441+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:37.059834+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:38.060202+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:39.060658+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:40.061038+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:41.061321+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:42.061863+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:43.062248+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:44.063465+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:45.063866+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:46.064241+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:47.064744+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:48.065171+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:49.065706+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:50.066083+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:51.066465+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:52.066833+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:53.067229+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:54.067603+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:55.068142+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:56.068490+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:57.068793+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:58.069173+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:59.069648+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:00.070041+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:01.070449+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:02.070746+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:03.071151+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:04.071604+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:05.071983+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:06.072503+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:07.072950+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:08.073333+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:09.073824+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:10.074232+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:11.074717+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:12.075157+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:13.075702+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:14.076173+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:15.076766+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:16.077012+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:17.077843+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:18.078274+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:19.078725+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:20.078943+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:21.079385+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:22.079737+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:23.080185+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:24.080694+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:25.081042+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:26.081243+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:27.081699+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:28.082119+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:29.082417+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75bd0e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.323188782s of 99.935211182s, submitted: 90
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a529c960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3a800 session 0x55f0a57e3860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:30.082854+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112738304 unmapped: 32669696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:31.083190+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9d79000/0x0/0x4ffc00000, data 0x182af05/0x18f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75c1680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:32.083517+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:33.083832+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:34.084242+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:35.084720+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:36.085036+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:37.085356+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:38.085752+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:39.086080+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:40.086380+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:41.086742+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:42.087160+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:43.087772+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:44.088249+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:45.088775+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:46.089135+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:47.089657+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:48.090005+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:49.090372+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.926574707s of 19.366115570s, submitted: 65
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3dc00 session 0x55f0a7f5bc20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a57ab000 session 0x55f0a56af2c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:50.090655+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 38379520 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a553f680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:51.090885+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:52.091223+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:53.091464+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:54.091941+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:55.092215+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:56.092632+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:57.092866+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:58.093246+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:59.093684+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:00.094027+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:01.094487+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:02.094907+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:03.095303+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:04.095765+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:05.096104+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:06.096492+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:07.096861+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:08.097195+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:09.097916+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:10.098265+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:11.099088+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:12.099631+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:13.100025+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:14.100494+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:15.100867+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:16.101316+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:17.102087+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:18.102476+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:19.102840+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:20.103238+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:21.103671+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:22.104045+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:23.104467+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:24.105037+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:25.105684+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:26.106058+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:27.106445+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:28.106894+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:29.107225+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:30.107696+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:31.108058+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:32.108294+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:33.108664+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:34.109060+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:35.109271+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:36.109666+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:37.110042+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:38.110407+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:39.110781+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:40.111145+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:41.111519+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:42.111985+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:43.112668+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:44.113128+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:45.113585+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:46.113960+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:47.114354+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:48.114773+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:49.115218+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:50.115595+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:51.116086+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:52.116508+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:53.116883+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:54.117347+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:55.117725+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:56.118064+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:57.118459+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:58.118784+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:59.119189+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:00.119728+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:01.120030+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:02.120374+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:03.120810+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:04.121342+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:05.121758+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:06.122145+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:07.122661+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:08.123058+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:09.123501+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:10.123997+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:11.124413+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:12.124973+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:13.125411+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:14.125839+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6b400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:15.126166+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 85.550582886s of 85.862503052s, submitted: 51
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:16.126616+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 134 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbbc20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a57aa000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:17.126992+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107044864 unmapped: 38363136 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:18.127431+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142798 data_alloc: 218103808 data_used: 7061504
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a4d42b40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:19.127923+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:20.129492+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:21.129946+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:22.130375+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:23.130750+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:24.131038+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:25.131375+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:26.131790+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:27.132268+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:28.132773+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:29.133160+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:30.133642+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:31.134141+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:32.134485+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:33.134876+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:34.135274+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:35.135797+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:36.136168+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:37.136503+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:38.136815+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:39.137179+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:40.137678+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:41.137958+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:42.138141+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:43.138507+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:44.139125+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:45.139383+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:46.139818+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:47.140035+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:48.140351+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:49.140768+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:50.141196+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:51.142746+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:52.143230+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:53.143504+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:54.143971+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:55.144417+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:56.144742+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:57.145132+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:58.145507+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:59.145972+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:00.146393+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:01.146805+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:02.147212+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:03.147669+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:04.148108+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:05.148285+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:06.148762+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbdc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a7ecc1e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3f800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:07.149103+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a7ecc3c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 31768576 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:08.149376+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167396 data_alloc: 218103808 data_used: 13885440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.427654266s of 53.600463867s, submitted: 15
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 27410432 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a726d860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:09.149751+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3f800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a726d680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a57aa000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a553e000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbdc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a75b83c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 31604736 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6b400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbba40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:10.149983+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a64152c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3f800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a80205a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a57aa000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a756c780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbdc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a54dcf00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6a400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6a400 session 0x55f0a56aed20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:11.150439+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:12.150859+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:13.151302+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303610 data_alloc: 218103808 data_used: 13885440
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:14.151754+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a54dd4a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a8b46780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 31432704 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858e800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a8df6b40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a4d7d680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:15.152189+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8592800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8592800 session 0x55f0a5578d20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a4b37c20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a84fa780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858e800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a7eb74a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 31105024 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a52a65a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:16.152673+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a885b2c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114327552 unmapped: 31080448 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:17.152935+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114343936 unmapped: 31064064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:18.153315+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400757 data_alloc: 218103808 data_used: 13889536
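Every few seconds the BlueStore MempoolThread makes a balancing pass: the two rocksdb commit_cache_size lines record it re-setting the block cache's high-priority pool ratio (0.285714 ≈ 2/7 and 0.0555556 ≈ 1/18 look like small-integer priority fractions, though the formula is not visible in the log), and _resize_shards then re-divides the cache budget across the kv, kv_onode, meta, and data shards. The *_used fields show how little of each allocation this idle OSD actually touches. Summing the four *_alloc values accounts for nearly the whole budget, which is easy to verify:

```python
alloc = {             # *_alloc fields from the _resize_shards line
    "kv":       1207959552,
    "kv_onode":  234881024,
    "meta":     1140850688,
    "data":      218103808,
}
budget = 2845415832   # cache_size from the same line

total = sum(alloc.values())
print(f"allocated {total / budget:.1%} of the budget")  # ~98.5%
for name, b in alloc.items():
    print(f"  {name:8s} {b / budget:5.1%}")             # kv ~42.5%, meta ~40.1%, ...
```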
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 31055872 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:19.153674+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290942192s of 10.699803352s, submitted: 40
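The _kv_sync_thread line is BlueStore's periodic self-report on the thread that commits transactions to RocksDB: over the last 10.70 s window it was idle 10.29 s and submitted 40 transactions, i.e. roughly 96% idle at under 4 commits per second, consistent with a nearly unloaded OSD. The same arithmetic in two lines:

```python
idle, window, submitted = 10.290942192, 10.699803352, 40
print(f"{idle / window:.1%} idle, {submitted / window:.1f} commits/s")
# -> 96.2% idle, 3.7 commits/s
```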
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e71400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a58825a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e71400 session 0x55f0a810bc20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:20.153968+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858e800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:21.154263+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
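Each heartbeat line embeds a store_statfs dump. Reading the first triple as available / internally-reserved / total bytes (the order the values print in, and consistent with the magnitudes here), this OSD sits on a ~20 GiB device: 0x4ffc00000 ≈ 20.0 GiB total, 0x4f7a68000 ≈ 19.9 GiB still free; `data 0x3b3956e/0x3c06000` is stored versus allocated object data, and `peers [0,2]` confirms osd.1 exchanges heartbeats with osd.0 and osd.2. A minimal sketch that turns the hex triple into a utilization figure, field order assumed as above:

```python
import re

STATFS = re.compile(r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+)")

def utilization(line):
    """Return (used_bytes, total_bytes, percent_used) for a heartbeat line."""
    avail, reserved, total = (int(v, 16) for v in STATFS.search(line).groups())
    used = total - avail - reserved
    return used, total, 100.0 * used / total

line = "osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000"
print(utilization(line))  # (135888896, 21470642176, ~0.63) -> well under 1% used
```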
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:22.154689+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:23.154979+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3c000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a81732c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467170 data_alloc: 218103808 data_used: 13893632
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 31907840 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6cc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b41800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:24.155226+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113508352 unmapped: 31899648 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:25.155656+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 31236096 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:26.155912+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:27.156378+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:28.156910+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516450 data_alloc: 234881024 data_used: 20791296
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:29.157443+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7450400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7450400 session 0x55f0a7eb72c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 28491776 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7ec3c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.427840233s of 10.600404739s, submitted: 22
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:30.157876+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 23896064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:31.158402+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 125722624 unmapped: 19685376 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:32.158747+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 14385152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:33.158945+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680281 data_alloc: 251658240 data_used: 40378368
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:34.159397+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:35.159747+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:36.160072+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:37.160303+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a96b8f00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a7eced20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3c000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 12574720 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:38.160650+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a6afd680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1544478 data_alloc: 234881024 data_used: 33796096
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 15441920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:39.160896+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133218304 unmapped: 12189696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:40.161316+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83f5000/0x0/0x4ffc00000, data 0x31ad55e/0x3279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:41.161510+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:42.161882+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:43.162235+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a57e6780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.212381363s of 13.371566772s, submitted: 31
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7ec3c00 session 0x55f0a8571e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8594c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596346 data_alloc: 251658240 data_used: 41136128
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 14958592 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:44.162580+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594c00 session 0x55f0a57e23c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:45.185992+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:46.186664+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:47.187075+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:48.187500+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:49.187895+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:50.188110+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:51.188308+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:52.188674+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:53.189037+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:54.189447+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:55.189792+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:56.190105+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:57.190279+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.805484772s of 13.987822533s, submitted: 30
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:58.190469+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533936 data_alloc: 234881024 data_used: 33460224
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85de000/0x0/0x4ffc00000, data 0x2fc455e/0x3090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:59.190637+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:00.190943+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:01.191191+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:02.192180+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:03.192603+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585670 data_alloc: 234881024 data_used: 34004992
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:04.193013+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134864896 unmapped: 10543104 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8073000/0x0/0x4ffc00000, data 0x352955e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858d800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858d800 session 0x55f0a60fc780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:05.193284+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135520256 unmapped: 20389888 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:06.193701+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 18022400 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f780e000/0x0/0x4ffc00000, data 0x3d8555e/0x3e51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:07.194125+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:08.194501+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667382 data_alloc: 251658240 data_used: 34631680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:09.194871+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a4b37e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:10.195235+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8597000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8597000 session 0x55f0a810bc20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.885555267s of 13.601085663s, submitted: 132
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:11.195680+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 18956288 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f781b000/0x0/0x4ffc00000, data 0x3d8755e/0x3e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8594000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594000 session 0x55f0a810b0e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:12.196102+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a810a000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858d800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:13.196499+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665953 data_alloc: 251658240 data_used: 34635776
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:14.197099+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:15.197626+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f6000/0x0/0x4ffc00000, data 0x3dab56e/0x3e78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:16.197966+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:17.198334+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:18.198856+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665621 data_alloc: 251658240 data_used: 34635776
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:19.199254+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:20.199743+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:21.200132+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:22.200468+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 18833408 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:23.200756+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 17981440 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695221 data_alloc: 251658240 data_used: 38678528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:24.200992+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 16302080 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:25.201208+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:26.201475+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:27.201677+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:28.201894+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:29.202170+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:30.202486+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:31.202781+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:32.203098+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:33.203353+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:34.203713+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:35.204058+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:36.204446+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:37.204856+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:38.205083+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717461 data_alloc: 251658240 data_used: 41824256
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:39.205481+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:40.205738+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:41.206161+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.360565186s of 30.452342987s, submitted: 11
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:42.206413+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:43.206808+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:44.207218+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717637 data_alloc: 251658240 data_used: 41824256
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:45.207730+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 15360000 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:46.207961+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a8df70e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a81721e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 15335424 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:47.208172+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591400 session 0x55f0a75ba780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6cc00 session 0x55f0a81734a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b41800 session 0x55f0a6414780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:48.211243+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:49.211737+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1549753 data_alloc: 234881024 data_used: 34029568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:50.212072+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:51.212606+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:52.212997+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 19603456 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:53.213498+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136134656 unmapped: 19775488 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:54.214354+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550873 data_alloc: 234881024 data_used: 34156544
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:55.214690+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:56.218426+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.911537170s of 15.160791397s, submitted: 53
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 20701184 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:57.218643+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 18833408 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:58.219047+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:59.219486+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1623403 data_alloc: 234881024 data_used: 34353152
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:00.219796+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e77000/0x0/0x4ffc00000, data 0x372956e/0x37f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:01.220165+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:02.220629+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:03.220958+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e77000/0x0/0x4ffc00000, data 0x372956e/0x37f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:04.221301+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621595 data_alloc: 234881024 data_used: 34357248
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:05.221655+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:06.222026+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e55000/0x0/0x4ffc00000, data 0x374c56e/0x3819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:07.222279+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:08.222609+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:09.223082+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621595 data_alloc: 234881024 data_used: 34357248
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:10.223275+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.919089317s of 14.367115021s, submitted: 74
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:11.223435+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:12.223666+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:13.223842+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:14.224089+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621715 data_alloc: 234881024 data_used: 34357248
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:15.224318+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:16.224871+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:17.225299+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:18.225633+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:19.226050+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621715 data_alloc: 234881024 data_used: 34357248
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:20.226702+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:21.227038+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:22.227363+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.317792892s of 11.335372925s, submitted: 3
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a529b2c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a56acd20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a7eb61e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 20971520 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a632b0e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858d000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858d000 session 0x55f0a7dee960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:23.227768+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20987904 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:24.228001+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b7000/0x0/0x4ffc00000, data 0x3ce95d0/0x3db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1668235 data_alloc: 234881024 data_used: 34357248
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b7c20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a75b74a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a75b6960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b7000/0x0/0x4ffc00000, data 0x3ce95d0/0x3db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20987904 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a75b7860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab1800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:25.228377+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a553fa40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab1800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a7ecd4a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7ecc1e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a7ecc3c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a52a7e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b6000/0x0/0x4ffc00000, data 0x3ce95e0/0x3db8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:26.228823+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135340032 unmapped: 28442624 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:27.229436+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab0800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a57e32c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab0800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a810ba40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 28196864 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:28.229944+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651f000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651f000 session 0x55f0a75bb680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 28164096 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a810a3c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:29.230291+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1772936 data_alloc: 251658240 data_used: 35344384
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135921664 unmapped: 27860992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:30.230659+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:31.231008+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x485b665/0x492c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38400 session 0x55f0a58901e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:32.231422+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a60110e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:33.231641+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38400 session 0x55f0a4d42780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765680313s of 11.383768082s, submitted: 98
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7eccd20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 27533312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:34.231981+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777362 data_alloc: 251658240 data_used: 35352576
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651f000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab0800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:35.232197+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:36.232701+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:37.233080+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:38.233443+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:39.233799+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777682 data_alloc: 251658240 data_used: 35360768
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 26886144 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:40.234179+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137576448 unmapped: 26206208 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:41.234626+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:42.234849+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:43.235157+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:44.235602+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1818642 data_alloc: 251658240 data_used: 41193472
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:45.235898+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:46.236166+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:47.236626+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 23789568 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:48.236868+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 18759680 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:49.237083+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651f000 session 0x55f0a6acf4a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a75c21e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.797070503s of 15.820782661s, submitted: 2
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1866786 data_alloc: 251658240 data_used: 48082944
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593c00 session 0x55f0a7ece5a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:50.237340+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:51.237832+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f748f000/0x0/0x4ffc00000, data 0x3d0e5f3/0x3ddd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:52.238177+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:53.238638+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:54.239142+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1735401 data_alloc: 251658240 data_used: 41189376
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:55.239690+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:56.240189+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a7eb7a40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a6b8fa40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e73000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a6b8f0e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:57.240515+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8aa1000/0x0/0x4ffc00000, data 0x2aff5e3/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:58.241037+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:59.241345+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:00.241577+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:01.241979+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:02.242428+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:03.242844+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:04.243325+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:05.243787+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:06.244329+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:07.244767+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:08.245179+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:09.245406+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:10.245813+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:11.246082+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:12.246396+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.834514618s of 23.068130493s, submitted: 47
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:13.246987+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134021120 unmapped: 29761536 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:14.247304+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 29679616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8139000/0x0/0x4ffc00000, data 0x34675e3/0x3535000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1571961 data_alloc: 234881024 data_used: 27537408
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:15.247576+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 29130752 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:16.247797+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:17.248152+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:18.248612+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:19.248876+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1578329 data_alloc: 234881024 data_used: 27856896
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f812f000/0x0/0x4ffc00000, data 0x34715e3/0x353f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:20.249103+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a80210e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a885ad20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f812f000/0x0/0x4ffc00000, data 0x34715e3/0x353f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:21.249394+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 34136064 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7c74800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a529d4a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:22.249629+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c71000/0x0/0x4ffc00000, data 0x251e56e/0x25eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129687552 unmapped: 34095104 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a5f18400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.617131233s of 10.342635155s, submitted: 143
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:23.249837+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a75bb680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a7c74800 session 0x55f0a56ad860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 34111488 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858fc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a858fc00 session 0x55f0a553fa40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b7c20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:24.250448+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 34603008 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b74a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a7ece780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a5f18400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a6b8e960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7c74800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a7c74800 session 0x55f0a75c3860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858fc00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484659 data_alloc: 234881024 data_used: 21315584
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a858fc00 session 0x55f0a562e5a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2bf40fb/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:25.250800+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 34578432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:26.251133+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2bf40fb/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 34643968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a810a5a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a858d800 session 0x55f0a756c780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a529c000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:27.251402+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a75b8b40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:28.251825+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:29.252029+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f9584000/0x0/0x4ffc00000, data 0x1c0acac/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307635 data_alloc: 218103808 data_used: 13893632
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8595c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a8595c00 session 0x55f0a7eb7e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:30.252393+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7cbb860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f9584000/0x0/0x4ffc00000, data 0x1c0acac/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:31.252708+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858ec00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a858ec00 session 0x55f0a7eb61e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a4b37e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:32.253125+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c7000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:33.253370+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6955400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6955400 session 0x55f0a80d1e00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a785b000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a785b000 session 0x55f0a6b8e780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a7eb74a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:34.253631+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314319 data_alloc: 218103808 data_used: 14106624
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:35.254659+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 40902656 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.097565651s of 12.628091812s, submitted: 91
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b92c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6955400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6955400 session 0x55f0a5897860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f955a000/0x0/0x4ffc00000, data 0x1c34cbc/0x1d04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:36.255068+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123027456 unmapped: 40755200 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6f000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a7e6f000 session 0x55f0a726c960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:37.255353+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbf000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 39845888 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6bbf000 session 0x55f0a8b463c0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:38.255670+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:39.256099+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a6aced20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436171 data_alloc: 234881024 data_used: 20680704
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:40.256479+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a6afcb40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:41.256808+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858c000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a858c000 session 0x55f0a75ba960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:42.257366+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a6acf680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7ec3400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:43.257656+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:44.258048+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437451 data_alloc: 234881024 data_used: 20795392
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:45.258298+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 39149568 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:46.258607+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 38887424 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:47.258844+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:48.259090+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:49.259365+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:50.259718+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:51.260014+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:52.260226+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:53.260579+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:54.260963+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:55.261240+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:56.261827+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:57.262135+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:58.262629+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:59.264316+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:00.264658+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 35725312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:01.265026+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 35725312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:02.265321+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:03.265617+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:04.265898+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:05.266237+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:06.266765+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:07.267500+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:08.267893+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.258148193s of 33.476356506s, submitted: 37
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:09.268377+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 138297344 unmapped: 25485312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608251 data_alloc: 234881024 data_used: 29712384
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:10.268765+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7e73000/0x0/0x4ffc00000, data 0x330c71f/0x33dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 138543104 unmapped: 25239552 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:11.268995+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 26656768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:12.269303+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:13.269790+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:14.270577+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621193 data_alloc: 234881024 data_used: 29835264
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:15.270935+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:16.271620+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7dc7000/0x0/0x4ffc00000, data 0x33be71f/0x348f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:17.272145+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:18.272673+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2973 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2088 writes, 7681 keys, 2088 commit groups, 1.0 writes per commit group, ingest: 7.32 MB, 0.01 MB/s
                                            Interval WAL: 2088 writes, 866 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 26607616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.264138222s of 10.018401146s, submitted: 173
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:19.273075+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 25812992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7bc4000/0x0/0x4ffc00000, data 0x35c371f/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:20.273642+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633635 data_alloc: 234881024 data_used: 30130176
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 25812992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:21.274034+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:22.274506+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:23.274925+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b3a000/0x0/0x4ffc00000, data 0x364a71f/0x371b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:24.275178+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:25.275433+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1641113 data_alloc: 234881024 data_used: 29941760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7c73c00 session 0x55f0a80d0780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: mgrc ms_handle_reset ms_handle_reset con 0x55f0a651e800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec 03 02:34:06 compute-0 ceph-osd[207705]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: get_auth_request con 0x55f0a858fc00 auth_method 0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: mgrc handle_mgr_configure stats_period=5
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137158656 unmapped: 26624000 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:26.275692+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4d41c00 session 0x55f0a56ac000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8f73400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137158656 unmapped: 26624000 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a6ab1400 session 0x55f0a57e7860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8f70400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:27.276054+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:28.276273+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:29.276664+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b33000/0x0/0x4ffc00000, data 0x365a71f/0x372b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.565891266s of 10.747445107s, submitted: 33
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:30.276885+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636453 data_alloc: 234881024 data_used: 29941760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:31.277103+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:32.277348+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x365d71f/0x372e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:33.277572+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:34.277952+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:35.278256+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636453 data_alloc: 234881024 data_used: 29941760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:36.278580+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x365d71f/0x372e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:37.278763+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:38.278968+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:39.279868+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.241274834s of 10.265392303s, submitted: 5
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:40.280722+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636785 data_alloc: 234881024 data_used: 29941760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b2d000/0x0/0x4ffc00000, data 0x366071f/0x3731000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:41.282356+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:42.282933+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b2d000/0x0/0x4ffc00000, data 0x366071f/0x3731000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:43.283440+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:44.284180+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:45.284514+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638097 data_alloc: 234881024 data_used: 29954048
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 26394624 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:46.284899+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a7eb6d20
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8593400 session 0x55f0a6afda40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 26427392 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a9f58800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:47.287216+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a9f58800 session 0x55f0a6afd860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:48.287471+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:49.287790+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:50.288114+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409806 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:51.288368+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852160454s of 12.121880531s, submitted: 48
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:52.288793+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:53.289192+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:54.289832+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:55.290283+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:56.290805+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:57.291130+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:58.291631+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:59.291992+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:00.292367+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:01.292823+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:02.293228+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:03.293641+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:04.294111+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:05.294462+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:06.294639+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:07.294794+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:08.294959+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:09.295349+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:10.295755+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:11.295913+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:12.296277+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:13.296652+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:14.297047+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:15.297394+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:16.297798+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:17.298295+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:18.298799+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:19.299142+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:20.299617+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 29884416 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:21.299995+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 29884416 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:22.300449+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.712953568s of 30.721988678s, submitted: 1
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:23.300768+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:24.301111+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:25.301392+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:26.301735+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:27.302102+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:28.302296+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:29.302692+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:30.303030+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:31.303288+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:32.303692+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:33.303919+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:34.304287+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:35.304506+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:36.304890+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:37.305239+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:38.305670+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:39.305980+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:40.306345+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:41.306671+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:42.307076+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b40800 session 0x55f0a8544b40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e71000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:43.307396+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:44.307698+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:45.307954+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:46.308403+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:47.309422+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 29818880 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:48.309677+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 29818880 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.298162460s of 26.320926666s, submitted: 8
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:49.310038+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 29794304 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:50.310645+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 29777920 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:51.311236+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 29753344 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:52.311586+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 29679616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:53.311830+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 29622272 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:54.312243+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:55.312734+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:56.313097+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:57.313304+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:58.313503+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:59.313669+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:00.313864+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:01.314078+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:02.314632+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:03.315166+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:04.315596+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:05.316185+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:06.316629+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:07.317045+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:08.318048+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:09.318385+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:10.318832+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:11.319211+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:12.319732+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:13.320147+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:14.320591+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:15.321010+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:16.321427+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:17.321739+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:18.322171+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:19.322617+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:20.322923+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:21.323199+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:22.323704+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:23.324086+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:24.324612+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:25.324952+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:26.325339+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:27.325825+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:28.326187+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:29.326431+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:30.326820+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:31.327156+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:32.327616+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:33.328253+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:34.329725+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:35.330163+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:36.330521+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:37.331273+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:38.331649+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:39.331985+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:40.332303+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:41.332710+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:42.333087+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:43.333867+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8596000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8596000 session 0x55f0a7ecd680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a80ed400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a4b37680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c7000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a6415680
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a80ed400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135249920 unmapped: 28532736 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a529c5a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.039604187s of 54.708065033s, submitted: 110
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8593400 session 0x55f0a96b8b40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:44.334223+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8596000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8596000 session 0x55f0a7def0e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a9f58800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a9f58800 session 0x55f0a5529860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c7000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a810b4a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a80ed400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a8172780
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:45.334446+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502243 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:46.334753+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:47.335077+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:48.335372+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 31744000 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:49.335752+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8740000/0x0/0x4ffc00000, data 0x2a4c781/0x2b1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 31744000 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:50.336335+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 31735808 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:51.337326+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502243 data_alloc: 234881024 data_used: 21749760
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 31735808 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:52.338026+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a5768c00
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a5768c00 session 0x55f0a4b37860
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:53.338327+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a694a000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e72800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:54.338629+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:55.338831+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 32743424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:56.339115+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1527149 data_alloc: 234881024 data_used: 25067520
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 32743424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:57.339362+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134979584 unmapped: 32481280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:58.342191+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:59.342380+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:00.342794+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:01.343038+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:02.343226+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:03.343495+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:04.344012+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:05.344387+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:06.344762+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:07.345077+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:08.345479+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:09.345804+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:10.347223+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:11.347494+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:12.348190+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:13.348638+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:14.348899+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:15.349269+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:16.349703+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:17.350054+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:18.350402+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:19.350734+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:20.350980+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:21.351216+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:22.351421+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:23.351666+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:24.352074+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:25.352258+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:26.352601+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:27.352918+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:28.353234+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.706817627s of 44.924095154s, submitted: 44
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142917632 unmapped: 24543232 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:29.354948+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143196160 unmapped: 24264704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:30.355387+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 141459456 unmapped: 26001408 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:31.355831+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678761 data_alloc: 234881024 data_used: 31834112
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7d000/0x0/0x4ffc00000, data 0x3606781/0x36d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:32.356214+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:33.356631+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:34.356985+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7d000/0x0/0x4ffc00000, data 0x3606781/0x36d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:35.357413+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:36.357746+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678761 data_alloc: 234881024 data_used: 31834112
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:37.358113+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:38.358521+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:39.359913+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:40.360313+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:41.360755+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673013 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:42.361165+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:43.361446+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:44.361881+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:45.362232+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:46.362640+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673013 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:47.363024+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:48.363413+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.688020706s of 20.183700562s, submitted: 135
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:49.363823+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:50.364217+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:51.364737+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:52.364949+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:53.365300+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:54.365567+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:55.365902+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:56.366267+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:57.366590+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:58.366972+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:59.367341+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:00.368231+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:01.369706+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:02.371283+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:03.372883+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:04.373695+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:05.373898+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:06.374101+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:07.374453+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:08.374814+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:09.375056+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:10.375279+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:11.375691+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:12.375932+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:13.376720+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:14.379966+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:15.381120+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:16.382517+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:17.383917+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:18.385061+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:19.386725+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:20.387856+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:21.389225+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:22.390865+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:23.392140+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:24.393119+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:25.394115+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:26.395516+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:27.397339+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:28.398657+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:29.400230+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:30.401905+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:31.403573+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143024128 unmapped: 24436736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:32.405231+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.763500214s of 43.778255463s, submitted: 2
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:33.406518+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:34.408437+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:35.410195+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:36.411817+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:37.413452+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:38.415195+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:39.416521+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:40.418158+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:41.419755+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:42.420964+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:43.422411+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:44.424139+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:45.425823+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:46.427521+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:47.429365+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:48.431089+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:49.432729+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:50.434469+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:51.436212+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:52.437781+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:53.439472+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:54.441314+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:55.442946+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:56.444111+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:57.445687+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:58.446978+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:59.448423+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:00.449449+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:01.450215+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:02.450740+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:03.451095+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:04.451788+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:05.452309+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:06.452754+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:07.453207+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:08.454487+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:09.456284+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:10.458018+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:11.459118+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:12.460092+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:13.461753+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:14.463319+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:15.465226+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:16.466927+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:17.467946+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:18.469066+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:19.470384+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:20.471294+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:21.472996+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:22.474797+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:23.475447+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:24.475783+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:25.476157+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:26.476685+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.550392151s of 54.573677063s, submitted: 5
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:27.477455+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:28.478694+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:29.479059+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:30.479398+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:31.479784+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673149 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:32.480024+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:33.480336+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:34.480787+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:35.481067+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:36.481471+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673149 data_alloc: 234881024 data_used: 31838208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:37.481821+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:38.482085+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:39.482486+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:40.482877+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:41.483294+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:42.483745+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673309 data_alloc: 234881024 data_used: 31842304
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.851950645s of 15.870507240s, submitted: 2
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:43.484069+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143343616 unmapped: 24117248 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:44.484455+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:45.484885+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:46.485263+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:47.485644+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:48.486024+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:49.486408+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:50.486828+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:51.487130+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:52.487759+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:53.487999+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:54.488216+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:55.488656+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:56.488858+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:57.489142+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.315594673s of 14.335906982s, submitted: 2
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:58.489378+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:59.489613+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:00.489945+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:01.490374+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:02.490737+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:03.491041+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:04.491452+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:05.491797+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:06.492153+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:07.492470+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
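Each `_resize_shards` record breaks the same ~2.65 GiB budget down by BlueStore cache shard. Summing per-shard allocations against `cache_size`, and usage against allocation, shows this OSD is essentially idle; a short sketch using figures copied from the line above:

```python
# Minimal sketch (assumption: numbers copied from the _resize_shards line above).
# Compares per-shard allocation with actual usage to gauge cache occupancy.
cache_size = 2845415832
shards = {
    "kv":       (1191182336, 2144),       # (allocated bytes, used bytes)
    "kv_onode": (234881024, 464),
    "meta":     (1124073472, 1676529),
    "data":     (234881024, 31830016),
}
for name, (alloc, used) in shards.items():
    print(f"{name:8s} {used / alloc:9.4%} of {alloc / 2**20:7.1f} MiB")
# The per-shard allocations sum to roughly 98% of the cache budget.
print(f"allocated {sum(a for a, _ in shards.values()) / cache_size:.1%} of cache budget")
```

Only the data shard carries noticeable load (~14% used); kv and onode usage is down in the bytes-to-kilobytes range.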
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:08.492882+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:09.493324+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:10.493769+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:11.494161+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:12.494690+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:13.495104+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:14.495500+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:15.495840+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:16.496110+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
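The heartbeat lines embed a `store_statfs` snapshot in hex. Assuming the upstream Ceph field order (available/internally-reserved/total for the device, stored/allocated for user data), the counters decode to a ~20 GiB, ~99% free OSD; a hypothetical decoder:

```python
# Minimal sketch (assumption: store_statfs prints available/reserved/total and
# data stored/allocated, as in upstream Ceph's osd_types formatting).
def pct(part: int, whole: int) -> float:
    return 100.0 * part / whole

avail, reserved, total = 0x4F7B4C000, 0x0, 0x4FFC00000
stored, allocated = 0x3640781, 0x3712000

# Here: ~20.0 GiB total, ~99.4% available.
print(f"total {total / 2**30:.1f} GiB, free {pct(avail, total):.1f}%")
# ~54 MiB of user data sitting in ~55 MiB of allocations.
print(f"user data {stored / 2**20:.1f} MiB in {allocated / 2**20:.1f} MiB allocated")
```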
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:17.496421+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:18.496758+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:19.497045+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:20.497379+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:21.497692+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:22.498005+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:23.498333+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:24.498770+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:25.499046+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:26.499411+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:27.499771+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:28.500045+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:29.500614+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:30.500810+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:31.501033+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:32.501328+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:33.501751+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:34.502120+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:35.502480+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:36.502853+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:37.503056+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:38.503817+0000)
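The `monclient` triplets (tick, `_check_auth_tickets`, `_check_auth_rotating`) run once per second and confirm the rotating service keys are still valid; note that the embedded expiry stamps advance one second per cycle while every journal timestamp reads 02:34:06, consistent with buffered debug output flushed in one burst. A sketch that extracts the expiry and computes remaining validity (the reference instant `now` below is assumed for illustration):

```python
import re
from datetime import datetime, timezone

# Minimal sketch (assumption: the message text matches the monclient lines above).
# Extracts the rotating-key expiry stamp so key freshness can be monitored.
EXP_RE = re.compile(r"expire after (\S+)\)")

def seconds_until_expiry(line: str, now: datetime) -> float | None:
    m = EXP_RE.search(line)
    if m is None:
        return None
    exp = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S.%f%z")
    return (exp - now).total_seconds()

line = ("monclient: _check_auth_rotating have uptodate secrets "
        "(they expire after 2025-12-03T02:25:38.503817+0000)")
now = datetime(2025, 12, 3, 2, 24, 38, tzinfo=timezone.utc)  # assumed instant
print(seconds_until_expiry(line, now))  # ~60.5 s of validity left at that point
```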
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:39.504417+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:40.504860+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:41.505212+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:42.508796+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:43.509253+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:44.509706+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:45.510171+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:46.510606+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:47.511008+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:48.511391+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:49.511785+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:50.512130+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:51.512515+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:52.513030+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:53.513259+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:54.513808+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:55.514067+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:56.514260+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:57.514795+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:58.515062+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:59.515323+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:00.515608+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:01.515996+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:02.516488+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:03.517005+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:04.517593+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:05.518158+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:06.519395+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:07.520441+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:08.521628+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:09.521987+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:10.522837+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:11.523478+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:12.524068+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:13.524422+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:14.524843+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:15.525778+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:16.526322+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:17.527021+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:18.527629+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:19.528019+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:20.528873+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:21.529459+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:22.530009+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:23.530384+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:24.530880+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:25.531330+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:26.531739+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:27.532075+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:28.532429+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:29.532825+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:30.533145+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:31.533517+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:32.534021+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:33.534467+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:34.534996+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:35.535665+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:36.536032+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:37.536379+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:38.536609+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:39.536963+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:40.537243+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:41.537482+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:42.537744+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:43.538019+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:44.538371+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:45.538708+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:46.538893+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:47.539093+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:48.539435+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:49.539929+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:50.550041+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:51.551619+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:52.553424+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:53.555707+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:54.557676+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:55.558954+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:56.560596+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:57.562610+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:58.563264+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:59.563641+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:00.564045+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:01.564452+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:02.564788+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:03.565135+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:04.565501+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:05.565749+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:06.566132+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:07.566430+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:08.566719+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:09.567048+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:10.567865+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:11.568454+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:12.569950+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:13.570641+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:14.571355+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:15.571965+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:16.573304+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:17.573984+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:18.574373+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:19.574869+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:20.575678+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:21.576140+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:22.576739+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:23.577275+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:24.577921+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:25.578341+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:26.578715+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:27.579208+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:28.579792+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:29.580201+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:30.580608+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:31.580948+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143499264 unmapped: 23961600 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:32.581329+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143499264 unmapped: 23961600 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:33.581723+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678609 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143499264 unmapped: 23961600 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:34.581970+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143507456 unmapped: 23953408 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:35.582394+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:36.582987+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143507456 unmapped: 23953408 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 159.426895142s of 159.453201294s, submitted: 15
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:37.583437+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:38.583918+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676813 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:39.584295+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:40.584501+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b35000/0x0/0x4ffc00000, data 0x3657781/0x3729000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:41.584935+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:42.585308+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:43.585642+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676813 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:44.586238+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b35000/0x0/0x4ffc00000, data 0x3657781/0x3729000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:45.586643+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:46.587179+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:47.587670+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:48.588054+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677293 data_alloc: 234881024 data_used: 32034816
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:49.588354+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:50.588781+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b35000/0x0/0x4ffc00000, data 0x3657781/0x3729000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:51.589123+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:52.589480+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:53.589886+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677293 data_alloc: 234881024 data_used: 32034816
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.874889374s of 16.894330978s, submitted: 2
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:54.590223+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:55.590710+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:56.591048+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:57.591311+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:58.591725+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677713 data_alloc: 234881024 data_used: 32034816
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:59.591925+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:00.592251+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:01.592625+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:02.593032+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:03.593409+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677713 data_alloc: 234881024 data_used: 32034816
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:04.593981+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:05.594307+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:06.594742+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.290188789s of 13.311237335s, submitted: 2
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:07.595116+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:08.595400+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:09.595766+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:10.596260+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:11.596956+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:12.597355+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:13.598048+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:14.598447+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:15.598852+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:16.599348+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:17.599742+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:18.600110+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 03 02:34:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762531585' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:19.600485+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:20.600886+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:21.601365+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:22.601911+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:23.602310+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:24.602769+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:25.603087+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:26.603488+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:27.603779+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:28.604126+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:29.604515+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:30.605115+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:31.605359+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:32.605721+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:33.606030+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:34.606387+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:35.606719+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:36.607081+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:37.607455+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:38.607937+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:39.608339+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:40.608817+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:41.609196+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:42.609749+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:43.610168+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:44.610709+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:45.611414+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:46.611786+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:47.612162+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:48.612628+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:49.612878+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:50.613304+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:51.613748+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:52.614186+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:53.614636+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:54.615141+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:55.615483+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:56.615837+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:57.616217+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:58.616642+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:59.616886+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:00.617262+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:01.617690+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:02.618065+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:03.618356+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:04.618805+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:05.619104+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:06.619352+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:07.619725+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:08.620104+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:09.620492+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:10.620866+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:11.621230+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:12.621467+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:13.621805+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:14.622338+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:15.622816+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:16.623222+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:17.623671+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:18.623988+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3272 syncs, 3.60 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 751 writes, 2590 keys, 751 commit groups, 1.0 writes per commit group, ingest: 3.26 MB, 0.01 MB/s
                                            Interval WAL: 751 writes, 299 syncs, 2.51 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:19.624519+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:20.624909+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:21.625338+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:22.625744+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:23.626143+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:24.626700+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:25.627187+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:26.627757+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:27.628346+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:28.628747+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:29.629334+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:30.629923+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:31.630336+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:32.630813+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:33.631115+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:34.631489+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:35.631876+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:36.632201+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:37.632829+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:38.633032+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:39.633639+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:40.634055+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:41.634391+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:42.634880+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:43.635342+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:44.635867+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:45.636204+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:46.636755+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:47.637154+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:48.637669+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:49.638155+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:50.638617+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:51.638965+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:52.639447+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:53.639845+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 23781376 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:54.640269+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 23781376 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:55.640669+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:56.640997+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:57.641338+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:58.641812+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:59.642218+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:00.642688+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:01.643049+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:02.643361+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:03.644014+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:04.644429+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:05.644808+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:06.645386+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:07.645799+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:08.646160+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:09.646645+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:10.646997+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:11.647429+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:12.647840+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:13.648273+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:14.648718+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:15.648955+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:16.649221+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:17.666835+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:18.667086+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:19.667379+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:20.667723+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:21.668013+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:22.668432+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:23.668758+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:24.669126+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:25.669472+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:26.669761+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:27.670118+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:28.670487+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:29.670832+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:30.671194+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:31.671695+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:32.672153+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:33.672652+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:34.672937+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:35.673230+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:36.673950+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:37.674310+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:38.674920+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:39.675177+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:40.675359+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:41.675833+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:42.676226+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:43.676715+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:44.677037+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:45.677417+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:46.677742+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:47.678156+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:48.678669+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:49.678989+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 163.313308716s of 163.336517334s, submitted: 15
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:50.679355+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143720448 unmapped: 23740416 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:51.679817+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143720448 unmapped: 23740416 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:52.680184+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 23699456 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:53.680735+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:54.681131+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:55.681647+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:56.981496+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:57.981934+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:58.982394+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:59.982779+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:00.983187+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:01.983709+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:02.984153+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:03.984495+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:04.984968+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:05.985388+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:06.985782+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:07.986190+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:08.986679+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:09.987071+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:10.987470+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:11.987886+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:12.988256+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:13.988781+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:14.989202+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:15.989690+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:16.990085+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:17.990447+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:18.990931+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:19.991312+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:20.991805+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:21.992153+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:22.992673+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:23.993051+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:24.993312+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:25.993705+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:26.994078+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:27.994613+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:28.994905+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:29.995208+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:30.995610+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:31.995968+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:32.996165+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:33.996508+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:34.997073+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:35.997666+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:36.998067+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:37.998492+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:38.999020+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:39.999490+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7ec3400 session 0x55f0a7f5ab40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:41.000008+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a87bb000
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.244667053s of 50.852611542s, submitted: 90
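
The kv-sync line is BlueStore's RocksDB commit flusher reporting a utilization window: idle 50.24 s of a 50.85 s window with 90 transaction batches submitted. The arithmetic, made explicit:

idle, window, submitted = 50.244667053, 50.852611542, 90
print(f"kv_sync busy {1 - idle / window:.1%}; {submitted / window:.2f} submits/s")

That is about 1.2% busy and under two submits per second; the later windows in this section (idle 10.3 s of 10.4 s, idle 10.7 s of 11.4 s) read the same way, so the write path on this OSD is essentially idle.
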
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:42.000406+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a87bb000 session 0x55f0a562ef00
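
The handle_auth_request / ms_handle_reset pairs around here show short-lived inbound sessions: a peer connects, the OSD attaches a challenge to the connection (the "added challenge on 0x..." line), and the session is torn down moments later on the same con pointer, which reads as routine connection churn rather than an auth failure. The point of the challenge is replay protection; below is a generic HMAC challenge-response sketch that only illustrates the shape of such an exchange, cephx's actual protocol and wire format differ.

import hashlib, hmac, os

SECRET = b"stand-in shared key"            # illustrative only, not a cephx key

def handle_auth_request(conn_state):
    # server side: attach a fresh one-time challenge to the connection
    conn_state["challenge"] = os.urandom(16)
    return conn_state["challenge"]

def answer(challenge, nonce):
    # client side: prove key possession over challenge plus its own nonce
    return hmac.new(SECRET, challenge + nonce, hashlib.sha256).digest()

conn = {}
chal = handle_auth_request(conn)
nonce = os.urandom(16)
assert hmac.compare_digest(answer(chal, nonce), answer(chal, nonce))
print("challenge answered; a replayed request without the key would not verify")
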
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:43.000832+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:44.001083+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:45.001649+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535347 data_alloc: 234881024 data_used: 24195072
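
Two details changed together in this round: kv_alloc grew from 1191182336 to 1207959552 bytes, and the second High Pri Pool Ratio dropped from 0.056338 to 0.0555556. In 16 MiB chunks that is 71 versus 72 chunks of kv cache, and the printed ratios match 4/71 and 4/72 exactly (likewise 0.285714 is exactly 2/7), so the ratio reads as a fixed number of high-priority chunks over the pool's current chunk count. Whether the numerator really is a constant four chunks cannot be confirmed from the log alone; the check below only verifies that those fractions reproduce the printed values.

from fractions import Fraction

CHUNK = 16 * 2**20
for kv_alloc, printed in ((1191182336, "0.056338"), (1207959552, "0.0555556")):
    chunks = kv_alloc // CHUNK
    print(f"{chunks} chunks: 4/{chunks} = {float(Fraction(4, chunks)):.6g} "
          f"(log prints {printed})")
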
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:46.002133+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:47.002615+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:48.002957+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:49.003332+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:50.003787+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535347 data_alloc: 234881024 data_used: 24195072
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:51.004244+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a694a000 session 0x55f0a8172960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7e72800 session 0x55f0a75bda40
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6c400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.297913551s of 10.423287392s, submitted: 18
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:52.004867+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7e6c400 session 0x55f0a7ece1e0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:53.005162+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:54.005343+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:55.005808+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283831 data_alloc: 218103808 data_used: 13914112
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:56.006222+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x153870f/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:57.022880+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:58.023704+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:59.024083+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:00.024478+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283831 data_alloc: 218103808 data_used: 13914112
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:01.024903+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x153870f/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:02.025363+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x153870f/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:03.025743+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbd800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.247928619s of 11.414648056s, submitted: 33
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
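
Here the OSD catches up on cluster maps: monclient _renew_subs re-subscribes to osdmap updates, the monitor at v2:192.168.122.100:3300 responds, and "epochs [139,139], i have 138, src has [1,139]" says the message carries epoch 139 only, this OSD is at 138, and the sender holds history back to epoch 1. Applying the increments in order is what steps the osd.1 banner through 139, 140, 141 and 142 over the following lines. A toy version of that catch-up loop (a stand-in, not Ceph's handler; the "src has" span is ignored here):

def handle_osd_map(have, first, last):
    # apply incremental maps in order; epochs at or below `have` are stale
    for epoch in range(max(have + 1, first), last + 1):
        have = epoch          # stand-in for decoding and applying one increment
    return have

have = 138
for first, last in ((139, 139), (139, 140), (141, 141), (142, 142)):
    have = handle_osd_map(have, first, last)
    print("now at epoch", have)

The (139, 140) message arrives when the OSD already has 139, so only epoch 140 is applied, which is exactly the deduplication the "i have" field exists for.
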
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:04.025998+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 139 ms_handle_reset con 0x55f0a6bbd800 session 0x55f0a64145a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 36642816 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b41800
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:05.026429+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320370 data_alloc: 218103808 data_used: 13918208
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 140 ms_handle_reset con 0x55f0a4b41800 session 0x55f0a58874a0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:06.026818+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a9f59400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:07.027155+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 141 ms_handle_reset con 0x55f0a9f59400 session 0x55f0a6b8e960
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x19ada6d/0x1a81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:08.027642+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:09.028041+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153da4a/0x1610000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:10.028461+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296774 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:11.028823+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:12.029223+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:13.029696+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:14.030106+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153da4a/0x1610000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.709212303s of 11.402141571s, submitted: 109
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:15.030607+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: get_auth_request con 0x55f0a5768000 auth_method 0
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:16.031024+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:17.031285+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:18.031763+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:19.032200+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:20.032887+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:21.033371+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:22.034007+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:23.034420+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:24.034669+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:25.035105+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:26.035621+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:27.036119+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:28.036688+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:29.037159+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:30.037628+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:31.038059+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:32.038475+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:33.038943+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:34.039371+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:35.039895+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:36.040151+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:37.040624+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:38.040903+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:39.041165+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:40.041621+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:41.042017+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:42.042378+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:43.042670+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:44.043036+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:45.043306+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:46.043856+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:47.044236+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:48.044444+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:49.044875+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:50.045987+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:51.046441+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:52.046882+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:53.047328+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:54.047766+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:55.048486+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:56.048898+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:57.049282+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:58.049753+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:59.050189+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:00.050719+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:01.051122+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:02.051511+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:03.052015+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:04.052387+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:05.052863+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:06.053238+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:07.053735+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:08.054144+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:09.054637+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:10.055090+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:11.055807+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:12.056207+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:13.056632+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:14.056864+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:15.057219+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:16.057658+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:17.058124+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:18.058639+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:19.058815+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:20.059159+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:21.059439+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:22.059790+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:23.060184+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:24.060363+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:25.060674+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:26.060980+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:27.061337+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:28.061812+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:29.062008+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:30.062200+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:31.062409+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:32.062610+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:33.062784+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 36364288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'config show' '{prefix=config show}'
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:34.062978+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 36208640 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:35.063355+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 36839424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:34:06 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:36.063685+0000)
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:34:06 compute-0 ceph-osd[207705]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:34:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 03 02:34:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1664769515' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 03 02:34:06 compute-0 rsyslogd[188612]: imjournal from <compute-0:ceph-osd>: begin to drop messages due to rate-limiting
Dec 03 02:34:06 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:34:06 compute-0 nova_compute[351485]: 2025-12-03 02:34:06.906 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 03 02:34:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918338627' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 03 02:34:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252527615' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2908467597' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1762531585' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1664769515' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/918338627' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/252527615' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 03 02:34:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703008089' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 03 02:34:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972135823' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 03 02:34:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 03 02:34:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26303607' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 03 02:34:07 compute-0 podman[475733]: 2025-12-03 02:34:07.852140771 +0000 UTC m=+0.110561541 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:34:07 compute-0 podman[475744]: 2025-12-03 02:34:07.878446993 +0000 UTC m=+0.119278177 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:34:07 compute-0 podman[475742]: 2025-12-03 02:34:07.882260931 +0000 UTC m=+0.128198129 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec 03 02:34:07 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15665 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1703008089' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1972135823' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mon[192821]: pgmap v2401: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:08 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/26303607' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mon[192821]: from='client.15665 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15667 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15669 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.690833) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729248690861, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 779, "num_deletes": 250, "total_data_size": 905548, "memory_usage": 919072, "flush_reason": "Manual Compaction"}
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729248698339, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 607284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48625, "largest_seqno": 49403, "table_properties": {"data_size": 603753, "index_size": 1247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9861, "raw_average_key_size": 21, "raw_value_size": 596191, "raw_average_value_size": 1282, "num_data_blocks": 55, "num_entries": 465, "num_filter_entries": 465, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729192, "oldest_key_time": 1764729192, "file_creation_time": 1764729248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 7561 microseconds, and 2664 cpu microseconds.
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.698389) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 607284 bytes OK
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.698407) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.701135) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.701157) EVENT_LOG_v1 {"time_micros": 1764729248701151, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.701173) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 901477, prev total WAL file size 901477, number of live WAL files 2.
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.701751) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303034' seq:72057594037927935, type:22 .. '6D6772737461740032323535' seq:0, type:0; will stop at (end)
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(593KB)], [116(9233KB)]
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729248701792, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 10062702, "oldest_snapshot_seqno": -1}
Dec 03 02:34:08 compute-0 sshd-session[475732]: Invalid user admin123 from 154.113.10.113 port 51434
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 6230 keys, 7052462 bytes, temperature: kUnknown
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729248742959, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 7052462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7015358, "index_size": 20418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 163528, "raw_average_key_size": 26, "raw_value_size": 6907116, "raw_average_value_size": 1108, "num_data_blocks": 805, "num_entries": 6230, "num_filter_entries": 6230, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.743132) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 7052462 bytes
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.745138) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.1 rd, 171.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.0 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(28.2) write-amplify(11.6) OK, records in: 6718, records dropped: 488 output_compression: NoCompression
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.745393) EVENT_LOG_v1 {"time_micros": 1764729248745385, "job": 70, "event": "compaction_finished", "compaction_time_micros": 41217, "compaction_time_cpu_micros": 19998, "output_level": 6, "num_output_files": 1, "total_output_size": 7052462, "num_input_records": 6718, "num_output_records": 6230, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729248745642, "job": 70, "event": "table_file_deletion", "file_number": 118}
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729248746939, "job": 70, "event": "table_file_deletion", "file_number": 116}
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.701656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.747094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.747099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.747101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.747103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:34:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:34:08.747104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:34:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15673 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15672 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:08 compute-0 sshd-session[475732]: Received disconnect from 154.113.10.113 port 51434:11: Bye Bye [preauth]
Dec 03 02:34:08 compute-0 sshd-session[475732]: Disconnected from invalid user admin123 154.113.10.113 port 51434 [preauth]
Dec 03 02:34:09 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15675 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:09 compute-0 nova_compute[351485]: 2025-12-03 02:34:09.318 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:09 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15679 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:09 compute-0 ceph-mon[192821]: from='client.15667 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:09 compute-0 ceph-mon[192821]: from='client.15669 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:09 compute-0 ceph-mon[192821]: from='client.15673 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:09 compute-0 ceph-mon[192821]: from='client.15672 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 03 02:34:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591265849' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15683 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 03 02:34:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175948602' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 03 02:34:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/94084427' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mon[192821]: from='client.15675 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mon[192821]: from='client.15679 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mon[192821]: pgmap v2402: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3591265849' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3175948602' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 03 02:34:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/94084427' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:34:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 03 02:34:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108106552' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:15.396043+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:16.396401+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:17.396706+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:18.397096+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:19.397495+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277924 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:20.397760+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:21.397990+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:22.398320+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:23.398637+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:24.399186+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277924 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:25.399575+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:26.400022+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:27.400434+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:28.400676+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:29.400983+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277924 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:30.401308+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:31.401697+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:32.401990+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:33.402298+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:34.402488+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277924 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:35.402884+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:36.403194+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:37.403408+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:38.403684+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:39.403958+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277924 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:40.404200+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:41.404446+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:42.404856+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:43.405252+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:44.405719+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277924 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:45.406101+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:46.406407+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:47.406770+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:48.407136+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:49.407476+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277924 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:50.407890+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f98a7000/0x0/0x4ffc00000, data 0x210eea7/0x21d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:51.408323+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:52.408695+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 3252224 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:53.409030+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd992bdc00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd992bdc00 session 0x55cd95afde00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0000 session 0x55cd97f26960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0400 session 0x55cd97f261e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 98992128 unmapped: 3244032 heap: 102236160 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0800 session 0x55cd98e91e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0c00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.868255615s of 50.884643555s, submitted: 2
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:54.409295+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0c00 session 0x55cd96e56000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd992bdc00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd992bdc00 session 0x55cd96674d20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0000 session 0x55cd98e82000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0400 session 0x55cd98b030e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0800 session 0x55cd96e56000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320168 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 9314304 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed1000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:55.409514+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed1000 session 0x55cd97f26960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f9440000/0x0/0x4ffc00000, data 0x2573f19/0x263e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 9281536 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:56.409932+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f9440000/0x0/0x4ffc00000, data 0x2573f19/0x263e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f9440000/0x0/0x4ffc00000, data 0x2573f19/0x263e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 9281536 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:57.410288+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 9281536 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd992bdc00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd992bdc00 session 0x55cd95afde00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:58.410519+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 9265152 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0000 session 0x55cd97202780
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:01:59.410992+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0400 session 0x55cd97f33680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323714 data_alloc: 234881024 data_used: 13750272
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 9265152 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0800 session 0x55cd966021e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:00.411201+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed1400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed1800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 9232384 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:01.411374+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 9232384 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:02.411794+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 101523456 unmapped: 8577024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:03.411994+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 8495104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:04.412275+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355387 data_alloc: 234881024 data_used: 18079744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:05.412728+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:06.413055+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:07.413467+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:08.413729+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:09.413987+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355387 data_alloc: 234881024 data_used: 18079744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:10.414199+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:11.414623+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:12.414867+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104644608 unmapped: 5455872 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:13.415132+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:14.415454+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355387 data_alloc: 234881024 data_used: 18079744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:15.415832+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:16.416180+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:17.416463+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:18.416835+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:19.417175+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355387 data_alloc: 234881024 data_used: 18079744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:20.417516+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:21.417821+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 5423104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:22.418138+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:23.418516+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:24.418941+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355387 data_alloc: 234881024 data_used: 18079744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:25.419293+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:26.419674+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:27.420030+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:28.420369+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:29.420612+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355387 data_alloc: 234881024 data_used: 18079744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:30.420801+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:31.421479+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:32.421693+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:33.421981+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:34.422301+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355387 data_alloc: 234881024 data_used: 18079744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 5414912 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:35.422609+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f943f000/0x0/0x4ffc00000, data 0x2573f3c/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 5406720 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:36.422806+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 5406720 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:37.423060+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.507286072s of 43.778175354s, submitted: 50
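The _kv_sync_thread utilization line is the first sign of real write activity in this stretch: over a ~43.8 s window the RocksDB sync thread was idle for ~43.5 s and submitted 50 transactions. A small sketch turning that into an idle percentage and a rough per-transaction cost (wall-clock busy time only, not device latency):

    # Values copied from the _kv_sync_thread line above.
    samples = [
        (43.507286072, 43.778175354, 50),  # (idle seconds, window seconds, txns)
    ]
    for idle, window, submitted in samples:
        busy = window - idle
        print(f"idle {idle / window:.2%}, busy {busy:.3f}s "
              f"for {submitted} txns (~{busy / submitted * 1e3:.1f} ms/txn)")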
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 5578752 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:38.423289+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:39.423465+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 4571136 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1441083 data_alloc: 234881024 data_used: 18161664
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:40.424022+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 4923392 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f899a000/0x0/0x4ffc00000, data 0x3011f3c/0x30dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
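Note that this heartbeat's store_statfs differs from the earlier samples: available shrank while data stored/allocated grew, consistent with the writes the _kv_sync_thread line just reported. A sketch diffing the two samples (hex values copied from the log; field labels as assumed earlier) shows about 10.6 MiB landed on the OSD between them:

    MiB = 2**20
    before = {"avail": 0x4f943f000, "stored": 0x2573f3c, "alloc": 0x263f000}
    after  = {"avail": 0x4f899a000, "stored": 0x3011f3c, "alloc": 0x30dd000}
    for key in before:
        delta = after[key] - before[key]
        print(f"{key:6s} {delta / MiB:+7.2f} MiB")  # avail down, stored/alloc up ~10.6 MiB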
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:41.424227+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 4890624 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:42.424671+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 4890624 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f899a000/0x0/0x4ffc00000, data 0x3011f3c/0x30dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:43.425017+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 4857856 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:44.425256+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 4857856 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440349 data_alloc: 234881024 data_used: 18161664
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f899a000/0x0/0x4ffc00000, data 0x3011f3c/0x30dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:45.425599+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107036672 unmapped: 4849664 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:46.425925+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 4669440 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:47.426184+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:48.426667+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:49.427199+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438209 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:50.427381+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:51.427832+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:52.428182+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:53.428439+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:54.428821+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:55.429996+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438209 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:56.430440+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:57.430932+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:58.431334+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:02:59.431683+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:00.432144+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438209 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:01.432422+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:02.432849+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:03.433096+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:04.433322+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:05.433801+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438209 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:06.434139+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:07.434517+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:08.435078+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:09.435611+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:10.436012+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438209 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.370967865s of 32.851169586s, submitted: 127
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:11.436393+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:12.436789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:13.437246+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:14.437867+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:15.438268+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438385 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:16.438849+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:17.439242+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:18.439762+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:19.440136+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:20.440458+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438385 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:21.440892+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:22.441175+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:23.441704+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:24.442125+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:25.442670+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438385 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:26.443264+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:27.443736+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:28.443985+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:29.444342+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:30.444823+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438385 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 4587520 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:31.445408+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 4587520 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:32.445857+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 4587520 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:33.446074+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 4587520 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:34.446411+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:35.446780+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438385 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:36.447054+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:37.447341+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:38.447734+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:39.448082+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f898c000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:40.448661+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438385 data_alloc: 234881024 data_used: 18165760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:41.449006+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.695907593s of 30.719285965s, submitted: 4
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 4521984 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:42.449389+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 4521984 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:43.449736+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 4521984 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:44.450186+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 4513792 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:45.450723+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 4513792 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:46.451015+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 4505600 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:47.451333+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 4505600 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:48.451731+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 4505600 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:49.452179+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 4505600 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:50.452513+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:51.452930+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:52.453279+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:53.453666+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:54.453878+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:55.454311+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:56.454742+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:57.455119+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:58.455697+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:03:59.455901+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:00.456366+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:01.456773+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:02.457186+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 4497408 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:03.457430+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 4702208 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:04.457793+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:05.458003+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:06.458384+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:07.458764+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:08.459030+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:09.459377+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:10.459789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:11.460194+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets getting new tickets!
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:12.460859+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _finish_auth 0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:12.463038+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 4661248 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:13.461133+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:14.461715+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:15.462096+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:16.462697+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:17.463241+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:18.463726+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:19.464121+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:20.464432+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:21.464747+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:22.465144+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:23.465361+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:24.465767+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:25.466168+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:26.466653+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd98165400 session 0x55cd96b901e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:27.467060+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:28.467835+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:29.468265+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 4653056 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:30.468712+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 4644864 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:31.469114+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:32.469488+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:33.470010+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:34.470389+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:35.470658+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:36.471020+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:37.471388+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:38.471682+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:39.472032+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:40.472370+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:41.472711+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:42.473253+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:43.473644+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:44.474016+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:45.474342+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:46.474861+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 4636672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:47.475249+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:48.475498+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:49.475864+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:50.476246+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:51.476781+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:52.477208+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:53.477735+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:54.477960+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:55.478368+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:56.478834+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:57.479293+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:58.479716+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:04:59.480037+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:00.480322+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:01.480789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:02.481001+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:03.481674+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 4628480 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:04.482008+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 4620288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:05.482461+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 4620288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:06.482835+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 4620288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:07.483256+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 4620288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:08.483561+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 4620288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:09.483935+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 4620288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:10.484102+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 4612096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:11.484717+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:12.485026+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 4612096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:13.485259+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 4612096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:14.485442+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 4612096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:15.485729+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 4612096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:16.486012+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 4612096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:17.486395+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 4612096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:18.486695+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:19.486980+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:20.487348+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:21.487781+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:22.488302+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:23.488789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:24.489136+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:25.489352+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 4603904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:26.489711+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:27.490164+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:28.490488+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:29.490920+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:30.491215+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:31.491517+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:32.491974+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:33.492237+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 4595712 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:34.492766+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 4587520 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:35.493218+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 4587520 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:36.493692+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439121 data_alloc: 234881024 data_used: 18157568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:37.494752+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:38.495158+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:39.495684+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 118.835258484s of 118.860160828s, submitted: 8
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:40.496069+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd984f5800 session 0x55cd966f2960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f8987000/0x0/0x4ffc00000, data 0x3026f3c/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 4579328 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:41.496341+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275149 data_alloc: 234881024 data_used: 10731520
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103153664 unmapped: 8732672 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd99ed0400 session 0x55cd98e821e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:42.496837+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd9634ec00 session 0x55cd964a23c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:43.497215+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:44.497739+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:45.498205+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:46.498686+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:47.499075+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:48.499734+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:49.500059+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:50.500403+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:51.500809+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:52.501233+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:53.501662+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:54.501981+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:55.502236+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:56.502417+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:57.502684+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:58.503034+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:05:59.503359+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:00.503735+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:01.504112+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:02.504730+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:03.505000+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:04.505330+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:05.505790+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:06.506286+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:07.506750+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:08.507159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:09.507621+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:10.507937+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:11.508261+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:12.508752+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:13.509017+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:14.509694+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:15.510050+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:16.510290+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:17.510814+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:18.511192+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:19.511618+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:20.511925+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:21.512260+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:22.512686+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:23.512993+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:24.513331+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:25.513837+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:26.514156+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:27.514595+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:28.514993+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:29.515351+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:30.515761+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:31.516150+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:32.516627+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:33.516862+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:34.517271+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:35.517573+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:36.517920+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:37.518399+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:38.518758+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:39.518950+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 8716288 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:40.519327+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:41.519903+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:42.520325+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:43.520783+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:44.521180+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:45.521660+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:46.521994+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:47.522461+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:48.522858+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:49.523229+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:50.523902+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:51.524234+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:52.524722+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:53.525196+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:54.525641+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:55.525898+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:56.526363+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:57.526790+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:58.527135+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:06:59.527490+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:00.527883+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:01.528230+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:02.528690+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:03.529091+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:04.529443+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:05.529750+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:06.530113+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:07.530661+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:08.530917+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:09.531280+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:10.531769+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:11.532169+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:12.532672+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:13.533035+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:14.533424+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:15.533844+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:16.534209+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:17.534519+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 8708096 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:18.535030+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:19.535262+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:20.535667+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:21.536034+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:22.536388+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:23.536696+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:24.536941+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:25.537299+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:26.537756+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:27.538205+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:28.538687+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:29.539073+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:30.539469+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:31.539696+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:32.539968+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:33.540273+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:34.540720+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:35.540925+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:36.541294+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274277 data_alloc: 234881024 data_used: 10727424
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:37.541739+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984e000/0x0/0x4ffc00000, data 0x2165eb7/0x222f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:38.542067+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:39.542393+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:40.542881+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:41.543203+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 8699904 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 121.069267273s of 121.250671387s, submitted: 28
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd98502400 session 0x55cd966741e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd98eac000 session 0x55cd98b03a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd98eac400 session 0x55cd98b023c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273019 data_alloc: 234881024 data_used: 10723328
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:42.543658+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 11960320 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4f984f000/0x0/0x4ffc00000, data 0x2165ea7/0x222e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:43.543915+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 ms_handle_reset con 0x55cd984f5800 session 0x55cd9668f860
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:44.544301+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:45.544948+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:46.545383+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:47.545738+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161167 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:48.546079+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:49.546442+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:50.546782+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:51.547106+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:52.547424+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161167 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:53.547815+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:54.548116+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:55.548442+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:56.548864+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:57.549400+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161167 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:58.549779+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:07:59.550051+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:00.550416+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:01.550767+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:02.551144+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161167 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:03.551615+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:04.552011+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:05.552347+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:06.552883+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:07.553393+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161167 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:08.553851+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:09.554402+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:10.554856+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:11.555256+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:12.555665+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161167 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:13.555983+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:14.556351+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:15.556772+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:16.557136+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:17.557643+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161167 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:18.558043+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:19.558387+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:20.558854+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.575115204s of 39.815700531s, submitted: 37
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:21.559215+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa23a000/0x0/0x4ffc00000, data 0x177de26/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:22.559682+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162981 data_alloc: 218103808 data_used: 5865472
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:23.560016+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:24.560374+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:25.560792+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98502400 session 0x55cd97f32d20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:26.561159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa235000/0x0/0x4ffc00000, data 0x177f9d6/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:27.561688+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168651 data_alloc: 218103808 data_used: 5873664
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:28.562039+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa235000/0x0/0x4ffc00000, data 0x177f9d6/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:29.562422+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:30.562880+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:31.563149+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:32.563786+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa235000/0x0/0x4ffc00000, data 0x177f9d6/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168651 data_alloc: 218103808 data_used: 5873664
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:33.564020+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:34.564440+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98eac000 session 0x55cd98e992c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd99ed0400 session 0x55cd97f501e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa235000/0x0/0x4ffc00000, data 0x177f9d6/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:35.564738+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 12328960 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.506047249s of 14.546795845s, submitted: 6
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd9732a400 session 0x55cd98b1e5a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd9732a400 session 0x55cd98e832c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:36.564993+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd984f5800 session 0x55cd98e91a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98502400 session 0x55cd966f25a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98eac000 session 0x55cd964a34a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd99ed0400 session 0x55cd97f66f00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 12247040 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd9732a400 session 0x55cd97f665a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd984f5800 session 0x55cd98ab3680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:37.565337+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98502400 session 0x55cd98ad30e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178418 data_alloc: 218103808 data_used: 5877760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 12247040 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:38.565765+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98eac000 session 0x55cd981ccf00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 12247040 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:39.566178+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd9732a800 session 0x55cd981baf00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1e9000/0x0/0x4ffc00000, data 0x17cc9d6/0x1895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 12247040 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:40.566507+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd9732a400 session 0x55cd96be92c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd984f5800 session 0x55cd973a7e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99581952 unmapped: 12304384 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:41.566754+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99581952 unmapped: 12304384 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:42.566950+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179684 data_alloc: 218103808 data_used: 5877760
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99598336 unmapped: 12288000 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:43.567174+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:44.567458+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x17cc9e6/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:45.567660+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:46.567929+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:47.568361+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181764 data_alloc: 218103808 data_used: 6098944
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:48.568948+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:49.569162+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:50.569633+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x17cc9e6/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:51.569843+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:52.570240+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181764 data_alloc: 218103808 data_used: 6098944
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x17cc9e6/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:53.570634+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:54.571002+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:55.571219+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x17cc9e6/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:56.571790+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:57.572325+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x17cc9e6/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181764 data_alloc: 218103808 data_used: 6098944
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:58.572691+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 12435456 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:08:59.573049+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 12435456 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:00.573299+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 12435456 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:01.573524+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 12435456 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:02.573790+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x17cc9e6/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181764 data_alloc: 218103808 data_used: 6098944
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 12435456 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.011846542s of 27.149738312s, submitted: 21
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98502400 session 0x55cd98ab2f00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98eac000 session 0x55cd96b901e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98795400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:03.573996+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 12591104 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:04.574400+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 ms_handle_reset con 0x55cd98795400 session 0x55cd98e82d20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 12591104 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:05.574834+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 12591104 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:06.575272+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa236000/0x0/0x4ffc00000, data 0x177f9d6/0x1848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 12591104 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:07.575792+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171799 data_alloc: 218103808 data_used: 5881856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 12591104 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:08.576132+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 129 ms_handle_reset con 0x55cd9732a400 session 0x55cd981bbe00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99303424 unmapped: 12582912 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:09.576391+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99303424 unmapped: 12582912 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:10.576741+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99303424 unmapped: 12582912 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:11.577100+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7458 writes, 29K keys, 7458 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7458 writes, 1633 syncs, 4.57 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 915 writes, 2916 keys, 915 commit groups, 1.0 writes per commit group, ingest: 2.84 MB, 0.00 MB/s
                                            Interval WAL: 915 writes, 383 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99303424 unmapped: 12582912 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:12.577479+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fa234000/0x0/0x4ffc00000, data 0x1781574/0x1849000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176546 data_alloc: 218103808 data_used: 5885952
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99303424 unmapped: 12582912 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:13.577792+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99311616 unmapped: 12574720 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:14.577978+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99311616 unmapped: 12574720 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:15.578161+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99311616 unmapped: 12574720 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:16.578621+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99311616 unmapped: 12574720 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:17.579051+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.726830482s of 14.959962845s, submitted: 38
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179520 data_alloc: 218103808 data_used: 5885952
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 12550144 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:18.579427+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fa231000/0x0/0x4ffc00000, data 0x1782fd7/0x184c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: get_auth_request con 0x55cd96bc0400 auth_method 0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 12550144 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:19.579914+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 12550144 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:20.580291+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12525568 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:21.580798+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12525568 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:22.581183+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182339 data_alloc: 218103808 data_used: 5890048
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fa230000/0x0/0x4ffc00000, data 0x178300a/0x184e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12525568 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:23.581670+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fa230000/0x0/0x4ffc00000, data 0x178300a/0x184e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 131 ms_handle_reset con 0x55cd984f5800 session 0x55cd95afef00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:24.582076+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa22c000/0x0/0x4ffc00000, data 0x1784b87/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:25.582447+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:26.582784+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa22c000/0x0/0x4ffc00000, data 0x1784b87/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:27.583205+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186473 data_alloc: 218103808 data_used: 5898240
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:28.583673+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:29.584103+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:30.584517+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 12517376 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:31.584938+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.671102524s of 13.774903297s, submitted: 21
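_kv_sync_thread is the BlueStore thread that batches and commits RocksDB transactions, and its utilization line is directly computable: idle 13.671 s of a 13.775 s window leaves it well under 1% busy for the 21 transactions it committed:

    idle, period, submitted = 13.671102524, 13.774903297, 21
    busy = period - idle
    print(f"busy {busy:.3f}s of {period:.3f}s = {busy / period:.2%}, "
          f"{submitted / period:.2f} commits/s")
    # busy 0.104s of 13.775s = 0.75%, 1.52 commits/s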
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
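handle_osd_map is the cluster-map catch-up path: this OSD holds epoch 131, the peer (here the monitor it just messaged) sent the range [132,132] and advertises [1,132] on hand, so exactly one new map is applied and the OSD advances to 132; the heartbeat prefix below flips from "osd.0 131" to "osd.0 132" accordingly. The bookkeeping reduces to a range computation:

    # Sketch of the epoch arithmetic behind "handle_osd_map epochs [first,last],
    # i have N": apply every map newer than the one held, oldest first.
    def maps_to_apply(have, first, last):
        return range(max(first, have + 1), last + 1)

    print(list(maps_to_apply(have=131, first=132, last=132)))  # [132]
    print(list(maps_to_apply(have=132, first=132, last=133)))  # [133], matching the
                                                               # later handle_osd_map line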
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99385344 unmapped: 12500992 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:32.585268+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 132 ms_handle_reset con 0x55cd98502400 session 0x55cd95afd4a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa22d000/0x0/0x4ffc00000, data 0x1784b87/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188294 data_alloc: 218103808 data_used: 5906432
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 11452416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:33.585731+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa22a000/0x0/0x4ffc00000, data 0x1786725/0x1852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 11452416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:34.586086+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 11452416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:35.586485+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 11452416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:36.586899+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 11452416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:37.587272+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa22a000/0x0/0x4ffc00000, data 0x1786725/0x1852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190167 data_alloc: 218103808 data_used: 5906432
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:38.587756+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:39.588067+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:40.588453+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:41.588992+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:42.589297+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190167 data_alloc: 218103808 data_used: 5906432
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:43.589515+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:44.589883+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:45.590268+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:46.590728+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:47.591136+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:48.591457+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:49.591857+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:50.592224+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:51.592668+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:52.592966+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:53.593276+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99401728 unmapped: 12484608 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:54.593675+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 12476416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:55.594038+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 12476416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:56.594407+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 12476416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:57.594760+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 12476416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:58.595164+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 12476416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:09:59.595497+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 12460032 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:00.595736+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 12460032 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:01.595951+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 12460032 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:02.596297+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 12460032 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:03.596474+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 12460032 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:04.596859+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99426304 unmapped: 12460032 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:05.597129+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:06.597470+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:07.597989+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:08.598413+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:09.598721+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:10.599104+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:11.599465+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:12.599830+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:13.600234+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:14.600965+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:15.601408+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:16.601748+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:17.602154+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:18.602659+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:19.602893+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:20.603292+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:21.603772+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:22.604159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:23.604610+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:24.604998+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:25.605353+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:26.605838+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:27.606293+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:28.606789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:29.607356+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12451840 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:30.607732+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:31.608116+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:32.608501+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:33.608913+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:34.609328+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:35.609786+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:36.610124+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:37.610520+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:38.610932+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:39.611292+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:40.611498+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:41.611925+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:42.612276+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:43.612457+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:44.612817+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:45.613098+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:46.613411+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:47.613827+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:48.614156+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa228000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190487 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:49.614721+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 12443648 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 77.927276611s of 78.105461121s, submitted: 46
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:50.615151+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99459072 unmapped: 12427264 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fa229000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:51.615497+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99483648 unmapped: 12402688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:52.615962+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:53.616287+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:54.616756+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:55.617099+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:56.617502+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:57.618049+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 12378112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:58.618369+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:10:59.618942+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:00.619357+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:01.619774+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:02.620104+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:03.621104+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:04.621732+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:05.622102+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:06.622435+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:07.622807+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:08.623129+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:09.623356+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:10.623805+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:11.624198+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:12.624445+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:13.624759+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:14.624969+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:15.625302+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:16.625789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:17.626200+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:18.626700+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:19.627017+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:20.627318+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:21.627759+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:22.628201+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:23.628767+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:24.629146+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:25.629735+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:26.630113+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:27.630515+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:28.630973+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:29.631293+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:30.631685+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:31.632019+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:32.632512+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:33.633105+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 12369920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:34.633461+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:35.633752+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:36.634101+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:37.634518+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:38.634984+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:39.635394+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:40.635851+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:41.636251+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:42.636741+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:43.637085+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:44.637281+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:45.637725+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:46.638001+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:47.638449+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:48.638780+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:49.639073+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:50.639428+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:51.639777+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:52.640105+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:53.640414+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:54.640741+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:55.641040+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:56.641294+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:57.641828+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12361728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:58.642078+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:11:59.642321+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:00.642930+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:01.643277+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:02.643651+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:03.644022+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:04.644411+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:05.644783+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:06.645152+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:07.645769+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:08.646135+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:09.646509+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:10.647047+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:11.647437+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:12.647773+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:13.648178+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:14.648499+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:15.648784+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:16.649057+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:17.649462+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:18.649768+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:19.650051+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:20.650403+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:21.650807+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:22.651169+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:23.651621+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:24.651914+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:25.652168+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:26.652446+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:27.652816+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:28.653289+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:29.653830+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.197372437s of 99.864547729s, submitted: 106
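
The _kv_sync_thread line is a rolling utilization report: over the last ~99.86 s window the RocksDB sync thread was idle for ~99.20 s while 106 transactions were submitted. Turning that into a busy percentage and a rough per-transaction cost (the per-transaction division is naive, since commits batch):

    idle, window, submitted = 99.197372437, 99.864547729, 106

    busy = window - idle
    print(f"busy {busy:.3f}s of {window:.3f}s = {busy / window:.2%}")
    print(f"~{busy / submitted * 1e3:.1f} ms busy per submitted txn (naive)")
    # busy 0.667s of 99.865s = 0.67%, about 6.3 ms per txn:
    # the kv sync thread is nearly idle, like everything else here.
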
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd99ed1400 session 0x55cd964a25a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd99ed1800 session 0x55cd98b03860
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:30.654150+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99565568 unmapped: 12320768 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:31.654506+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd9732a400 session 0x55cd981bb860
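
The handle_auth_request / ms_handle_reset pairs trace short-lived inbound sessions: the connection 0x55cd9732a400 that received an auth challenge a few lines up is the one just reset here, so the whole session lasted only a handful of monclient ticks. A sketch that pairs challenges with resets by connection pointer, assuming lines shaped like the ones in this log:

    import re

    LOG = """\
    monclient: handle_auth_request added challenge on 0x55cd9732a400
    osd.0 133 ms_handle_reset con 0x55cd99ed1400 session 0x55cd964a25a0
    osd.0 133 ms_handle_reset con 0x55cd9732a400 session 0x55cd981bb860
    """

    challenged, reset = set(), set()
    for line in LOG.splitlines():
        if m := re.search(r"added challenge on (0x[0-9a-f]+)", line):
            challenged.add(m.group(1))
        if m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line):
            reset.add(m.group(1))

    print("challenged then reset:", sorted(challenged & reset))
    # ['0x55cd9732a400']: the only connection seen on both sides of the pair
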
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:32.655072+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:33.655453+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042487 data_alloc: 218103808 data_used: 1507328
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:34.655919+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
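
Between heartbeats the statfs numbers move: data stored drops from 0x1788188 to 0x8700f3 here (and to 0xc4072 further down) while available grows from 0x4f9e19000 to 0x4fad32000, suggesting objects are being deleted from this OSD during the window. Differencing the two snapshots, under the same field-order assumption as above:

    snap_a = {"available": 0x4f9e19000, "data_stored": 0x1788188}
    snap_b = {"available": 0x4fad32000, "data_stored": 0x8700f3}

    for key in snap_a:
        delta = snap_b[key] - snap_a[key]
        print(f"{key:>11}: {delta / 2**20:+7.2f} MiB")
    # available: +15.10 MiB, data_stored: -15.09 MiB; the space freed
    # matches the deleted data almost byte for byte.
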
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:35.656348+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:36.656762+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:37.657416+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:38.657772+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042487 data_alloc: 218103808 data_used: 1507328
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:39.658289+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:40.659026+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:41.659444+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:42.659830+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:43.660232+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:44.660682+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042487 data_alloc: 218103808 data_used: 1507328
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:45.661081+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:46.661477+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:47.662261+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:48.662806+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd98eac800 session 0x55cd95cfd860
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.159221649s of 19.462923050s, submitted: 48
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd98ead800 session 0x55cd98e99c20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:49.663159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042415 data_alloc: 218103808 data_used: 1507328
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 15548416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:50.663506+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd984f5800 session 0x55cd98e82960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:51.664129+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:52.664578+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:53.664958+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:54.665245+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:55.665704+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:56.666095+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:57.666622+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:58.666966+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:59.667322+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:00.667796+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:01.668102+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:02.668422+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:03.668721+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:04.669091+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:05.669455+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:06.669753+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:07.670190+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:08.670490+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:09.670806+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:10.671146+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:11.671382+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:12.671772+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:13.672115+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:14.672888+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:15.673140+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:16.673520+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:17.674013+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:18.675072+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:19.675326+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:20.675729+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:21.676041+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:22.676350+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:23.676784+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:24.677096+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:25.677359+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:26.677710+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:27.678087+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:28.678486+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:29.678756+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:30.679202+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:31.679500+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:32.679836+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:33.681386+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:34.681833+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:35.682213+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:36.682586+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:37.683036+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:38.683364+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:39.683740+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:40.684152+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:41.684469+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:42.684837+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:43.685236+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:44.686411+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:45.686752+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:46.687109+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:47.687620+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:48.687928+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:49.688313+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:50.688837+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:51.689159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:52.689756+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:53.690154+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:54.690626+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:55.690982+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:56.691287+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:57.692009+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:58.692369+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:59.692703+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:00.693175+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:01.693634+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:02.694068+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:03.694644+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:04.694896+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:05.695256+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:06.695763+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:07.696209+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:08.696721+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:09.697119+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:10.697479+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:11.697865+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:12.698232+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:13.698463+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:14.698836+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 85.413833618s of 85.549705505s, submitted: 21
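
The _kv_sync_thread line is BlueStore's periodic report on the thread that commits RocksDB transactions: over an 85.5 s window it sat idle for 85.4 s and handled 21 submissions, i.e. essentially no write load. The same message near the end of this section (idle 9.374081612s of 10.282898903s, submitted: 16 and then 125) shows the load picking up. The arithmetic, spelled out:

    for idle, window, submitted in ((85.413833618, 85.549705505, 21),
                                    (9.374081612, 10.282898903, 125)):
        busy = window - idle
        print(f"busy {busy:6.3f}s of {window:6.3f}s "
              f"({busy / window:6.2%}), {submitted / window:5.2f} submits/s")
    # ~0.16% busy at 0.25 submits/s, then ~8.8% busy at 12.2 submits/s
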
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974456 data_alloc: 218103808 data_used: 217088
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 33267712 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:15.699249+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 134 ms_handle_reset con 0x55cd98502400 session 0x55cd99c66d20
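
The _renew_subs / _send_mon_message pair just above is the monclient refreshing its subscriptions with mon.compute-0, and the handle_osd_map line is the payoff: the OSD holds epoch 133, the sender offers the range [134,134] out of its full history [1,134], so exactly one incremental map is applied and the OSD starts logging as "osd.0 134". The same exchange repeats a few lines below for epoch 135. A sketch of the range bookkeeping the message implies (a hypothetical helper, not Ceph's handle_osd_map):

    def epochs_to_apply(have, first, last):
        """Epochs still needed from a message advertising [first, last]."""
        return range(max(have + 1, first), last + 1)

    print(list(epochs_to_apply(have=133, first=134, last=134)))  # [134]
    print(list(epochs_to_apply(have=134, first=134, last=135)))  # [135]
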
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:16.699853+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 134 heartbeat osd_stat(store_statfs(0x4face2000/0x0/0x4ffc00000, data 0x8c4082/0x98c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:17.700409+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9729e780
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:18.700954+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:19.701456+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:20.701878+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:21.702227+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:22.702657+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:23.703000+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:24.703358+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:25.703772+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:26.704187+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:27.704475+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:28.704820+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:29.705221+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:30.705777+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:31.706093+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:32.706466+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:33.706843+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:34.707089+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:35.707735+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:36.708152+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:37.708632+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:38.708860+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:39.709249+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:40.709736+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:41.720607+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:42.720963+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:43.721293+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:44.721609+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:45.721992+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:46.723180+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:47.725002+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:48.726607+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:49.728324+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:50.730063+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:51.731681+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:52.732421+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:53.732767+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:54.733114+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:55.733436+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:56.733776+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:57.734204+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:58.734678+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:59.735085+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:00.735452+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:01.735822+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:02.736057+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:03.736410+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:04.736810+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:05.737248+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:06.737761+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.784358978s of 51.954193115s, submitted: 16
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd96be94a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a7baf00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:07.738183+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 26484736 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:08.738476+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 22454272 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96603a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98ead800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98ead800 session 0x55cd981cc1e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9a7bbe00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a7bb680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd981bbe00
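
From here the log is dominated by handle_auth_request / ms_handle_reset pairs: each incoming connection is issued an auth challenge and its previous session object is torn down, plausibly the expected reconnect churn right after the osdmap epoch changes. The same few connection addresses (0x55cd98502400, 0x55cd9732a400, and so on) keep recurring, most likely because the allocator hands back freed connection objects, so pointer reuse does not by itself mean the same peer. Pairing the two message types by pointer makes the reuse visible; a sketch over a few lines quoted from this section:

    import re
    from collections import Counter

    lines = [
        "monclient: handle_auth_request added challenge on 0x55cd98502400",
        "osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd981bbe00",
        "monclient: handle_auth_request added challenge on 0x55cd98eac800",
        "osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd981bab40",
        "monclient: handle_auth_request added challenge on 0x55cd98502400",
    ]

    seen = Counter()
    for entry in lines:
        m = re.search(r"(?:challenge on|con) (0x[0-9a-f]+)", entry)
        if m:
            seen[m.group(1)] += 1
    for con, n in seen.most_common():
        print(con, "appeared", n, "times")  # 0x55cd98502400 appears 3 times
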
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:09.739269+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd981bab40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed1400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99ed1400 session 0x55cd9a64fc20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9a64ed20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 102481920 unmapped: 26189824 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9dee000/0x0/0x4ffc00000, data 0x17b577c/0x1880000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a64f2c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a64fe00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96be90e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac000 session 0x55cd98ab34a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260267 data_alloc: 218103808 data_used: 7045120
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:10.739687+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd98ab2f00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd96707680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:11.739998+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:12.740302+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:13.740797+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd967074a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9330000/0x0/0x4ffc00000, data 0x22727de/0x233e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:14.741158+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96707a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99dffc00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99dffc00 session 0x55cd981cc1e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd981cd0e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a7bbc20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a7ba3c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd9a7baf00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99dff000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99dff000 session 0x55cd96b90b40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd971a10e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:15.741629+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313159 data_alloc: 218103808 data_used: 7045120
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9284000/0x0/0x4ffc00000, data 0x231d807/0x23ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd98b03860
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd966754a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105086976 unmapped: 23584768 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd966745a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de000 session 0x55cd98ab3a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:16.741889+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 23224320 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.374081612s of 10.282898903s, submitted: 125
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:17.747832+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 23224320 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:18.748232+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd97f26960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96e56000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105472000 unmapped: 23199744 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98b030e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd98e82000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991dec00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991dec00 session 0x55cd96674d20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df000 session 0x55cd95cfdc20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a7bab40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:19.748584+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd99c67e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98e99e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd964d92c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 26951680 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:20.748896+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409345 data_alloc: 218103808 data_used: 7049216
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 26869760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82ea000/0x0/0x4ffc00000, data 0x32b58b2/0x3384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98ab21e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:21.749340+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 26902528 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd95affa40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:22.749907+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd98b032c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 26902528 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df000 session 0x55cd98b03e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:23.750095+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105562112 unmapped: 26787840 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:24.750607+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 26779648 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:25.750909+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e9000/0x0/0x4ffc00000, data 0x32b58c2/0x3385000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439075 data_alloc: 234881024 data_used: 11091968
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105865216 unmapped: 26484736 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:26.751306+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 25141248 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:27.751792+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 25141248 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e9000/0x0/0x4ffc00000, data 0x32b58c2/0x3385000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:28.752009+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107724800 unmapped: 24625152 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991dfc00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.870912552s of 12.140001297s, submitted: 55
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991dfc00 session 0x55cd9668fc20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:29.755323+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 21250048 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:30.755508+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524220 data_alloc: 234881024 data_used: 22650880
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 18481152 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:31.755765+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 14761984 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:32.756003+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 12640256 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:33.756388+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e8000/0x0/0x4ffc00000, data 0x32b58e5/0x3386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:34.756758+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:35.757111+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1570620 data_alloc: 251658240 data_used: 29188096
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:36.757323+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd964d9c20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:37.757755+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 15228928 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98b02b40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8da5000/0x0/0x4ffc00000, data 0x27f8883/0x28c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:38.757982+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 16285696 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:39.758246+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 15892480 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:40.758513+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1488350 data_alloc: 251658240 data_used: 29421568
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 12738560 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:41.758790+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 12738560 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:42.758981+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 12738560 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.953304291s of 14.133977890s, submitted: 40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98aec780
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:43.759137+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8da5000/0x0/0x4ffc00000, data 0x27f8883/0x28c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,1])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 16973824 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd958bbe00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:44.759320+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:45.759773+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332442 data_alloc: 234881024 data_used: 19333120
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:46.760211+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:47.760773+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:48.768312+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:49.768581+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:50.768825+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332442 data_alloc: 234881024 data_used: 19333120
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:51.769167+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:52.769884+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:53.770193+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:54.770437+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:55.770802+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332442 data_alloc: 234881024 data_used: 19333120
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:56.771001+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.753718376s of 13.965150833s, submitted: 43
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:57.771241+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 15818752 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:58.771470+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 15818752 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:59.771808+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:00.772004+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362744 data_alloc: 234881024 data_used: 19480576
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:01.772284+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:02.772463+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:03.772670+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f96e8000/0x0/0x4ffc00000, data 0x1eb17ee/0x1f7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df000 session 0x55cd98b030e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd981ae1e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd97f32d20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 11501568 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd981cdc20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:04.772956+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd973a63c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6e000 session 0x55cd9729f680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 21823488 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:05.773480+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525693 data_alloc: 234881024 data_used: 20959232
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8494000/0x0/0x4ffc00000, data 0x310c850/0x31da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:06.773851+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:07.774202+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f840d000/0x0/0x4ffc00000, data 0x3193850/0x3261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:08.774860+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f840d000/0x0/0x4ffc00000, data 0x3193850/0x3261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:09.775248+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd97f330e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f840d000/0x0/0x4ffc00000, data 0x3193850/0x3261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119652352 unmapped: 21094400 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd9668fa40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:10.775678+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.796039581s of 13.633616447s, submitted: 196
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522041 data_alloc: 234881024 data_used: 20971520
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 20938752 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:11.776026+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd96431c20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96db4d20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 20930560 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:12.776192+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 20930560 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:13.776493+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83ed000/0x0/0x4ffc00000, data 0x31b2873/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 20930560 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83ed000/0x0/0x4ffc00000, data 0x31b2873/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:14.776939+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119824384 unmapped: 20922368 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:15.777269+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523870 data_alloc: 234881024 data_used: 20971520
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119824384 unmapped: 20922368 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:16.777747+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119824384 unmapped: 20922368 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:17.778698+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:18.779100+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e3000/0x0/0x4ffc00000, data 0x31bc873/0x328b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:19.779318+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e3000/0x0/0x4ffc00000, data 0x31bc873/0x328b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:20.779824+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523894 data_alloc: 234881024 data_used: 20975616
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:21.780233+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 20840448 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:22.780695+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 18710528 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:23.781100+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.950662613s of 13.055603981s, submitted: 12
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:24.781491+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:25.781890+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e0000/0x0/0x4ffc00000, data 0x31bf873/0x328e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1582690 data_alloc: 251658240 data_used: 28319744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:26.782282+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e0000/0x0/0x4ffc00000, data 0x31bf873/0x328e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:27.782671+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:28.782886+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:29.783218+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:30.783393+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1582998 data_alloc: 251658240 data_used: 28319744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:31.783764+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:32.784215+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83de000/0x0/0x4ffc00000, data 0x31c0873/0x328f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:33.784826+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.107700348s of 10.130168915s, submitted: 3
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:34.785190+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:35.785495+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1582646 data_alloc: 251658240 data_used: 28319744
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 16941056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:36.785866+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83dd000/0x0/0x4ffc00000, data 0x31c1873/0x3290000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 16941056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:37.786159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83dd000/0x0/0x4ffc00000, data 0x31c1873/0x3290000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 16932864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:38.786490+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 16932864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:39.786739+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 16932864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:40.787222+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585242 data_alloc: 251658240 data_used: 28307456
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 16760832 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:41.787685+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:42.787946+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83de000/0x0/0x4ffc00000, data 0x31c1873/0x3290000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:43.788405+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:44.788836+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.781484604s of 10.835879326s, submitted: 20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:45.789238+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1583282 data_alloc: 251658240 data_used: 28307456
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd98ab2f00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a7bbc20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 16744448 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:46.789672+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d8000/0x0/0x4ffc00000, data 0x31c7873/0x3296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,3])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 19513344 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:47.789981+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9729f0e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd96602000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd96be90e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 19513344 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:48.790268+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a64f2c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 19513344 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:49.791331+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 18448384 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:50.791598+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457656 data_alloc: 234881024 data_used: 21876736
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 18448384 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:51.792114+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8c85000/0x0/0x4ffc00000, data 0x291a873/0x29e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 18432000 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:52.792630+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 18423808 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:53.793010+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 18423808 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:54.793385+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:55.793632+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 18423808 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8c85000/0x0/0x4ffc00000, data 0x291a873/0x29e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459736 data_alloc: 234881024 data_used: 22016000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.302616119s of 11.499516487s, submitted: 34
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:56.793847+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125272064 unmapped: 15474688 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:57.794088+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 14237696 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:58.794383+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 14016512 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:59.794721+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 14016512 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:00.795111+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571568 data_alloc: 234881024 data_used: 23171072
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:01.795467+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:02.795683+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:03.796007+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:04.796313+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:05.796701+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:06.797062+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:07.797946+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:08.798279+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:09.798621+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:10.798894+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:11.799167+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:12.799364+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:13.799584+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:14.799989+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:15.800352+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:16.800764+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:17.801174+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:18.801783+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:19.802266+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:20.802565+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:21.802950+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd98b03e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd98b02b40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd98b030e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98e91a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.372167587s of 25.784019470s, submitted: 101
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98e905a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:22.803143+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 19324928 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7721000/0x0/0x4ffc00000, data 0x3e7e873/0x3f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:23.803518+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 19324928 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:24.803846+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 19324928 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd98e914a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96db50e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd9a64e780
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd96707c20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:25.804131+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96b90f00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71ea000/0x0/0x4ffc00000, data 0x43b5873/0x4484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126304256 unmapped: 18120704 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674037 data_alloc: 234881024 data_used: 23396352
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:26.804382+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125665280 unmapped: 18759680 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96be94a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e8000/0x0/0x4ffc00000, data 0x43b5873/0x4484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:27.804592+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd966f2000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125673472 unmapped: 18751488 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:28.804857+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd981ae1e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 18620416 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd981afa40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:29.805065+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 18612224 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e9000/0x0/0x4ffc00000, data 0x43b5883/0x4485000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:30.805669+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125820928 unmapped: 18604032 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1683651 data_alloc: 234881024 data_used: 24293376
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:31.806067+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6e400 session 0x55cd98e99e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125820928 unmapped: 18604032 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd96707680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:32.806644+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 18407424 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96be9e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.024081230s of 11.396586418s, submitted: 59
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:33.806992+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96602780
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 18407424 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:34.807448+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 18407424 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e4000/0x0/0x4ffc00000, data 0x43ba883/0x448a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:35.807831+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1686890 data_alloc: 234881024 data_used: 24293376
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:36.808347+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:37.808689+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:38.809127+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:39.809309+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e4000/0x0/0x4ffc00000, data 0x43ba883/0x448a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:40.809618+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127647744 unmapped: 16777216 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731510 data_alloc: 251658240 data_used: 30433280
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:41.809844+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 12591104 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:42.810189+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 12582912 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:43.810402+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 12582912 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e3000/0x0/0x4ffc00000, data 0x43bb883/0x448b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:44.810787+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 12582912 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:45.811335+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 12574720 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1749430 data_alloc: 251658240 data_used: 33026048
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:46.812113+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 12574720 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.432714462s of 13.512619019s, submitted: 11
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:47.812600+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e3000/0x0/0x4ffc00000, data 0x43bb883/0x448b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132661248 unmapped: 11763712 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:48.813016+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 10911744 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd98e834a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6e800 session 0x55cd96602960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:49.813220+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd98b023c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f771a000/0x0/0x4ffc00000, data 0x3e84883/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:50.813650+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1705750 data_alloc: 251658240 data_used: 33021952
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:51.813824+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:52.814038+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:53.814445+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f771a000/0x0/0x4ffc00000, data 0x3e84883/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:54.814776+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:55.815192+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9668fa40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd96db43c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1705582 data_alloc: 251658240 data_used: 33034240
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:56.815371+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98ab34a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:57.815743+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:58.815966+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:59.816808+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:00.817184+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517145 data_alloc: 234881024 data_used: 25182208
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:01.817786+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:02.818214+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:03.818735+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:04.819130+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:05.819709+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517145 data_alloc: 234881024 data_used: 25182208
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:06.820115+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:07.820464+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:08.820806+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:09.821500+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:10.822043+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:11.822374+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517145 data_alloc: 234881024 data_used: 25182208
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.079788208s of 25.470129013s, submitted: 80
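
The `_kv_sync_thread utilization` line is a duty-cycle report for BlueStore's key/value sync thread: idle 25.08 s of a 25.47 s window means it spent only about 0.39 s (1.5%) actually syncing, across 80 submitted transactions. The arithmetic:

    # Duty cycle of the kv sync thread from the line above.
    idle, window, submitted = 25.079788208, 25.470129013, 80
    busy = window - idle                          # ~0.390 s
    print(f"busy {busy:.3f}s = {busy / window:.1%} of the window")
    print(f"~{busy / submitted * 1000:.1f} ms per submitted txn")
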
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:12.822751+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f864f000/0x0/0x4ffc00000, data 0x2f51811/0x301f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 129949696 unmapped: 14475264 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:13.823961+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 14368768 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:14.824803+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:15.825036+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:16.825439+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558459 data_alloc: 234881024 data_used: 25649152
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e0000/0x0/0x4ffc00000, data 0x32c0811/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:17.825824+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:18.826173+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 14155776 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:19.826392+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 14155776 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd981af2c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd981ae960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:20.826633+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 20692992 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd97f67680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:21.826940+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454260 data_alloc: 234881024 data_used: 16445440
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 37478400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:22.827301+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8354000/0x0/0x4ffc00000, data 0x324d801/0x331a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
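
`handle_osd_map epochs [136,136], i have 135, src has [1,136]` is the OSD map catch-up: the message carries the inclusive epoch range [136,136], this OSD's newest map was 135, and the sender holds the full history [1,136]; after applying it the subsequent lines log as `osd.0 136`, and the same pattern repeats below for epochs 137 and 138. The range logic, reduced to a sketch (the real handler does far more: decode, validate, persist, advance PGs):

    # Illustrative epoch catch-up: which epochs from an incoming
    # message range [first, last] are newer than what we hold.
    def epochs_to_apply(have: int, first: int, last: int) -> list[int]:
        return list(range(max(have + 1, first), last + 1))

    print(epochs_to_apply(have=135, first=136, last=136))  # [136]
    print(epochs_to_apply(have=136, first=136, last=137))  # [137]
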
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.226628304s of 10.548670769s, submitted: 55
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 136 ms_handle_reset con 0x55cd9732a400 session 0x55cd964d94a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 37478400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:23.827722+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 136 ms_handle_reset con 0x55cd984f5800 session 0x55cd972034a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 37257216 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:24.827997+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 136 ms_handle_reset con 0x55cd98502400 session 0x55cd981bab40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 37257216 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:25.828219+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f7783000/0x0/0x4ffc00000, data 0x3e1b403/0x3eeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6e000 session 0x55cd9668e5a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 37224448 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:26.828649+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd9732a400 session 0x55cd98ab3c20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1583793 data_alloc: 234881024 data_used: 16461824
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd984f5800 session 0x55cd966745a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:27.829106+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c34000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:28.829476+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:29.829848+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:30.830112+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:31.830603+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365450 data_alloc: 218103808 data_used: 7061504
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991df800 session 0x55cd981af2c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c34000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:32.830982+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:33.831188+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd98eac800 session 0x55cd964d9e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd9732a400 session 0x55cd964d92c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd984f5800 session 0x55cd964d90e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:34.831603+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c98000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6e000 session 0x55cd95cfd2c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:35.831792+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991de800 session 0x55cd9a7bab40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 33398784 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:36.832010+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c98000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436238 data_alloc: 234881024 data_used: 23932928
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991df400 session 0x55cd97203e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.541071892s of 14.065881729s, submitted: 86
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 30957568 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd9732a400 session 0x55cd964314a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd984f5800 session 0x55cd98e985a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991de800 session 0x55cd98e98960
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:37.832294+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6e000 session 0x55cd98e994a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6f000 session 0x55cd98e99a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f808a000/0x0/0x4ffc00000, data 0x3513f5f/0x35e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131375104 unmapped: 29835264 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:38.832781+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131399680 unmapped: 29810688 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:39.833070+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd9732a400 session 0x55cd98e98d20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 29777920 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd984f5800 session 0x55cd96be9e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:40.833432+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 29777920 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:41.833750+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd96be8b40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585930 data_alloc: 251658240 data_used: 30912512
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6e000 session 0x55cd96be92c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8086000/0x0/0x4ffc00000, data 0x35159c2/0x35e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 29425664 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:42.833980+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 29425664 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:43.834238+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 29425664 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:44.834446+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 26845184 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:45.834696+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 22487040 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:46.834977+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665266 data_alloc: 251658240 data_used: 41414656
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 20824064 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:47.835242+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 20021248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:48.835589+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 20021248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:49.835907+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 20021248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:50.836218+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 19988480 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:51.836473+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679186 data_alloc: 251658240 data_used: 43380736
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 19988480 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:52.837654+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 19988480 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:53.838112+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141230080 unmapped: 19980288 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:54.838510+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:55.839019+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:56.839450+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679506 data_alloc: 251658240 data_used: 43388928
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:57.839891+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:58.840303+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141279232 unmapped: 19931136 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:59.840625+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 19898368 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:00.840977+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 19898368 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:01.841419+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679506 data_alloc: 251658240 data_used: 43388928
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 19898368 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:02.841782+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:03.842089+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:04.842298+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:05.842715+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:06.843050+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679506 data_alloc: 251658240 data_used: 43388928
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:07.843491+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:08.843930+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.587177277s of 31.760829926s, submitted: 25
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:09.844305+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145563648 unmapped: 15646720 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7b92000/0x0/0x4ffc00000, data 0x3a099d2/0x3adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:10.844660+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 15589376 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:11.844915+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9581 writes, 36K keys, 9581 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9581 writes, 2507 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2123 writes, 7531 keys, 2123 commit groups, 1.0 writes per commit group, ingest: 7.42 MB, 0.01 MB/s
                                            Interval WAL: 2123 writes, 874 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1718392 data_alloc: 251658240 data_used: 43724800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:12.845286+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7b8e000/0x0/0x4ffc00000, data 0x3a0d9d2/0x3ae0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:13.845787+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:14.846429+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7b8e000/0x0/0x4ffc00000, data 0x3a0d9d2/0x3ae0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:15.846809+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:16.847652+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1718392 data_alloc: 251658240 data_used: 43724800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:17.848140+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:18.848358+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 155246592 unmapped: 5963776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.473926544s of 10.002105713s, submitted: 160
Dec 03 02:34:11 compute-0 ceph-osd[206633]: mgrc ms_handle_reset ms_handle_reset con 0x55cd963ae000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec 03 02:34:11 compute-0 ceph-osd[206633]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: get_auth_request con 0x55cd99d6f000 auth_method 0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: mgrc handle_mgr_configure stats_period=5
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:19.849174+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6a34000/0x0/0x4ffc00000, data 0x4b5f9d2/0x4c32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 152649728 unmapped: 8560640 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:20.849637+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 8454144 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:21.849836+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 152969216 unmapped: 8241152 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1876214 data_alloc: 268435456 data_used: 45903872
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:22.850120+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a7000/0x0/0x4ffc00000, data 0x4bf49d2/0x4cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153001984 unmapped: 8208384 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a7000/0x0/0x4ffc00000, data 0x4bf49d2/0x4cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:23.850372+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 8175616 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:24.850797+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 8175616 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:25.851122+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 8175616 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a7000/0x0/0x4ffc00000, data 0x4bf49d2/0x4cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:26.851371+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd9ae405a0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153075712 unmapped: 8134656 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1876214 data_alloc: 268435456 data_used: 45903872
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:27.852117+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:28.852334+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:29.852739+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:30.852929+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:31.853115+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871934 data_alloc: 268435456 data_used: 45903872
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:32.853332+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:33.853572+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:34.853801+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:35.854231+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:36.854666+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871934 data_alloc: 268435456 data_used: 45903872
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:37.855105+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:38.855581+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153133056 unmapped: 8077312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:39.856041+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 8069120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:40.856605+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 8069120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:41.857067+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 8069120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871934 data_alloc: 268435456 data_used: 45903872
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:42.857449+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:43.857662+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.369407654s of 25.532297134s, submitted: 23
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:44.858153+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:45.858444+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991df800 session 0x55cd9a7bbe00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd972032c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:46.858680+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663584 data_alloc: 251658240 data_used: 33193984
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:47.859050+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd97f330e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:48.859237+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:49.859792+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:50.860020+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a3f000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:51.860500+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:52.860983+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a3f000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:53.861405+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:54.861626+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a3f000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:55.862063+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:56.862484+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.470028877s of 12.727300644s, submitted: 35
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:57.863023+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:58.863420+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:59.863820+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:00.864368+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:01.864772+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:02.865180+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:03.865692+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:04.865872+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:05.866192+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:06.866631+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:07.866962+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:08.867208+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:09.867649+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:10.867987+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:11.868350+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:12.868706+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:13.869046+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:14.869286+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:15.869719+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:16.870078+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:17.870635+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:18.871000+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:19.871362+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:20.871660+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:21.871897+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:22.872214+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:23.872629+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:24.873025+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:25.873202+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:26.873634+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:27.874063+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:28.874354+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:29.874879+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:30.875303+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:31.875714+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:32.876089+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:33.876515+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:34.877020+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:35.877376+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:36.877749+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:37.878051+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:38.878383+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145031168 unmapped: 16179200 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:39.878803+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145031168 unmapped: 16179200 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:40.879158+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145031168 unmapped: 16179200 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:41.879620+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:42.879909+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99ed0800 session 0x55cd981bb0e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:43.880191+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:44.880497+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:45.880908+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145047552 unmapped: 16162816 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:46.881271+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145047552 unmapped: 16162816 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:47.881671+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145047552 unmapped: 16162816 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:48.882682+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.935058594s of 51.943740845s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 16130048 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:49.882954+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 16097280 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:50.883228+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 16097280 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:51.883622+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 16048128 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:52.884445+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:53.884742+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:54.884933+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:55.885249+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:56.885449+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:57.885957+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:58.886395+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:59.886838+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:00.887066+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:01.887492+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:02.887864+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:03.888308+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:04.888705+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:05.888905+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:06.889290+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:07.889775+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:08.890159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:09.890612+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:10.890996+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:11.891345+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:12.891699+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:13.892180+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:14.892507+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:15.892883+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:16.893146+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:17.893586+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:18.893906+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:19.894225+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:20.894615+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:21.894861+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:22.895227+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:23.896063+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:24.896319+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:25.896490+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:26.896875+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:27.897156+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:28.897359+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:29.897725+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:30.898069+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:31.898407+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:32.898860+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:33.899225+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:34.899616+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 15941632 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:35.899968+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:36.900312+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:37.900745+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:38.901089+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:39.901282+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:40.901673+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:41.901995+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:42.902284+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:43.902514+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 53.815029144s of 54.677520752s, submitted: 132
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145285120 unmapped: 15925248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd9a7bbc20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6e000 session 0x55cd9a7bb2c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd95afcd20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd98e98000
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991df800 session 0x55cd98e99a40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:44.902906+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78eb000/0x0/0x4ffc00000, data 0x3cb1970/0x3d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:45.903425+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:46.903815+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:47.904149+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1681475 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:48.904896+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78eb000/0x0/0x4ffc00000, data 0x3cb1970/0x3d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:49.905317+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:50.905916+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 16326656 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:51.906267+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 16326656 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78eb000/0x0/0x4ffc00000, data 0x3cb1970/0x3d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd96be81e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:52.906590+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144908288 unmapped: 16302080 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1684929 data_alloc: 251658240 data_used: 33243136
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6fc00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd96384800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:53.907419+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:54.907980+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:55.908192+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:56.908385+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:57.908695+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:58.909004+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:59.909347+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:00.909573+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:01.909746+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:02.909977+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:03.910155+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:04.910446+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:05.911135+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:06.911593+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:07.912042+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:08.912368+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:09.912746+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:10.913091+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:11.913417+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:12.913771+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:13.914093+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:14.914417+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:15.914747+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:16.915101+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:17.915648+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:18.915865+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 16252928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:19.916108+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 16252928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:20.916517+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 16252928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:21.916980+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:22.917287+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:23.917694+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:24.918060+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:25.918393+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:26.918774+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:27.919160+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:28.920636+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.713607788s of 44.925628662s, submitted: 29
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150003712 unmapped: 12328960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:29.921298+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150134784 unmapped: 12197888 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:30.921610+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:31.921935+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:32.922234+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afb000/0x0/0x4ffc00000, data 0x4690993/0x4763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778491 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:33.922484+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afb000/0x0/0x4ffc00000, data 0x4690993/0x4763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:34.922794+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:35.923004+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:36.923297+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:37.923635+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778703 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:38.923956+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:39.924202+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:40.924612+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:41.924984+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:42.925277+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778703 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:43.925596+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:44.925956+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:45.926363+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:46.926714+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:47.927013+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778703 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:48.927347+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.849327087s of 20.192100525s, submitted: 87
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:49.927760+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:50.928047+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:51.928749+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:52.930395+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775711 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:53.930915+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:54.931298+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:55.931873+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:56.932290+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:57.933182+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775711 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:58.933632+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:59.933822+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:00.934188+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:01.934726+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.050611496s of 13.057282448s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:02.934970+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:03.935349+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:04.935749+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:05.935951+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:06.936179+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:07.936622+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:08.936907+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:09.937251+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:10.937474+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:11.937704+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:12.938065+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:13.938304+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:14.938486+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:15.938648+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:16.938849+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:17.939275+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:18.939692+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:19.940066+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:20.940351+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:21.940747+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:22.941121+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:23.941515+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:24.941822+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149372928 unmapped: 12959744 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:25.942104+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:26.942494+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:27.942895+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:28.943229+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:29.943673+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:30.943884+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:31.944216+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:32.944401+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:33.944637+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:34.945078+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:35.945450+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:36.945744+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:37.946064+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:38.946441+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:39.946736+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 12943360 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:40.947090+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 12943360 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:41.947314+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 12943360 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:42.947602+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:43.947998+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:44.948214+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:45.948729+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:46.949074+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:47.949690+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:48.949907+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:49.950215+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:50.950633+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:51.951043+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:52.951433+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:53.951885+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:54.952326+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:55.952718+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:56.952939+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:57.953209+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:58.954730+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:59.956616+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:00.958474+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:01.959575+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:02.960283+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:03.960678+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:04.960889+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:05.961594+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:06.962977+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:07.963864+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:08.964276+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:09.964774+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:10.965198+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:11.965658+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:12.966036+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:13.966400+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:14.966629+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:15.966848+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:16.967155+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:17.967426+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:18.967675+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:19.968103+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:20.968585+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:21.968898+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:22.969361+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:23.969630+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779087 data_alloc: 251658240 data_used: 35819520
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:24.969956+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:25.970283+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:26.970745+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 85.276458740s of 85.293502808s, submitted: 2
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:27.971282+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:28.971579+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779271 data_alloc: 251658240 data_used: 35819520
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:29.972100+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:30.972660+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:31.972942+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:32.973329+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:33.973587+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779271 data_alloc: 251658240 data_used: 35819520
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:34.973981+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:35.974385+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:36.974772+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:37.978899+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:38.979163+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779751 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:39.979663+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:40.979967+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:41.980299+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:42.980670+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.860312462s of 15.871120453s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:43.981063+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:44.981468+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:45.981909+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:46.982298+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:47.982751+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:48.983166+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:49.983609+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:50.984244+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:51.984699+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:52.984954+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:53.985362+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:54.985747+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:55.986163+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:56.986616+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.327156067s of 14.337164879s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:57.987116+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:58.987797+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:59.988221+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:00.988641+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:01.989034+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:02.989469+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:03.989765+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:04.990158+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:05.990744+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:06.991092+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:07.991621+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:08.991917+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:09.992321+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:10.992512+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150495232 unmapped: 11837440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:11.992847+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150495232 unmapped: 11837440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:12.993090+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.353301048s of 15.358656883s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:13.993332+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:14.993807+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:15.994254+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:16.994670+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:17.995107+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:18.995520+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:19.995956+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:20.996287+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:21.996614+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:22.996827+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:23.997087+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:24.997324+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:25.997760+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:26.998156+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:27.998667+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:28.998896+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:29.999159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:30.999455+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:31.999694+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:32.999948+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:34.003155+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:35.003472+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:36.003973+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:37.004812+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:38.005160+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:39.005802+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:40.006220+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:41.006594+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:42.007079+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:43.007513+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:44.007983+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:45.008678+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:46.009105+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:47.009490+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:48.010027+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:49.010440+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:50.010826+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:51.011213+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:52.011720+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:53.012138+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:54.012375+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:55.012755+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:56.206939+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:57.207679+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:58.207966+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:59.208433+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:00.208789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:01.209101+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:02.209603+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:03.209990+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:04.210267+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:05.210688+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:06.214741+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:07.216482+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:08.217823+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:09.218262+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:10.219460+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:11.219960+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:12.220357+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:13.220898+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:14.221353+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:15.221747+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:16.222267+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:17.222768+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:18.223711+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:19.224295+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:20.224503+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:21.224803+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:22.225299+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:23.225926+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:24.226629+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:25.226948+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:26.227377+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:27.227874+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:28.228218+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:29.228734+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:30.229181+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:31.229804+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:32.230243+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:33.230661+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:34.231040+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:35.231514+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:36.232020+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:37.232428+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:38.232744+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:39.233170+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:40.233511+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:41.234002+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:42.234400+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:43.234733+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:44.234949+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:45.235322+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:46.235689+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:47.236056+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:48.236388+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:49.236714+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:50.236987+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:51.240392+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:52.240707+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:53.241068+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:54.241466+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:55.241797+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:56.242160+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:57.242370+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:58.242808+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:59.243100+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:00.243458+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:01.243853+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:02.244251+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:03.244745+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:04.244978+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:05.245411+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:06.245770+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:07.245991+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:08.246316+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:09.246729+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:10.246992+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:11.247267+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:12.247676+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:13.248060+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:14.248419+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:15.248766+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149520384 unmapped: 12812288 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:16.249221+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149520384 unmapped: 12812288 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:17.249707+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149520384 unmapped: 12812288 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:18.250028+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:19.250429+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:20.250783+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:21.251113+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:22.251465+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:23.251747+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:24.252103+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:25.252359+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:26.252721+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:27.253071+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:28.253478+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:29.253958+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:30.254466+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:31.255046+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 138.569915771s of 138.576492310s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:32.255402+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:33.255786+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:34.256201+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:35.256993+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780251 data_alloc: 251658240 data_used: 35921920
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:36.257328+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:37.257751+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:38.258203+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:39.258782+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:40.259212+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780435 data_alloc: 251658240 data_used: 35921920
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:41.259606+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:42.260321+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:43.260833+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:44.261321+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:45.261814+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780435 data_alloc: 251658240 data_used: 35921920
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:46.262223+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:47.262481+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:48.262901+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:49.263209+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:50.263750+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149544960 unmapped: 12787712 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780595 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:51.264165+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:52.264568+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:53.264926+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.376976013s of 22.411329269s, submitted: 4
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:54.265252+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:55.265552+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:56.265732+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:57.266058+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:58.266768+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:59.267121+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:00.267504+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:01.267932+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:02.268375+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:03.268791+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:04.269266+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:05.269789+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:06.270147+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:07.270655+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.303033829s of 13.311895370s, submitted: 1
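The _kv_sync_thread line is effectively a duty-cycle report: 13.303033829 s idle out of 13.311895370 s observed gives 13.303033829 / 13.311895370 ≈ 0.9993, so the RocksDB sync thread sat about 99.93% idle and flushed a single submitted transaction in that window, consistent with the near-zero write rates visible everywhere else in this stretch.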
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:08.271154+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:09.271805+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:10.272194+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:11.272618+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:12.273025+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:13.273466+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:14.273867+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:15.274389+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:16.274782+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:17.275182+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:18.275712+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:19.276020+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:20.276435+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:21.276919+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:22.277382+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.354217529s of 15.360720634s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:23.277906+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:24.278300+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:25.278732+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:26.279225+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:27.279651+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:28.280259+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:29.280776+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:30.281179+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:31.281751+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:32.282225+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:33.282823+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:34.283179+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:35.283792+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:36.284081+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:37.284698+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:38.285118+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:39.285630+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:40.286024+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:41.286380+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:42.286803+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:43.287179+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:44.287644+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:45.288160+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:46.288653+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:47.289101+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:48.289438+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:49.289800+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:50.290200+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:51.290512+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:52.290833+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:53.291193+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:54.291611+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 12730368 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:55.292021+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:56.292387+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:57.292740+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:58.293237+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:59.293664+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:00.294035+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:01.294355+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:02.294782+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:03.295058+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149618688 unmapped: 12713984 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:04.295382+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149618688 unmapped: 12713984 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:05.295609+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:06.295811+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:07.296217+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:08.296768+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:09.297152+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:10.297623+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:11.298004+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2810 syncs, 3.67 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 738 writes, 2541 keys, 738 commit groups, 1.0 writes per commit group, ingest: 3.34 MB, 0.01 MB/s
                                            Interval WAL: 738 writes, 303 syncs, 2.44 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
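The dump's derived figures check out against its raw counters: over the 600.0 s interval, 738 WAL writes across 303 syncs is 738 / 303 ≈ 2.44, matching the printed "2.44 writes per sync", and 3.34 MB of ingest over 600 s is about 0.0056 MB/s, which the dump rounds to 0.01 MB/s. A two-line verification in the same vein as the earlier sketches:

    print(round(738 / 303, 2))          # 2.44, as printed for "writes per sync"
    print(f"{3.34 / 600.0:.4f} MB/s")   # 0.0056 MB/s, shown rounded as 0.01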
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:12.298372+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:13.298723+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:14.299054+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:15.299463+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:16.299834+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:17.300193+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:18.300711+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:19.300933+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:20.301649+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:21.302083+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:22.302832+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:23.303439+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:24.304001+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:25.304315+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:26.305638+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:27.306135+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:28.306697+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:29.307036+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:30.307763+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:31.308177+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:32.308379+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:33.308856+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 12648448 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:34.309257+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:35.309781+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:36.310061+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:37.310432+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:38.310897+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:39.311264+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:40.311752+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:41.312142+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:42.312674+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:43.313132+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 12632064 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:44.313839+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 12632064 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:45.314153+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:46.314473+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:47.314823+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:48.315163+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:49.315503+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:50.315993+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:51.316346+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:52.316613+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:53.316902+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:54.317321+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:55.317750+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:56.318100+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:57.318425+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:58.318858+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:59.319144+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:00.319451+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:01.319850+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:02.320230+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 12599296 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:03.320679+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 12599296 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:04.321024+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:05.321422+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:06.321817+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:07.322186+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:08.322674+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:09.323063+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:10.323435+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:11.324021+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:12.324444+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:13.324997+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:14.325414+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:15.325621+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:16.325973+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:17.326351+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:18.326864+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:19.327173+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:20.327626+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:21.327854+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:22.328255+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:23.328646+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:24.328979+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:25.329237+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:26.329501+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:27.329927+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:28.330291+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:29.330657+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:30.331077+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:31.331479+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:32.332075+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:33.332508+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:34.332951+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:35.333363+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:36.333797+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:37.334195+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:38.334760+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:39.335209+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:40.335436+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:41.335781+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 234881024 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:42.336145+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 12533760 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:43.336523+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 12533760 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:44.337025+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 12517376 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:45.337333+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 12509184 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:46.337591+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 234881024 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:47.337940+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:48.338369+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:49.338727+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:50.339040+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 147.989303589s of 147.998840332s, submitted: 1
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:51.339364+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:52.339738+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 12476416 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:53.340128+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149913600 unmapped: 12419072 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:54.340664+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:55.340990+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:56.341442+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:57.341743+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:58.342174+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:59.342512+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:00.342996+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:01.343325+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:02.343738+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:03.344169+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:04.344490+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:05.344903+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:06.345283+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:07.345708+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:08.346080+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:09.346480+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:10.346787+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:11.347132+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:12.347463+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:13.347769+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:14.348126+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:15.348331+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:16.348751+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:17.349087+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:18.349451+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:19.349837+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:20.350199+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:21.350760+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:22.351090+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:23.351439+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:24.351903+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:25.352226+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:26.352644+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:27.352980+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:28.353332+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:29.353732+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:30.354107+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:31.354352+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:32.354784+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:33.355144+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:34.355455+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:35.355767+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:36.356065+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:37.356522+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:38.357142+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:39.357669+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150003712 unmapped: 12328960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:40.357952+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150003712 unmapped: 12328960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6f400 session 0x55cd97f67680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6f800 session 0x55cd9668ef00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:41.358325+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.075469971s of 50.840911865s, submitted: 106
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd98b1f0e0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:42.358738+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447482 data_alloc: 218103808 data_used: 20987904
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:43.359126+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8919000/0x0/0x4ffc00000, data 0x2874973/0x2945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:44.359709+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:45.360134+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:46.360620+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:47.360945+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447482 data_alloc: 218103808 data_used: 20987904
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8919000/0x0/0x4ffc00000, data 0x2874973/0x2945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:48.361329+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:49.361721+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8919000/0x0/0x4ffc00000, data 0x2874973/0x2945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:50.362126+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:51.362491+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6fc00 session 0x55cd97203e00
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd96384800 session 0x55cd964d92c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.292172432s of 10.395132065s, submitted: 21
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:52.362921+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346382 data_alloc: 218103808 data_used: 18604032
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd96e57c20
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:53.363390+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:54.363726+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9451000/0x0/0x4ffc00000, data 0x1d3c950/0x1e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:55.364184+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:56.364688+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:57.365135+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346382 data_alloc: 218103808 data_used: 18604032
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:58.365751+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9451000/0x0/0x4ffc00000, data 0x1d3c950/0x1e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:59.366053+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:00.366484+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:01.366903+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9451000/0x0/0x4ffc00000, data 0x1d3c950/0x1e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:02.367211+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346382 data_alloc: 218103808 data_used: 18604032
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:03.367660+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd96384800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.247656822s of 11.555605888s, submitted: 47
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135667712 unmapped: 26664960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 139 ms_handle_reset con 0x55cd96384800 session 0x55cd96db4b40
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:04.368037+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141271040 unmapped: 37847040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:05.368451+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 139 ms_handle_reset con 0x55cd98165400 session 0x55cd971a03c0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 54657024 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f800
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:06.368758+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f90bc000/0x0/0x4ffc00000, data 0x20d0097/0x21a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 54657024 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 141 ms_handle_reset con 0x55cd99d6f800 session 0x55cd97f51680
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:07.369180+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243710 data_alloc: 218103808 data_used: 7081984
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:08.369752+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:09.370100+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x10d1c78/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:10.370494+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x10d1c78/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:11.370942+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:12.371405+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243710 data_alloc: 218103808 data_used: 7081984
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:13.371828+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.877271652s of 10.184530258s, submitted: 38
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:14.372270+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:15.372773+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:16.373192+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:17.373464+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246012 data_alloc: 218103808 data_used: 7081984
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:18.373897+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:19.374159+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:20.374691+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:21.375066+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:22.375634+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246012 data_alloc: 218103808 data_used: 7081984
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:23.376050+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:24.376446+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:25.376804+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:26.377198+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:27.377710+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246012 data_alloc: 218103808 data_used: 7081984
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:28.378289+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:29.378734+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:30.379648+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:31.380266+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:32.380832+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:33.381295+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:34.381776+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:35.382239+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:36.382475+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:37.382860+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:38.383310+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:39.383735+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:40.383943+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:41.384235+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:42.384857+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:43.385149+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:44.385679+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:45.386061+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:46.386515+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:47.387024+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:48.387461+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:49.387877+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:50.388322+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:51.388751+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:52.389140+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:53.389881+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:54.390286+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:55.390758+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:56.391139+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:57.391891+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:58.392375+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:59.392971+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:00.393381+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:01.393817+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:02.394203+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:03.394743+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:04.395009+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:05.395438+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:06.395842+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:07.396364+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:08.396760+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:09.397274+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:10.397767+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:11.398203+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:12.398977+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:13.399335+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:14.399641+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:15.399826+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:16.400121+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:17.400445+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:18.400871+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:19.401238+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:20.401672+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:21.402033+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:22.402435+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:23.402774+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:24.402966+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:25.403233+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:26.403446+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:27.403671+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:28.403885+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:29.404243+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:30.404463+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:31.405469+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:32.405671+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:33.405838+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:34.406001+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:35.406212+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:36.406634+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:37.406934+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:38.407260+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:34:11 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:34:11 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 54607872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'config show' '{prefix=config show}'
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:39.407452+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 54992896 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:34:11 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:40.407678+0000)
Dec 03 02:34:11 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 54984704 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:34:11 compute-0 ceph-osd[206633]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:34:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 03 02:34:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 03 02:34:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:11 compute-0 ceph-mon[192821]: from='client.15683 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:11 compute-0 ceph-mon[192821]: from='client.15687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2108106552' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 03 02:34:11 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 03 02:34:11 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 03 02:34:11 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 03 02:34:11 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 03 02:34:11 compute-0 nova_compute[351485]: 2025-12-03 02:34:11.908 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 03 02:34:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554818648' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 03 02:34:12 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15703 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:12 compute-0 ceph-mon[192821]: pgmap v2403: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3554818648' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 03 02:34:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 03 02:34:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089328408' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 03 02:34:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 03 02:34:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3112872604' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 03 02:34:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:13 compute-0 ceph-mon[192821]: from='client.15703 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3089328408' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 03 02:34:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3112872604' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 03 02:34:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 03 02:34:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197544235' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 03 02:34:14 compute-0 nova_compute[351485]: 2025-12-03 02:34:14.319 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 03 02:34:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207793359' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 03 02:34:14 compute-0 systemd[1]: Starting Hostname Service...
Dec 03 02:34:14 compute-0 systemd[1]: Started Hostname Service.
Dec 03 02:34:14 compute-0 ceph-mon[192821]: pgmap v2404: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2197544235' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 03 02:34:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2207793359' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 03 02:34:14 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15713 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 03 02:34:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958033081' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 03 02:34:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 03 02:34:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427449169' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 03 02:34:15 compute-0 ceph-mon[192821]: from='client.15713 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/958033081' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 03 02:34:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3427449169' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 03 02:34:16 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15719 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 03 02:34:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025420537' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 03 02:34:16 compute-0 podman[476738]: 2025-12-03 02:34:16.83840981 +0000 UTC m=+0.097910174 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:34:16 compute-0 nova_compute[351485]: 2025-12-03 02:34:16.910 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:16 compute-0 ceph-mon[192821]: pgmap v2405: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2025420537' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 03 02:34:16 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:17 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15725 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Dec 03 02:34:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259040249' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 03 02:34:17 compute-0 ceph-mon[192821]: from='client.15719 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:17 compute-0 ceph-mon[192821]: from='client.15723 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4259040249' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 03 02:34:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Dec 03 02:34:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/476176372' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 03 02:34:18 compute-0 sudo[476995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:18 compute-0 sudo[476995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:18 compute-0 sudo[476995]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15731 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:18 compute-0 sudo[477023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:34:18 compute-0 sudo[477023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:18 compute-0 sudo[477023]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:18 compute-0 sudo[477056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:18 compute-0 sudo[477056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:18 compute-0 sudo[477056]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:18 compute-0 sudo[477090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 02:34:18 compute-0 sudo[477090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15733 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:18 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:34:18 compute-0 ceph-mon[192821]: from='client.15725 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:18 compute-0 ceph-mon[192821]: pgmap v2406: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/476176372' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 03 02:34:19 compute-0 sudo[477090]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:34:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:34:19 compute-0 podman[477165]: 2025-12-03 02:34:19.139437884 +0000 UTC m=+0.124369601 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:34:19 compute-0 podman[477167]: 2025-12-03 02:34:19.143304473 +0000 UTC m=+0.122275131 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0)
Dec 03 02:34:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:19 compute-0 podman[477166]: 2025-12-03 02:34:19.171414727 +0000 UTC m=+0.158466883 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:34:19 compute-0 podman[477168]: 2025-12-03 02:34:19.180167744 +0000 UTC m=+0.152315890 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 02:34:19 compute-0 podman[477164]: 2025-12-03 02:34:19.196450163 +0000 UTC m=+0.178286222 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:34:19 compute-0 sudo[477300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:19 compute-0 sudo[477300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:19 compute-0 sudo[477300]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:19 compute-0 sudo[477343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:34:19 compute-0 sudo[477343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:19 compute-0 sudo[477343]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:19 compute-0 nova_compute[351485]: 2025-12-03 02:34:19.321 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:19 compute-0 sudo[477374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:19 compute-0 sudo[477374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:19 compute-0 sudo[477374]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Dec 03 02:34:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538585124' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 03 02:34:19 compute-0 sudo[477408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:34:19 compute-0 sudo[477408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Dec 03 02:34:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423126786' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: from='client.15731 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: from='client.15733 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1538585124' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/423126786' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 03 02:34:20 compute-0 sudo[477408]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:34:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:34:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:34:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1579ee7b-50e6-4973-95fa-f79b912b9150 does not exist
Dec 03 02:34:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3debe973-0dc4-49fa-bf10-5c79f0f6462f does not exist
Dec 03 02:34:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2139f690-b0c1-4fea-ab52-f932e33bbb1f does not exist
Dec 03 02:34:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:34:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:34:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:34:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:34:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:34:20 compute-0 sudo[477543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:20 compute-0 sudo[477543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:20 compute-0 sudo[477543]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:20 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15739 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:20 compute-0 sudo[477578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:34:20 compute-0 sudo[477578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:20 compute-0 sudo[477578]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:20 compute-0 sudo[477620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:20 compute-0 sudo[477620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:20 compute-0 sudo[477620]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:20 compute-0 sudo[477680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:34:20 compute-0 sudo[477680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:20 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15741 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:21 compute-0 ceph-mon[192821]: pgmap v2407: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:34:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:34:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:34:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:34:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:34:21 compute-0 podman[477836]: 2025-12-03 02:34:21.092889111 +0000 UTC m=+0.100304032 container create de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:34:21 compute-0 podman[477836]: 2025-12-03 02:34:21.061409132 +0000 UTC m=+0.068824093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:34:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 03 02:34:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909399968' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:34:21 compute-0 systemd[1]: Started libpod-conmon-de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5.scope.
Dec 03 02:34:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:34:21 compute-0 podman[477836]: 2025-12-03 02:34:21.214269386 +0000 UTC m=+0.221684347 container init de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:34:21 compute-0 podman[477836]: 2025-12-03 02:34:21.223161877 +0000 UTC m=+0.230576798 container start de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:34:21 compute-0 podman[477836]: 2025-12-03 02:34:21.227677444 +0000 UTC m=+0.235092385 container attach de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 03 02:34:21 compute-0 great_gould[477885]: 167 167
Dec 03 02:34:21 compute-0 systemd[1]: libpod-de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5.scope: Deactivated successfully.
Dec 03 02:34:21 compute-0 podman[477836]: 2025-12-03 02:34:21.231519683 +0000 UTC m=+0.238934604 container died de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:34:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-568289ee130717b7d9178a509e9900c8bcbc9158d7f45ea1935ccb97c73a73fa-merged.mount: Deactivated successfully.
Dec 03 02:34:21 compute-0 podman[477836]: 2025-12-03 02:34:21.292093442 +0000 UTC m=+0.299508363 container remove de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:34:21 compute-0 systemd[1]: libpod-conmon-de966cf63d81c56a130c5b0ae846dab53ee33c9039b534c7f5d242e5cb1202b5.scope: Deactivated successfully.
Dec 03 02:34:21 compute-0 podman[477974]: 2025-12-03 02:34:21.502690095 +0000 UTC m=+0.061244469 container create 6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:34:21 compute-0 systemd[1]: Started libpod-conmon-6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b.scope.
Dec 03 02:34:21 compute-0 podman[477974]: 2025-12-03 02:34:21.48267983 +0000 UTC m=+0.041234244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:34:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46b4438765ceb5a92e4423c0a2dd20bef287f7c52fd8fc66233f3bb14110eba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46b4438765ceb5a92e4423c0a2dd20bef287f7c52fd8fc66233f3bb14110eba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46b4438765ceb5a92e4423c0a2dd20bef287f7c52fd8fc66233f3bb14110eba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46b4438765ceb5a92e4423c0a2dd20bef287f7c52fd8fc66233f3bb14110eba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46b4438765ceb5a92e4423c0a2dd20bef287f7c52fd8fc66233f3bb14110eba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:21 compute-0 podman[477974]: 2025-12-03 02:34:21.618784651 +0000 UTC m=+0.177339045 container init 6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 03 02:34:21 compute-0 podman[477974]: 2025-12-03 02:34:21.639589918 +0000 UTC m=+0.198144292 container start 6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 03 02:34:21 compute-0 podman[477974]: 2025-12-03 02:34:21.653694166 +0000 UTC m=+0.212248560 container attach 6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:34:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Dec 03 02:34:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867785048' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 03 02:34:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:21 compute-0 nova_compute[351485]: 2025-12-03 02:34:21.911 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:22 compute-0 ceph-mon[192821]: from='client.15739 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:22 compute-0 ceph-mon[192821]: from='client.15741 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:34:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1909399968' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:34:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/867785048' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 03 02:34:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Dec 03 02:34:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689222595' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:22 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15749 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:22 compute-0 optimistic_heyrovsky[478011]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:34:22 compute-0 optimistic_heyrovsky[478011]: --> relative data size: 1.0
Dec 03 02:34:22 compute-0 optimistic_heyrovsky[478011]: --> All data devices are unavailable
Dec 03 02:34:22 compute-0 systemd[1]: libpod-6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b.scope: Deactivated successfully.
Dec 03 02:34:22 compute-0 podman[477974]: 2025-12-03 02:34:22.73074695 +0000 UTC m=+1.289301324 container died 6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b46b4438765ceb5a92e4423c0a2dd20bef287f7c52fd8fc66233f3bb14110eba-merged.mount: Deactivated successfully.
Dec 03 02:34:22 compute-0 podman[477974]: 2025-12-03 02:34:22.810576523 +0000 UTC m=+1.369130897 container remove 6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:34:22 compute-0 systemd[1]: libpod-conmon-6c7983e3fc53727ffe95c9d9d83dd115075a798a041e504b4c2e1e53646e4d3b.scope: Deactivated successfully.
Dec 03 02:34:22 compute-0 sudo[477680]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 03 02:34:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232833821' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:34:22 compute-0 sudo[478319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:22 compute-0 sudo[478319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:22 compute-0 sudo[478319]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:23 compute-0 ceph-mon[192821]: pgmap v2408: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3689222595' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3232833821' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:34:23 compute-0 sudo[478363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:34:23 compute-0 sudo[478363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:23 compute-0 sudo[478363]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:23 compute-0 sudo[478410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:23 compute-0 sudo[478410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:23 compute-0 sudo[478410]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:23 compute-0 sudo[478486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:34:23 compute-0 sudo[478486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Dec 03 02:34:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/559206022' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 03 02:34:23 compute-0 podman[478734]: 2025-12-03 02:34:23.683489566 +0000 UTC m=+0.050363282 container create a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:34:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:23 compute-0 systemd[1]: Started libpod-conmon-a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f.scope.
Dec 03 02:34:23 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:34:23 compute-0 podman[478734]: 2025-12-03 02:34:23.663834462 +0000 UTC m=+0.030708178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:34:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Dec 03 02:34:23 compute-0 podman[478734]: 2025-12-03 02:34:23.773404374 +0000 UTC m=+0.140278090 container init a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:34:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986181198' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:23 compute-0 podman[478734]: 2025-12-03 02:34:23.784792025 +0000 UTC m=+0.151665761 container start a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:34:23 compute-0 podman[478734]: 2025-12-03 02:34:23.791108863 +0000 UTC m=+0.157982579 container attach a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:34:23 compute-0 reverent_wing[478777]: 167 167
Dec 03 02:34:23 compute-0 systemd[1]: libpod-a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f.scope: Deactivated successfully.
Dec 03 02:34:23 compute-0 podman[478734]: 2025-12-03 02:34:23.794137019 +0000 UTC m=+0.161010745 container died a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 02:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-367498fae70b5cb9c9e05133f2061edc83c16aec778f374ab0dc51a65d795bbb-merged.mount: Deactivated successfully.
Dec 03 02:34:23 compute-0 podman[478734]: 2025-12-03 02:34:23.852512066 +0000 UTC m=+0.219385782 container remove a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 03 02:34:23 compute-0 systemd[1]: libpod-conmon-a77af5bfe446af56f75b9982016bb710e8d05c023790335b2cfb273c90c5468f.scope: Deactivated successfully.
Dec 03 02:34:24 compute-0 ceph-mon[192821]: from='client.15749 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/559206022' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 03 02:34:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/986181198' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:24 compute-0 podman[478859]: 2025-12-03 02:34:24.075985883 +0000 UTC m=+0.061813146 container create 8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:34:24 compute-0 systemd[1]: Started libpod-conmon-8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578.scope.
Dec 03 02:34:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a0ed9f2ed09d2f2932e2935b7619a3073c61297f1769b6f03eb68fc52b186a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a0ed9f2ed09d2f2932e2935b7619a3073c61297f1769b6f03eb68fc52b186a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a0ed9f2ed09d2f2932e2935b7619a3073c61297f1769b6f03eb68fc52b186a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:24 compute-0 podman[478859]: 2025-12-03 02:34:24.0567608 +0000 UTC m=+0.042588073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a0ed9f2ed09d2f2932e2935b7619a3073c61297f1769b6f03eb68fc52b186a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:24 compute-0 podman[478859]: 2025-12-03 02:34:24.170042517 +0000 UTC m=+0.155869830 container init 8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:34:24 compute-0 podman[478859]: 2025-12-03 02:34:24.192774078 +0000 UTC m=+0.178601351 container start 8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 02:34:24 compute-0 podman[478859]: 2025-12-03 02:34:24.198282504 +0000 UTC m=+0.184109777 container attach 8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:34:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Dec 03 02:34:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1283695438' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 03 02:34:24 compute-0 nova_compute[351485]: 2025-12-03 02:34:24.322 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:24 compute-0 ovs-appctl[478964]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 03 02:34:24 compute-0 ovs-appctl[478968]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 03 02:34:24 compute-0 ovs-appctl[478973]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
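The three identical ovs-appctl warnings above mean the target pidfile simply does not exist: ovs-monitor-ipsec is not running on this node, so whatever is polling it (the openstack_network_exporter seen later in this slice is a likely caller) fails the control-socket lookup. This is benign when IPsec tunneling is unused; a quick check, using the path named in the warning itself:

$ # the pidfile named in the warning is simply absent
$ ls -l /var/run/openvswitch/ovs-monitor-ipsec.pid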
Dec 03 02:34:24 compute-0 nova_compute[351485]: 2025-12-03 02:34:24.608 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:24 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15759 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]: {
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:     "0": [
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:         {
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "devices": [
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "/dev/loop3"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             ],
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_name": "ceph_lv0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_size": "21470642176",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "name": "ceph_lv0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "tags": {
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cluster_name": "ceph",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.crush_device_class": "",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.encrypted": "0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osd_id": "0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.type": "block",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.vdo": "0"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             },
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "type": "block",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "vg_name": "ceph_vg0"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:         }
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:     ],
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:     "1": [
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:         {
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "devices": [
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "/dev/loop4"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             ],
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_name": "ceph_lv1",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_size": "21470642176",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "name": "ceph_lv1",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "tags": {
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cluster_name": "ceph",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.crush_device_class": "",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.encrypted": "0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osd_id": "1",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.type": "block",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.vdo": "0"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             },
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "type": "block",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "vg_name": "ceph_vg1"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:         }
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:     ],
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:     "2": [
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:         {
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "devices": [
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "/dev/loop5"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             ],
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_name": "ceph_lv2",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_size": "21470642176",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "name": "ceph_lv2",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "tags": {
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.cluster_name": "ceph",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.crush_device_class": "",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.encrypted": "0",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osd_id": "2",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.type": "block",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:                 "ceph.vdo": "0"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             },
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "type": "block",
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:             "vg_name": "ceph_vg2"
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:         }
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]:     ]
Dec 03 02:34:24 compute-0 pensive_cartwright[478893]: }
Dec 03 02:34:24 compute-0 systemd[1]: libpod-8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578.scope: Deactivated successfully.
Dec 03 02:34:24 compute-0 podman[478859]: 2025-12-03 02:34:24.99296594 +0000 UTC m=+0.978793203 container died 8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a0ed9f2ed09d2f2932e2935b7619a3073c61297f1769b6f03eb68fc52b186a-merged.mount: Deactivated successfully.
Dec 03 02:34:25 compute-0 ceph-mon[192821]: pgmap v2409: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:25 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1283695438' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 03 02:34:25 compute-0 podman[478859]: 2025-12-03 02:34:25.070743295 +0000 UTC m=+1.056570558 container remove 8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:34:25 compute-0 systemd[1]: libpod-conmon-8216e4432c1a18c12b889201e440213a2705e61d76023b1f3a0671a11c789578.scope: Deactivated successfully.
Dec 03 02:34:25 compute-0 sudo[478486]: pam_unix(sudo:session): session closed for user root
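The JSON emitted by the pensive_cartwright container above is a `ceph-volume lvm list --format json` inventory: one logical volume per OSD (osd.0-2 on /dev/loop3-5, each 21470642176 bytes, ~20 GiB), with the OSD metadata carried as LVM tags. The wrapper invocation for this particular listing is not captured in this excerpt; a minimal sketch of an equivalent query, assuming the same cephadm shim, image, and fsid that appear verbatim in the `raw list` sudo line at 02:34:25 below:

$ # hypothetical re-run of the LVM inventory through the cephadm shim
$ sudo /bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
    --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
    ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json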
Dec 03 02:34:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Dec 03 02:34:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560934545' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 03 02:34:25 compute-0 sudo[479111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:25 compute-0 sudo[479111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:25 compute-0 sudo[479111]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:25 compute-0 sudo[479154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:34:25 compute-0 sudo[479154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:25 compute-0 sudo[479154]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:25 compute-0 sudo[479193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:25 compute-0 sudo[479193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:25 compute-0 sudo[479193]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:25 compute-0 sudo[479248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:34:25 compute-0 sudo[479248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Dec 03 02:34:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821459846' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:25 compute-0 podman[479415]: 2025-12-03 02:34:25.9558147 +0000 UTC m=+0.103869932 container create b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:34:25 compute-0 podman[479415]: 2025-12-03 02:34:25.890288961 +0000 UTC m=+0.038344213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:34:26 compute-0 systemd[1]: Started libpod-conmon-b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea.scope.
Dec 03 02:34:26 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15765 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:34:26 compute-0 ceph-mon[192821]: from='client.15759 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1560934545' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 03 02:34:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2821459846' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:26 compute-0 podman[479415]: 2025-12-03 02:34:26.054271639 +0000 UTC m=+0.202326891 container init b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:34:26 compute-0 podman[479415]: 2025-12-03 02:34:26.06493609 +0000 UTC m=+0.212991322 container start b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:34:26 compute-0 podman[479415]: 2025-12-03 02:34:26.069991752 +0000 UTC m=+0.218046994 container attach b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:34:26 compute-0 awesome_fermat[479462]: 167 167
Dec 03 02:34:26 compute-0 systemd[1]: libpod-b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea.scope: Deactivated successfully.
Dec 03 02:34:26 compute-0 podman[479415]: 2025-12-03 02:34:26.073642825 +0000 UTC m=+0.221698057 container died b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_fermat, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-25d01793413052d85f8e2e047fa584d313ed0100a301767e44bb5a400889d29d-merged.mount: Deactivated successfully.
Dec 03 02:34:26 compute-0 podman[479415]: 2025-12-03 02:34:26.131702144 +0000 UTC m=+0.279757376 container remove b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:34:26 compute-0 systemd[1]: libpod-conmon-b888accb6a1e046b96ce42ef3bb7679423594c1b538de86a4c1a1545090ba2ea.scope: Deactivated successfully.
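The awesome_fermat container lived only a few milliseconds and printed `167 167`. This matches cephadm's uid/gid probe: before writing daemon files on the host it resolves the ceph user and group inside the image, and 167:167 is the ceph uid:gid in the official images. The exact probe command is not in this excerpt; a sketch of the idea under that assumption:

$ # hypothetical equivalent of the probe: report the owner uid/gid of the ceph state dir in the image
$ sudo podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
    stat -c '%u %g' /var/lib/ceph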
Dec 03 02:34:26 compute-0 podman[479552]: 2025-12-03 02:34:26.346633199 +0000 UTC m=+0.061363023 container create 4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:34:26 compute-0 systemd[1]: Started libpod-conmon-4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7.scope.
Dec 03 02:34:26 compute-0 podman[479552]: 2025-12-03 02:34:26.326508661 +0000 UTC m=+0.041238505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:34:26 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47ea62637d47b0135bd61b0f9ace9e853f89394062d0023a533dca1264a98a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47ea62637d47b0135bd61b0f9ace9e853f89394062d0023a533dca1264a98a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47ea62637d47b0135bd61b0f9ace9e853f89394062d0023a533dca1264a98a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47ea62637d47b0135bd61b0f9ace9e853f89394062d0023a533dca1264a98a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:34:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Dec 03 02:34:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/553016088' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 03 02:34:26 compute-0 podman[479552]: 2025-12-03 02:34:26.627616069 +0000 UTC m=+0.342345913 container init 4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 03 02:34:26 compute-0 podman[479552]: 2025-12-03 02:34:26.656808012 +0000 UTC m=+0.371537836 container start 4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jennings, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 03 02:34:26 compute-0 podman[479552]: 2025-12-03 02:34:26.663694867 +0000 UTC m=+0.378424691 container attach 4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:34:26 compute-0 nova_compute[351485]: 2025-12-03 02:34:26.914 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:26 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15769 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:27 compute-0 ceph-mon[192821]: pgmap v2410: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:27 compute-0 ceph-mon[192821]: from='client.15765 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/553016088' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 03 02:34:27 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15771 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]: {
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "osd_id": 2,
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "type": "bluestore"
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:     },
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "osd_id": 1,
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "type": "bluestore"
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:     },
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "osd_id": 0,
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:         "type": "bluestore"
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]:     }
Dec 03 02:34:27 compute-0 peaceful_jennings[479586]: }
Dec 03 02:34:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:27 compute-0 systemd[1]: libpod-4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7.scope: Deactivated successfully.
Dec 03 02:34:27 compute-0 podman[479552]: 2025-12-03 02:34:27.722334892 +0000 UTC m=+1.437064716 container died 4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jennings, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:34:27 compute-0 systemd[1]: libpod-4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7.scope: Consumed 1.042s CPU time.
Dec 03 02:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47ea62637d47b0135bd61b0f9ace9e853f89394062d0023a533dca1264a98a3-merged.mount: Deactivated successfully.
Dec 03 02:34:27 compute-0 podman[479552]: 2025-12-03 02:34:27.816866579 +0000 UTC m=+1.531596413 container remove 4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:34:27 compute-0 systemd[1]: libpod-conmon-4b2ff11e806af83fb9ba3dfe7718309459d7a1131adcee9041d09e43c7827ec7.scope: Deactivated successfully.
Dec 03 02:34:27 compute-0 sudo[479248]: pam_unix(sudo:session): session closed for user root
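That closes the ceph-volume transaction opened by sudo[479248] at 02:34:25: the peaceful_jennings JSON above is the `raw list` output, keyed by OSD uuid, mapping each bluestore OSD to its device-mapper node (ceph_vgN-ceph_lvN) and confirming all three share the cluster fsid. The invocation is logged verbatim and can be replayed as-is:

$ sudo /bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
    --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
    --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json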
Dec 03 02:34:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:34:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:34:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d509603f-8a9f-4e16-a756-8040477b19c5 does not exist
Dec 03 02:34:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a4981ef9-3e0f-43e0-ae92-c6b8febfac82 does not exist
Dec 03 02:34:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Dec 03 02:34:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883278395' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:28 compute-0 sudo[479867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:34:28 compute-0 sudo[479867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:28 compute-0 sudo[479867]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:28 compute-0 ceph-mon[192821]: from='client.15769 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:34:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3883278395' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 03 02:34:28 compute-0 sudo[479912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:34:28 compute-0 sudo[479912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:34:28 compute-0 sudo[479912]: pam_unix(sudo:session): session closed for user root
Dec 03 02:34:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Dec 03 02:34:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618671044' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:34:28
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.meta', 'volumes', '.rgw.root', 'vms']
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
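A routine balancer pass: mode upmap with a 5% misplacement ceiling scanned all eleven pools and prepared 0 of its 10 allowed changes, which is expected while every one of the 321 PGs is active+clean on three equal OSDs. The mgr-side view can be read back with the standard CLI (not part of this log):

$ ceph balancer status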
Dec 03 02:34:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:28 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15777 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15779 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:29 compute-0 ceph-mon[192821]: from='client.15771 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:29 compute-0 ceph-mon[192821]: pgmap v2411: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3618671044' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 03 02:34:29 compute-0 ceph-mon[192821]: from='client.15777 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
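The pg_autoscaler figures above reconstruct arithmetically as pg_target = usage_ratio x bias x (mon_target_pg_per_osd x OSD count), assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs inventoried earlier, i.e. a multiplier of 300: for 'images', 0.0009191400908380543 x 1.0 x 300 = 0.2757420272514163, exactly the logged target, which is then quantized up to the pool's current pg_num of 32; for 'cephfs.cephfs.meta', the metadata bias of 4.0 gives 5.087256625643029e-07 x 4.0 x 300 = 0.0006104707950771635, quantized to 16, making it the only pool proposed below its current 32.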
Dec 03 02:34:29 compute-0 nova_compute[351485]: 2025-12-03 02:34:29.324 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:34:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 03 02:34:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355656610' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:34:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Dec 03 02:34:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030479226' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 03 02:34:30 compute-0 podman[158098]: time="2025-12-03T02:34:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:34:30 compute-0 ceph-mon[192821]: from='client.15779 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2355656610' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:34:30 compute-0 ceph-mon[192821]: pgmap v2412: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2030479226' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 03 02:34:30 compute-0 podman[158098]: @ - - [03/Dec/2025:02:34:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:34:30 compute-0 podman[158098]: @ - - [03/Dec/2025:02:34:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8206 "" "Go-http-client/1.1"
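The two podman[158098] access-log lines are the libpod REST API being scraped over its unix socket: one call lists all containers (a 42579-byte JSON body), the other takes a single non-streaming stats sample. The same endpoints can be queried by hand, assuming the system service listens on the default socket path:

$ # hypothetical manual scrape of the endpoints shown in the access log
$ curl -s --unix-socket /run/podman/podman.sock \
    'http://d/v4.9.3/libpod/containers/json?all=true&external=false&last=0'
$ curl -s --unix-socket /run/podman/podman.sock \
    'http://d/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false'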
Dec 03 02:34:30 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15785 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:30 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15787 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:31 compute-0 ceph-mon[192821]: from='client.15785 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:31 compute-0 ceph-mon[192821]: from='client.15787 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:34:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 03 02:34:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3631740212' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 02:34:31 compute-0 openstack_network_exporter[368278]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:34:31 compute-0 openstack_network_exporter[368278]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:34:31 compute-0 openstack_network_exporter[368278]: ERROR   02:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:34:31 compute-0 openstack_network_exporter[368278]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:34:31 compute-0 openstack_network_exporter[368278]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:34:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Dec 03 02:34:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026783195' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 03 02:34:31 compute-0 nova_compute[351485]: 2025-12-03 02:34:31.916 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3631740212' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 03 02:34:32 compute-0 ceph-mon[192821]: pgmap v2413: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4026783195' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 03 02:34:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:34 compute-0 nova_compute[351485]: 2025-12-03 02:34:34.329 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:34 compute-0 nova_compute[351485]: 2025-12-03 02:34:34.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:34 compute-0 nova_compute[351485]: 2025-12-03 02:34:34.627 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:34:34 compute-0 nova_compute[351485]: 2025-12-03 02:34:34.628 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:34:34 compute-0 nova_compute[351485]: 2025-12-03 02:34:34.628 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:34:34 compute-0 nova_compute[351485]: 2025-12-03 02:34:34.629 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:34:34 compute-0 nova_compute[351485]: 2025-12-03 02:34:34.629 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:34:34 compute-0 ceph-mon[192821]: pgmap v2414: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:34:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921392142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.122 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.560 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.562 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3869MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.562 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.563 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:34:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:35 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2921392142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.873 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.873 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:34:35 compute-0 nova_compute[351485]: 2025-12-03 02:34:35.890 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268046948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:34:36 compute-0 nova_compute[351485]: 2025-12-03 02:34:36.347 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:34:36 compute-0 nova_compute[351485]: 2025-12-03 02:34:36.354 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:34:36 compute-0 nova_compute[351485]: 2025-12-03 02:34:36.376 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:34:36 compute-0 nova_compute[351485]: 2025-12-03 02:34:36.377 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:34:36 compute-0 nova_compute[351485]: 2025-12-03 02:34:36.378 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:34:36 compute-0 nova_compute[351485]: 2025-12-03 02:34:36.919 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:37 compute-0 ceph-mon[192821]: pgmap v2415: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:37 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/268046948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:34:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:38 compute-0 podman[480902]: 2025-12-03 02:34:38.022204989 +0000 UTC m=+0.117494416 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 03 02:34:38 compute-0 podman[480909]: 2025-12-03 02:34:38.047285597 +0000 UTC m=+0.119573245 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:34:38 compute-0 podman[480904]: 2025-12-03 02:34:38.062033903 +0000 UTC m=+0.131380778 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm)
Dec 03 02:34:38 compute-0 ceph-mon[192821]: pgmap v2416: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:38 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:34:39 compute-0 nova_compute[351485]: 2025-12-03 02:34:39.331 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:39 compute-0 nova_compute[351485]: 2025-12-03 02:34:39.378 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:39 compute-0 nova_compute[351485]: 2025-12-03 02:34:39.379 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:34:39 compute-0 nova_compute[351485]: 2025-12-03 02:34:39.379 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:34:39 compute-0 nova_compute[351485]: 2025-12-03 02:34:39.396 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:34:39 compute-0 nova_compute[351485]: 2025-12-03 02:34:39.397 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:39 compute-0 nova_compute[351485]: 2025-12-03 02:34:39.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:40 compute-0 nova_compute[351485]: 2025-12-03 02:34:40.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:40 compute-0 nova_compute[351485]: 2025-12-03 02:34:40.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:40 compute-0 ceph-mon[192821]: pgmap v2417: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:41 compute-0 nova_compute[351485]: 2025-12-03 02:34:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:41 compute-0 nova_compute[351485]: 2025-12-03 02:34:41.922 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:42 compute-0 ceph-mon[192821]: pgmap v2418: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:43 compute-0 systemd[1]: Starting Time & Date Service...
Dec 03 02:34:43 compute-0 systemd[1]: Started Time & Date Service.
Dec 03 02:34:43 compute-0 nova_compute[351485]: 2025-12-03 02:34:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:44 compute-0 nova_compute[351485]: 2025-12-03 02:34:44.337 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:44 compute-0 ceph-mon[192821]: pgmap v2419: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:45 compute-0 sshd-session[481390]: Invalid user anonymous from 185.156.73.233 port 54312
Dec 03 02:34:45 compute-0 sshd-session[481390]: Connection closed by invalid user anonymous 185.156.73.233 port 54312 [preauth]
Dec 03 02:34:45 compute-0 nova_compute[351485]: 2025-12-03 02:34:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:34:45 compute-0 nova_compute[351485]: 2025-12-03 02:34:45.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:34:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:46 compute-0 ceph-mon[192821]: pgmap v2420: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:46 compute-0 nova_compute[351485]: 2025-12-03 02:34:46.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:34:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/943747284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:34:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:34:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/943747284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:34:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/943747284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:34:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/943747284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:34:47 compute-0 podman[481392]: 2025-12-03 02:34:47.885376684 +0000 UTC m=+0.134443555 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 03 02:34:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:48 compute-0 ceph-mon[192821]: pgmap v2421: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:49 compute-0 nova_compute[351485]: 2025-12-03 02:34:49.342 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:49 compute-0 podman[481413]: 2025-12-03 02:34:49.892710371 +0000 UTC m=+0.133094587 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec 03 02:34:49 compute-0 podman[481412]: 2025-12-03 02:34:49.908694562 +0000 UTC m=+0.159063779 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 03 02:34:49 compute-0 podman[481414]: 2025-12-03 02:34:49.908741714 +0000 UTC m=+0.138881711 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:34:49 compute-0 podman[481420]: 2025-12-03 02:34:49.905467721 +0000 UTC m=+0.132358836 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, config_id=edpm)
Dec 03 02:34:49 compute-0 podman[481421]: 2025-12-03 02:34:49.938399361 +0000 UTC m=+0.153416091 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:34:50 compute-0 ceph-mon[192821]: pgmap v2422: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:51 compute-0 nova_compute[351485]: 2025-12-03 02:34:51.929 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:52 compute-0 ceph-mon[192821]: pgmap v2423: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:54 compute-0 nova_compute[351485]: 2025-12-03 02:34:54.343 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:54 compute-0 ceph-mon[192821]: pgmap v2424: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:56 compute-0 nova_compute[351485]: 2025-12-03 02:34:56.933 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:56 compute-0 ceph-mon[192821]: pgmap v2425: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:34:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:34:58 compute-0 ceph-mon[192821]: pgmap v2426: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:59 compute-0 nova_compute[351485]: 2025-12-03 02:34:59.346 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:34:59.671 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:34:59.672 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:34:59.672 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:34:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:34:59 compute-0 podman[158098]: time="2025-12-03T02:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:34:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:34:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8207 "" "Go-http-client/1.1"
Dec 03 02:35:00 compute-0 ceph-mon[192821]: pgmap v2427: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:01 compute-0 openstack_network_exporter[368278]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:35:01 compute-0 openstack_network_exporter[368278]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:35:01 compute-0 openstack_network_exporter[368278]: ERROR   02:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:35:01 compute-0 openstack_network_exporter[368278]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:35:01 compute-0 openstack_network_exporter[368278]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:35:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:01 compute-0 nova_compute[351485]: 2025-12-03 02:35:01.935 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:03 compute-0 ceph-mon[192821]: pgmap v2428: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:04 compute-0 nova_compute[351485]: 2025-12-03 02:35:04.349 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:05 compute-0 ceph-mon[192821]: pgmap v2429: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:06 compute-0 nova_compute[351485]: 2025-12-03 02:35:06.937 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:07 compute-0 ceph-mon[192821]: pgmap v2430: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:08 compute-0 ceph-mon[192821]: pgmap v2431: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:08 compute-0 podman[481513]: 2025-12-03 02:35:08.851694938 +0000 UTC m=+0.095773793 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 03 02:35:08 compute-0 podman[481514]: 2025-12-03 02:35:08.884825093 +0000 UTC m=+0.125730508 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 02:35:08 compute-0 podman[481515]: 2025-12-03 02:35:08.886581542 +0000 UTC m=+0.122781935 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:35:09 compute-0 nova_compute[351485]: 2025-12-03 02:35:09.352 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:10 compute-0 ceph-mon[192821]: pgmap v2432: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:11 compute-0 nova_compute[351485]: 2025-12-03 02:35:11.940 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:12 compute-0 ceph-mon[192821]: pgmap v2433: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:13 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 03 02:35:13 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 03 02:35:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:14 compute-0 nova_compute[351485]: 2025-12-03 02:35:14.356 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:14 compute-0 ceph-mon[192821]: pgmap v2434: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:16 compute-0 ceph-mon[192821]: pgmap v2435: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:16 compute-0 nova_compute[351485]: 2025-12-03 02:35:16.945 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:18 compute-0 sudo[473075]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:18 compute-0 sshd-session[473074]: Received disconnect from 192.168.122.10 port 40372:11: disconnected by user
Dec 03 02:35:18 compute-0 sshd-session[473074]: Disconnected from user zuul 192.168.122.10 port 40372
Dec 03 02:35:18 compute-0 sshd-session[473071]: pam_unix(sshd:session): session closed for user zuul
Dec 03 02:35:18 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Dec 03 02:35:18 compute-0 systemd[1]: session-63.scope: Consumed 3min 19.474s CPU time, 911.3M memory peak, read 509.0M from disk, written 366.6M to disk.
Dec 03 02:35:18 compute-0 systemd-logind[800]: Session 63 logged out. Waiting for processes to exit.
Dec 03 02:35:18 compute-0 systemd-logind[800]: Removed session 63.
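Session 63's teardown shows systemd's per-scope resource accounting: CPU time, peak memory, and disk I/O attributed to everything the SSH session ran. While a scope still exists, the same figures can be read back from its unit properties; a sketch, assuming a recent systemd (MemoryPeak needs v254 or later):

import subprocess

# Read back the accounting systemd prints at scope teardown.
out = subprocess.run(
    ["systemctl", "show", "session-63.scope",
     "-p", "CPUUsageNSec", "-p", "MemoryPeak", "-p", "IOReadBytes"],
    capture_output=True, text=True, check=True,
).stdout
print(out)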
Dec 03 02:35:18 compute-0 sshd-session[481576]: Accepted publickey for zuul from 192.168.122.10 port 50924 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 02:35:18 compute-0 systemd-logind[800]: New session 64 of user zuul.
Dec 03 02:35:18 compute-0 systemd[1]: Started Session 64 of User zuul.
Dec 03 02:35:18 compute-0 podman[481575]: 2025-12-03 02:35:18.490864692 +0000 UTC m=+0.164510723 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 03 02:35:18 compute-0 sshd-session[481576]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 02:35:18 compute-0 sudo[481598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-03-ocpzjmr.tar.xz
Dec 03 02:35:18 compute-0 sudo[481598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
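sudo's journal entries follow a fixed "caller : PWD=... ; USER=... ; COMMAND=..." shape, which makes runs like this one easy to audit mechanically. A minimal sketch (SUDO_RE is an illustrative regex, not something sudo ships):

import re

SUDO_RE = re.compile(
    r"^\s*(?P<caller>\S+) : PWD=(?P<pwd>[^;]+) ; "
    r"USER=(?P<user>[^;]+) ; COMMAND=(?P<command>.*)$"
)

m = SUDO_RE.match(
    "    zuul : PWD=/home/zuul ; USER=root ; "
    "COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-03-ocpzjmr.tar.xz"
)
assert m is not None
print(m.group("caller"), m.group("user"), m.group("command"))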
Dec 03 02:35:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:18 compute-0 sudo[481598]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:18 compute-0 sshd-session[481597]: Received disconnect from 192.168.122.10 port 50924:11: disconnected by user
Dec 03 02:35:18 compute-0 sshd-session[481597]: Disconnected from user zuul 192.168.122.10 port 50924
Dec 03 02:35:18 compute-0 sshd-session[481576]: pam_unix(sshd:session): session closed for user zuul
Dec 03 02:35:18 compute-0 systemd[1]: session-64.scope: Deactivated successfully.
Dec 03 02:35:18 compute-0 systemd-logind[800]: Session 64 logged out. Waiting for processes to exit.
Dec 03 02:35:18 compute-0 systemd-logind[800]: Removed session 64.
Dec 03 02:35:18 compute-0 sshd-session[481623]: Accepted publickey for zuul from 192.168.122.10 port 41664 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 02:35:18 compute-0 systemd-logind[800]: New session 65 of user zuul.
Dec 03 02:35:18 compute-0 systemd[1]: Started Session 65 of User zuul.
Dec 03 02:35:19 compute-0 sshd-session[481623]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 02:35:19 compute-0 ceph-mon[192821]: pgmap v2436: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:19 compute-0 sudo[481627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 03 02:35:19 compute-0 sudo[481627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:35:19 compute-0 sudo[481627]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:19 compute-0 sshd-session[481626]: Received disconnect from 192.168.122.10 port 41664:11: disconnected by user
Dec 03 02:35:19 compute-0 sshd-session[481626]: Disconnected from user zuul 192.168.122.10 port 41664
Dec 03 02:35:19 compute-0 sshd-session[481623]: pam_unix(sshd:session): session closed for user zuul
Dec 03 02:35:19 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Dec 03 02:35:19 compute-0 systemd-logind[800]: Session 65 logged out. Waiting for processes to exit.
Dec 03 02:35:19 compute-0 systemd-logind[800]: Removed session 65.
Dec 03 02:35:19 compute-0 nova_compute[351485]: 2025-12-03 02:35:19.360 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.516 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.517 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
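The two manager lines above say the [pollsters] source has more pollsters than the single worker thread configured to run them, so the cycle serializes. A toy illustration of that saturation with concurrent.futures (nothing here is ceilometer's actual code):

from concurrent.futures import ThreadPoolExecutor
import time

def poll(name):
    time.sleep(0.1)                     # stand-in for one pollster's work
    return name

with ThreadPoolExecutor(max_workers=1) as executor:   # one thread, as logged
    start = time.monotonic()
    results = list(executor.map(poll, ["cpu", "memory.usage", "disk.root.size"]))
    elapsed = time.monotonic() - start

# Three tasks on one worker take ~0.3 s instead of the ~0.1 s a
# three-worker pool would need, which is what the warning is about.
print(results, round(elapsed, 2))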
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
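Every pollster in this cycle follows the same two-step pattern visible above: run the local_instances discovery, then bail out when it returns nothing, which is exactly what happens on a compute node that is not hosting any instances yet. A compressed sketch of that control flow (function names are illustrative, not ceilometer's API):

def run_pollster(name, discover):
    resources = discover()
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return []
    return [f"{name} sample for {r}" for r in resources]

local_instances = lambda: []            # no VMs on compute-0 this cycle
run_pollster("memory.usage", local_instances)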
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:35:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:35:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:20 compute-0 podman[481655]: 2025-12-03 02:35:20.890725535 +0000 UTC m=+0.130823333 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec 03 02:35:20 compute-0 podman[481658]: 2025-12-03 02:35:20.892013602 +0000 UTC m=+0.107199937 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:35:20 compute-0 podman[481656]: 2025-12-03 02:35:20.903251089 +0000 UTC m=+0.141646119 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:35:20 compute-0 podman[481657]: 2025-12-03 02:35:20.929340495 +0000 UTC m=+0.142094611 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 03 02:35:20 compute-0 podman[481654]: 2025-12-03 02:35:20.955266027 +0000 UTC m=+0.197983399 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
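The burst of health_status events at 02:35:20 is podman executing each container's configured 'healthcheck' test, i.e. the /openstack/healthcheck scripts mounted into the containers. The same check can be triggered by hand; exit status 0 corresponds to the "healthy" verdict logged above:

import subprocess

# Run a container's configured healthcheck on demand.
rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")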
Dec 03 02:35:21 compute-0 ceph-mon[192821]: pgmap v2437: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:21 compute-0 nova_compute[351485]: 2025-12-03 02:35:21.948 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:23 compute-0 ceph-mon[192821]: pgmap v2438: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:24 compute-0 nova_compute[351485]: 2025-12-03 02:35:24.363 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:24 compute-0 nova_compute[351485]: 2025-12-03 02:35:24.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:25 compute-0 ceph-mon[192821]: pgmap v2439: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:26 compute-0 nova_compute[351485]: 2025-12-03 02:35:26.951 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:27 compute-0 ceph-mon[192821]: pgmap v2440: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:28 compute-0 sudo[481755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:28 compute-0 sudo[481755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:28 compute-0 sudo[481755]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:28 compute-0 sudo[481780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:35:28 compute-0 sudo[481780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:28 compute-0 sudo[481780]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:28 compute-0 sudo[481805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:28 compute-0 sudo[481805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:28 compute-0 sudo[481805]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:35:28 compute-0 sudo[481830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:35:28 compute-0 sudo[481830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:35:28
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes']
Dec 03 02:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
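[editor's note] The balancer lines above show one optimizer pass: mode upmap, max misplaced ratio 0.05, eleven pools considered, and "prepared 0/10 changes", i.e. the PG distribution was already balanced and nothing was queued. A hedged sketch of checking the same state by hand with the stock ceph CLI (both subcommands exist; admin credentials on the host are assumed):

```python
import subprocess

# "ceph balancer status" should report mode "upmap" with no plans pending;
# "ceph balancer eval" prints the current distribution score the optimizer
# was unable to improve in the pass logged above.
for args in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
    out = subprocess.run(args, capture_output=True, text=True, check=True)
    print(" ".join(args), "->", out.stdout.strip())
```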
Dec 03 02:35:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:29 compute-0 ceph-mon[192821]: pgmap v2441: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:29 compute-0 sudo[481830]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 03 02:35:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:35:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:35:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:35:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:35:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:35:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:35:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bc1d7b08-c81a-4082-88a8-45f1a784f639 does not exist
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 214aa9cf-9e74-4cc7-ad73-939926a68475 does not exist
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c518ab7a-ef4c-4995-ab81-a66346c73950 does not exist
Dec 03 02:35:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:35:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:35:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:35:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:35:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:35:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:35:29 compute-0 sudo[481885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:29 compute-0 sudo[481885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:35:29 compute-0 nova_compute[351485]: 2025-12-03 02:35:29.366 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:35:29 compute-0 sudo[481885]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:35:29 compute-0 sudo[481910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:35:29 compute-0 sudo[481910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:29 compute-0 sudo[481910]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:29 compute-0 sudo[481935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:29 compute-0 sudo[481935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:29 compute-0 sudo[481935]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:29 compute-0 sudo[481960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:35:29 compute-0 sudo[481960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:29 compute-0 podman[158098]: time="2025-12-03T02:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:35:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:35:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8200 "" "Go-http-client/1.1"
Dec 03 02:35:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 03 02:35:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:35:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:35:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:35:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:35:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:35:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:35:30 compute-0 ceph-mon[192821]: pgmap v2442: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:30 compute-0 podman[482022]: 2025-12-03 02:35:30.404875671 +0000 UTC m=+0.094463936 container create aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 02:35:30 compute-0 podman[482022]: 2025-12-03 02:35:30.37045512 +0000 UTC m=+0.060043455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:35:30 compute-0 systemd[1]: Started libpod-conmon-aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2.scope.
Dec 03 02:35:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:35:30 compute-0 podman[482022]: 2025-12-03 02:35:30.556343106 +0000 UTC m=+0.245931351 container init aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:35:30 compute-0 podman[482022]: 2025-12-03 02:35:30.57777005 +0000 UTC m=+0.267358325 container start aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 02:35:30 compute-0 podman[482022]: 2025-12-03 02:35:30.585054026 +0000 UTC m=+0.274642301 container attach aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:35:30 compute-0 magical_darwin[482037]: 167 167
Dec 03 02:35:30 compute-0 systemd[1]: libpod-aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2.scope: Deactivated successfully.
Dec 03 02:35:30 compute-0 podman[482022]: 2025-12-03 02:35:30.589870522 +0000 UTC m=+0.279458797 container died aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d8da6f22c093ac29e81cf51d9503fdf837ff052d7ae0e131facdaae1c61e6cc-merged.mount: Deactivated successfully.
Dec 03 02:35:30 compute-0 podman[482022]: 2025-12-03 02:35:30.66137937 +0000 UTC m=+0.350967615 container remove aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:35:30 compute-0 systemd[1]: libpod-conmon-aabf2903b063dc16e4fd8b367583c4a8fde7a06ce508bd51dfd344b455de3fa2.scope: Deactivated successfully.
Dec 03 02:35:30 compute-0 podman[482060]: 2025-12-03 02:35:30.955353766 +0000 UTC m=+0.099585932 container create f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_franklin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:35:31 compute-0 podman[482060]: 2025-12-03 02:35:30.919044411 +0000 UTC m=+0.063276617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:35:31 compute-0 systemd[1]: Started libpod-conmon-f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b.scope.
Dec 03 02:35:31 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c827dc658b1e718ea846f07a84d066452b98e05289c362078bc594551bac0dac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c827dc658b1e718ea846f07a84d066452b98e05289c362078bc594551bac0dac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c827dc658b1e718ea846f07a84d066452b98e05289c362078bc594551bac0dac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c827dc658b1e718ea846f07a84d066452b98e05289c362078bc594551bac0dac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c827dc658b1e718ea846f07a84d066452b98e05289c362078bc594551bac0dac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:31 compute-0 podman[482060]: 2025-12-03 02:35:31.137443614 +0000 UTC m=+0.281675830 container init f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 02:35:31 compute-0 podman[482060]: 2025-12-03 02:35:31.171730032 +0000 UTC m=+0.315962198 container start f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:35:31 compute-0 podman[482060]: 2025-12-03 02:35:31.179076129 +0000 UTC m=+0.323308295 container attach f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_franklin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:35:31 compute-0 openstack_network_exporter[368278]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:35:31 compute-0 openstack_network_exporter[368278]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:35:31 compute-0 openstack_network_exporter[368278]: ERROR   02:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:35:31 compute-0 openstack_network_exporter[368278]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 03 02:35:31 compute-0 openstack_network_exporter[368278]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:35:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:31 compute-0 nova_compute[351485]: 2025-12-03 02:35:31.954 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:32 compute-0 gallant_franklin[482077]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:35:32 compute-0 gallant_franklin[482077]: --> relative data size: 1.0
Dec 03 02:35:32 compute-0 gallant_franklin[482077]: --> All data devices are unavailable
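[editor's note] The gallant_franklin output above is `ceph-volume lvm batch` (invoked by cephadm at 02:35:29 against /dev/ceph_vg0/ceph_lv0, ceph_vg1/ceph_lv1 and ceph_vg2/ceph_lv2) declining to act: all three LVs are "unavailable" because, as the lvm list below confirms, they already carry OSDs, so the re-run is an idempotent no-op. A sketch of the dry-run form of the same call, assuming it is executed where ceph-volume is present (inside the ceph container or a `cephadm shell`):

```python
import subprocess

# --report makes ceph-volume describe what batch would do without creating
# anything; on this host it should flag the three LVs as already-consumed.
cmd = [
    "ceph-volume", "lvm", "batch", "--no-auto", "--report",
    "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
]
res = subprocess.run(cmd, capture_output=True, text=True)
print(res.stdout or res.stderr)
```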
Dec 03 02:35:32 compute-0 systemd[1]: libpod-f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b.scope: Deactivated successfully.
Dec 03 02:35:32 compute-0 systemd[1]: libpod-f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b.scope: Consumed 1.395s CPU time.
Dec 03 02:35:32 compute-0 podman[482060]: 2025-12-03 02:35:32.632180056 +0000 UTC m=+1.776412232 container died f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c827dc658b1e718ea846f07a84d066452b98e05289c362078bc594551bac0dac-merged.mount: Deactivated successfully.
Dec 03 02:35:32 compute-0 podman[482060]: 2025-12-03 02:35:32.739954327 +0000 UTC m=+1.884186493 container remove f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:35:32 compute-0 systemd[1]: libpod-conmon-f0b7a69790d54e337f6a9d49a6d21b987898e75ce1c063e66dbad240f039443b.scope: Deactivated successfully.
Dec 03 02:35:32 compute-0 sudo[481960]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:32 compute-0 ceph-mon[192821]: pgmap v2443: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:32 compute-0 sudo[482119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:32 compute-0 sudo[482119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:32 compute-0 sudo[482119]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:33 compute-0 sudo[482144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:35:33 compute-0 sudo[482144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:33 compute-0 sudo[482144]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:33 compute-0 sudo[482169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:33 compute-0 sudo[482169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:33 compute-0 sudo[482169]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:33 compute-0 sudo[482194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:35:33 compute-0 sudo[482194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:33 compute-0 podman[482256]: 2025-12-03 02:35:33.846249946 +0000 UTC m=+0.086795900 container create e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:35:33 compute-0 podman[482256]: 2025-12-03 02:35:33.81588321 +0000 UTC m=+0.056429214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:35:33 compute-0 systemd[1]: Started libpod-conmon-e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca.scope.
Dec 03 02:35:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:35:34 compute-0 podman[482256]: 2025-12-03 02:35:34.004493971 +0000 UTC m=+0.245039975 container init e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 02:35:34 compute-0 podman[482256]: 2025-12-03 02:35:34.024492706 +0000 UTC m=+0.265038660 container start e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:35:34 compute-0 podman[482256]: 2025-12-03 02:35:34.031203525 +0000 UTC m=+0.271749489 container attach e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec 03 02:35:34 compute-0 elated_bardeen[482272]: 167 167
Dec 03 02:35:34 compute-0 systemd[1]: libpod-e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca.scope: Deactivated successfully.
Dec 03 02:35:34 compute-0 podman[482256]: 2025-12-03 02:35:34.039142779 +0000 UTC m=+0.279688733 container died e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1924d5e5738d31d1b6cace2cc5ccf7bbf37ae7e81b4a73f4b13c1bbe124e650f-merged.mount: Deactivated successfully.
Dec 03 02:35:34 compute-0 podman[482256]: 2025-12-03 02:35:34.099295337 +0000 UTC m=+0.339841261 container remove e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:35:34 compute-0 systemd[1]: libpod-conmon-e4fb3359dc6e9f768ab427e759e1f0f7605c1b6810cfa77701e0e20aabd64eca.scope: Deactivated successfully.
Dec 03 02:35:34 compute-0 nova_compute[351485]: 2025-12-03 02:35:34.369 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:34 compute-0 podman[482295]: 2025-12-03 02:35:34.377520438 +0000 UTC m=+0.093053477 container create 95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kalam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 03 02:35:34 compute-0 podman[482295]: 2025-12-03 02:35:34.339162986 +0000 UTC m=+0.054696075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:35:34 compute-0 systemd[1]: Started libpod-conmon-95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7.scope.
Dec 03 02:35:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5866bda366566b0ac422ec0f04d40099966269e2636e4b35400a776b24ecac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5866bda366566b0ac422ec0f04d40099966269e2636e4b35400a776b24ecac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5866bda366566b0ac422ec0f04d40099966269e2636e4b35400a776b24ecac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5866bda366566b0ac422ec0f04d40099966269e2636e4b35400a776b24ecac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:34 compute-0 podman[482295]: 2025-12-03 02:35:34.586323561 +0000 UTC m=+0.301856600 container init 95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kalam, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:35:34 compute-0 podman[482295]: 2025-12-03 02:35:34.604105532 +0000 UTC m=+0.319638571 container start 95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kalam, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:35:34 compute-0 podman[482295]: 2025-12-03 02:35:34.61076998 +0000 UTC m=+0.326303009 container attach 95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:35:34 compute-0 ceph-mon[192821]: pgmap v2444: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:35 compute-0 infallible_kalam[482310]: {
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:     "0": [
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:         {
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "devices": [
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "/dev/loop3"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             ],
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_name": "ceph_lv0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_size": "21470642176",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "name": "ceph_lv0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "tags": {
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cluster_name": "ceph",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.crush_device_class": "",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.encrypted": "0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osd_id": "0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.type": "block",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.vdo": "0"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             },
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "type": "block",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "vg_name": "ceph_vg0"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:         }
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:     ],
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:     "1": [
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:         {
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "devices": [
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "/dev/loop4"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             ],
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_name": "ceph_lv1",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_size": "21470642176",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "name": "ceph_lv1",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "tags": {
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cluster_name": "ceph",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.crush_device_class": "",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.encrypted": "0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osd_id": "1",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.type": "block",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.vdo": "0"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             },
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "type": "block",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "vg_name": "ceph_vg1"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:         }
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:     ],
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:     "2": [
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:         {
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "devices": [
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "/dev/loop5"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             ],
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_name": "ceph_lv2",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_size": "21470642176",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "name": "ceph_lv2",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "tags": {
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.cluster_name": "ceph",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.crush_device_class": "",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.encrypted": "0",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osd_id": "2",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.type": "block",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:                 "ceph.vdo": "0"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             },
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "type": "block",
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:             "vg_name": "ceph_vg2"
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:         }
Dec 03 02:35:35 compute-0 infallible_kalam[482310]:     ]
Dec 03 02:35:35 compute-0 infallible_kalam[482310]: }
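[editor's note] The JSON emitted by infallible_kalam above is the payload of `ceph-volume lvm list --format json` requested at 02:35:33: a map from OSD id ("0", "1", "2") to the logical volumes backing it, with the ceph.* LV tags expanded. A small sketch parsing that structure into an osd_id-to-device map; `payload` stands in for the JSON text captured from the container output and is not re-fetched here:

```python
import json

def osd_map(payload: str) -> dict[int, tuple[str, str]]:
    # {osd_id: (lv_path, osd_fsid)} built from the lvm list JSON above.
    data = json.loads(payload)
    out: dict[int, tuple[str, str]] = {}
    for osd_id, lvs in data.items():
        for lv in lvs:
            out[int(osd_id)] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    return out

# With the payload above this yields:
# {0: ("/dev/ceph_vg0/ceph_lv0", "551e0f4a-0b7e-47cf-9522-b82f94d4038c"),
#  1: ("/dev/ceph_vg1/ceph_lv1", "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"),
#  2: ("/dev/ceph_vg2/ceph_lv2", "2ebf7eac-7883-4286-84a2-653e10a1ae8a")}
```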
Dec 03 02:35:35 compute-0 systemd[1]: libpod-95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7.scope: Deactivated successfully.
Dec 03 02:35:35 compute-0 podman[482319]: 2025-12-03 02:35:35.527024367 +0000 UTC m=+0.046510443 container died 95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:35:35 compute-0 nova_compute[351485]: 2025-12-03 02:35:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f5866bda366566b0ac422ec0f04d40099966269e2636e4b35400a776b24ecac-merged.mount: Deactivated successfully.
Dec 03 02:35:35 compute-0 nova_compute[351485]: 2025-12-03 02:35:35.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:35:35 compute-0 nova_compute[351485]: 2025-12-03 02:35:35.613 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:35:35 compute-0 nova_compute[351485]: 2025-12-03 02:35:35.613 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:35:35 compute-0 nova_compute[351485]: 2025-12-03 02:35:35.614 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:35:35 compute-0 nova_compute[351485]: 2025-12-03 02:35:35.614 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:35:35 compute-0 podman[482319]: 2025-12-03 02:35:35.628011497 +0000 UTC m=+0.147497543 container remove 95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:35:35 compute-0 systemd[1]: libpod-conmon-95b4485d7058d8a5fd37dc3ca6161faca4ace60757d474dea639d7e1b1b6ffc7.scope: Deactivated successfully.
Dec 03 02:35:35 compute-0 sudo[482194]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:35 compute-0 sudo[482334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:35 compute-0 sudo[482334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:35 compute-0 sudo[482334]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:35 compute-0 sudo[482378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:35:35 compute-0 sudo[482378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:35 compute-0 sudo[482378]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:35:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050057259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:35:36 compute-0 sudo[482403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:36 compute-0 sudo[482403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:36 compute-0 sudo[482403]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.110 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:35:36 compute-0 sudo[482430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:35:36 compute-0 sudo[482430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.530 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.531 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3881MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.531 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.531 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.618 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.619 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.641 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.677 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.677 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
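
The inventory dict in the two entries above is what drives scheduling: placement derives schedulable capacity per resource class as roughly (total - reserved) * allocation_ratio. A minimal Python sketch of that arithmetic, with the dict values copied from the log entry above; the capacity formula is the standard placement rule, not code taken from this host:

    # Sketch: capacity implied by the ProviderTree inventory logged above.
    # Values copied from the log; (total - reserved) * allocation_ratio is
    # the usual placement capacity rule.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So this 8-vCPU node advertises 32 schedulable VCPUs at the 4.0 overcommit ratio, while disk is effectively undercommitted at 0.9.
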
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.688 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.712 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.726 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:35:36 compute-0 podman[482491]: 2025-12-03 02:35:36.772418132 +0000 UTC m=+0.096440412 container create ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:35:36 compute-0 podman[482491]: 2025-12-03 02:35:36.735372967 +0000 UTC m=+0.059395317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:35:36 compute-0 ceph-mon[192821]: pgmap v2445: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:36 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2050057259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:35:36 compute-0 systemd[1]: Started libpod-conmon-ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c.scope.
Dec 03 02:35:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:35:36 compute-0 podman[482491]: 2025-12-03 02:35:36.898978154 +0000 UTC m=+0.223000424 container init ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:35:36 compute-0 podman[482491]: 2025-12-03 02:35:36.907368821 +0000 UTC m=+0.231391071 container start ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:35:36 compute-0 podman[482491]: 2025-12-03 02:35:36.911742484 +0000 UTC m=+0.235764774 container attach ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:35:36 compute-0 interesting_lamarr[482507]: 167 167
Dec 03 02:35:36 compute-0 systemd[1]: libpod-ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c.scope: Deactivated successfully.
Dec 03 02:35:36 compute-0 podman[482491]: 2025-12-03 02:35:36.9144207 +0000 UTC m=+0.238442990 container died ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-85ec873e8659d37c996818f5925f1dda0b0cdd5f151bc4f49f7dd08a35d85ea6-merged.mount: Deactivated successfully.
Dec 03 02:35:36 compute-0 nova_compute[351485]: 2025-12-03 02:35:36.958 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:36 compute-0 podman[482491]: 2025-12-03 02:35:36.976382918 +0000 UTC m=+0.300405178 container remove ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:35:36 compute-0 systemd[1]: libpod-conmon-ab2471ff51ae23edb8d9e7e90cc6c99b9327b2866eb3d14ed72ae0d033483b6c.scope: Deactivated successfully.
Dec 03 02:35:37 compute-0 podman[482549]: 2025-12-03 02:35:37.18730373 +0000 UTC m=+0.074599866 container create 13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:35:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:35:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264054738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:35:37 compute-0 nova_compute[351485]: 2025-12-03 02:35:37.221 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
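
The resource audit above shells out to `ceph df --format=json` (twice in this cycle) and parses the JSON it returns. A hedged sketch of doing the same by hand, assuming the client.openstack keyring and /etc/ceph/ceph.conf referenced in the log are reachable; the stats/pools field names follow the usual `ceph df --format=json` schema, which can vary across Ceph releases:

    # Sketch, not nova's code: run the same command the audit logs above and
    # pull per-pool usage out of the JSON reply.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True,
    ).stdout
    for pool in json.loads(out)['pools']:
        print(pool['name'], pool['stats']['bytes_used'], pool['stats']['max_avail'])
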
Dec 03 02:35:37 compute-0 nova_compute[351485]: 2025-12-03 02:35:37.232 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:35:37 compute-0 podman[482549]: 2025-12-03 02:35:37.151960983 +0000 UTC m=+0.039257149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:35:37 compute-0 systemd[1]: Started libpod-conmon-13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5.scope.
Dec 03 02:35:37 compute-0 nova_compute[351485]: 2025-12-03 02:35:37.268 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:35:37 compute-0 nova_compute[351485]: 2025-12-03 02:35:37.271 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:35:37 compute-0 nova_compute[351485]: 2025-12-03 02:35:37.271 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:35:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6609ec32b72a7df5b38623070bdaad2c1ff6fe46a0aa920bc3cfdd43d1509e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6609ec32b72a7df5b38623070bdaad2c1ff6fe46a0aa920bc3cfdd43d1509e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6609ec32b72a7df5b38623070bdaad2c1ff6fe46a0aa920bc3cfdd43d1509e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6609ec32b72a7df5b38623070bdaad2c1ff6fe46a0aa920bc3cfdd43d1509e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:35:37 compute-0 podman[482549]: 2025-12-03 02:35:37.419444681 +0000 UTC m=+0.306740887 container init 13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:35:37 compute-0 podman[482549]: 2025-12-03 02:35:37.446674919 +0000 UTC m=+0.333971085 container start 13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 03 02:35:37 compute-0 podman[482549]: 2025-12-03 02:35:37.458066 +0000 UTC m=+0.345362216 container attach 13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:35:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:37 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/264054738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]: {
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "osd_id": 2,
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "type": "bluestore"
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:     },
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "osd_id": 1,
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "type": "bluestore"
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:     },
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "osd_id": 0,
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:         "type": "bluestore"
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]:     }
Dec 03 02:35:38 compute-0 eloquent_hypatia[482567]: }
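
That container output is the result of the `ceph-volume ... raw list --format json` invocation launched via sudo above: a JSON object keyed by OSD UUID. A small self-contained sketch indexing it by osd_id, with the values copied from the block above and trimmed to the fields used here:

    # Sketch: the `raw list` payload logged above, keyed by osd_uuid and
    # reduced to the fields this example uses.
    raw_list = {
        '2ebf7eac-7883-4286-84a2-653e10a1ae8a': {'osd_id': 2, 'device': '/dev/mapper/ceph_vg2-ceph_lv2'},
        '38b78a6e-cf5e-4c74-a51c-1bb51cf53a18': {'osd_id': 1, 'device': '/dev/mapper/ceph_vg1-ceph_lv1'},
        '551e0f4a-0b7e-47cf-9522-b82f94d4038c': {'osd_id': 0, 'device': '/dev/mapper/ceph_vg0-ceph_lv0'},
    }
    devices = {osd['osd_id']: osd['device'] for osd in raw_list.values()}
    print(devices)   # {2: '...ceph_lv2', 1: '...ceph_lv1', 0: '...ceph_lv0'}

All three OSDs report the same ceph_fsid (3765feb2-36f8-5b86-b74c-64e9221f9c4c), matching the --fsid passed to cephadm in the sudo command above.
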
Dec 03 02:35:38 compute-0 systemd[1]: libpod-13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5.scope: Deactivated successfully.
Dec 03 02:35:38 compute-0 podman[482549]: 2025-12-03 02:35:38.511508498 +0000 UTC m=+1.398804634 container died 13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:35:38 compute-0 systemd[1]: libpod-13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5.scope: Consumed 1.071s CPU time.
Dec 03 02:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6609ec32b72a7df5b38623070bdaad2c1ff6fe46a0aa920bc3cfdd43d1509e8-merged.mount: Deactivated successfully.
Dec 03 02:35:38 compute-0 podman[482549]: 2025-12-03 02:35:38.601063876 +0000 UTC m=+1.488360012 container remove 13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:35:38 compute-0 systemd[1]: libpod-conmon-13e77dc1cf217feda9a40948b830acfce1af34f35b9db6fbbd74d75df54f67d5.scope: Deactivated successfully.
Dec 03 02:35:38 compute-0 sudo[482430]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:35:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:35:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:35:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:35:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d106c2c2-f8ee-4216-b0c5-e4690ad85100 does not exist
Dec 03 02:35:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1ec6a1b9-c262-4d6c-a156-731f2a0e2a9a does not exist
Dec 03 02:35:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:38 compute-0 sudo[482611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:35:38 compute-0 sudo[482611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:38 compute-0 sudo[482611]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:38 compute-0 ceph-mon[192821]: pgmap v2446: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:35:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:35:38 compute-0 sudo[482636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:35:38 compute-0 sudo[482636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:35:38 compute-0 sudo[482636]: pam_unix(sudo:session): session closed for user root
Dec 03 02:35:39 compute-0 podman[482662]: 2025-12-03 02:35:39.098918575 +0000 UTC m=+0.100425255 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 podman[482660]: 2025-12-03 02:35:39.116578533 +0000 UTC m=+0.120925093 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
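
The "pg target" figures in the pg_autoscaler lines above are consistent with ratio * bias * (target PGs per OSD * OSD count), before quantization to an allowed pg_num. With this cluster's three OSDs and the default mon_target_pg_per_osd of 100 (both assumptions; neither value appears in these lines), the arithmetic reproduces the logged figures:

    # Sketch: recompute the logged pg targets under the assumption of
    # 100 target PGs/OSD * 3 OSDs = 300. Ratios and biases copied from
    # the autoscaler lines above.
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'images':             (0.0009191400908380543, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * 100 * 3)
    # .mgr 0.0021557..., images 0.27574..., cephfs.cephfs.meta 0.00061047...
    # -- matching the 'pg target' values logged before quantization.

Every target quantizes well below each pool's current pg_num, which is why the autoscaler proposes no changes here (cephfs.cephfs.meta's "quantized to 16 (current 32)" being the only downward pressure).
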
Dec 03 02:35:39 compute-0 podman[482661]: 2025-12-03 02:35:39.12494237 +0000 UTC m=+0.128592570 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 02:35:39 compute-0 nova_compute[351485]: 2025-12-03 02:35:39.274 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:39 compute-0 nova_compute[351485]: 2025-12-03 02:35:39.274 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:35:39 compute-0 nova_compute[351485]: 2025-12-03 02:35:39.274 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:35:39 compute-0 nova_compute[351485]: 2025-12-03 02:35:39.319 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:35:39 compute-0 nova_compute[351485]: 2025-12-03 02:35:39.319 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:39 compute-0 nova_compute[351485]: 2025-12-03 02:35:39.373 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:40 compute-0 nova_compute[351485]: 2025-12-03 02:35:40.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:40 compute-0 ceph-mon[192821]: pgmap v2447: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:41 compute-0 nova_compute[351485]: 2025-12-03 02:35:41.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:41 compute-0 nova_compute[351485]: 2025-12-03 02:35:41.963 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:42 compute-0 ceph-mon[192821]: pgmap v2448: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:43 compute-0 nova_compute[351485]: 2025-12-03 02:35:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:43 compute-0 nova_compute[351485]: 2025-12-03 02:35:43.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:44 compute-0 nova_compute[351485]: 2025-12-03 02:35:44.375 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:44 compute-0 ceph-mon[192821]: pgmap v2449: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:46 compute-0 nova_compute[351485]: 2025-12-03 02:35:46.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:35:46 compute-0 nova_compute[351485]: 2025-12-03 02:35:46.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:35:46 compute-0 sshd-session[482720]: Received disconnect from 154.113.10.113 port 58734:11: Bye Bye [preauth]
Dec 03 02:35:46 compute-0 sshd-session[482720]: Disconnected from authenticating user root 154.113.10.113 port 58734 [preauth]
Dec 03 02:35:46 compute-0 ceph-mon[192821]: pgmap v2450: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:46 compute-0 nova_compute[351485]: 2025-12-03 02:35:46.966 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451190324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451190324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:35:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/451190324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:35:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/451190324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:35:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:48 compute-0 podman[482722]: 2025-12-03 02:35:48.886386438 +0000 UTC m=+0.121718286 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:35:48 compute-0 ceph-mon[192821]: pgmap v2451: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:49 compute-0 nova_compute[351485]: 2025-12-03 02:35:49.380 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:51 compute-0 ceph-mon[192821]: pgmap v2452: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:51 compute-0 podman[482745]: 2025-12-03 02:35:51.893119025 +0000 UTC m=+0.110628683 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=)
Dec 03 02:35:51 compute-0 podman[482744]: 2025-12-03 02:35:51.895708548 +0000 UTC m=+0.124693879 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:35:51 compute-0 podman[482743]: 2025-12-03 02:35:51.907868421 +0000 UTC m=+0.150552449 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Dec 03 02:35:51 compute-0 podman[482753]: 2025-12-03 02:35:51.921867296 +0000 UTC m=+0.135954587 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 03 02:35:51 compute-0 podman[482742]: 2025-12-03 02:35:51.932322512 +0000 UTC m=+0.174893807 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:35:51 compute-0 nova_compute[351485]: 2025-12-03 02:35:51.969 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:53 compute-0 ceph-mon[192821]: pgmap v2453: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:54 compute-0 nova_compute[351485]: 2025-12-03 02:35:54.384 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:55 compute-0 ceph-mon[192821]: pgmap v2454: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:56 compute-0 nova_compute[351485]: 2025-12-03 02:35:56.972 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:57 compute-0 ceph-mon[192821]: pgmap v2455: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:35:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:35:59 compute-0 ceph-mon[192821]: pgmap v2456: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:59 compute-0 nova_compute[351485]: 2025-12-03 02:35:59.387 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:35:59.673 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:35:59.673 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:35:59.674 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:35:59 compute-0 podman[158098]: time="2025-12-03T02:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:35:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:35:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:35:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8208 "" "Go-http-client/1.1"
Dec 03 02:36:01 compute-0 ceph-mon[192821]: pgmap v2457: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:01 compute-0 openstack_network_exporter[368278]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:36:01 compute-0 openstack_network_exporter[368278]: ERROR   02:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:36:01 compute-0 openstack_network_exporter[368278]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:36:01 compute-0 openstack_network_exporter[368278]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:36:01 compute-0 openstack_network_exporter[368278]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:36:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:01 compute-0 nova_compute[351485]: 2025-12-03 02:36:01.976 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:03 compute-0 ceph-mon[192821]: pgmap v2458: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:04 compute-0 nova_compute[351485]: 2025-12-03 02:36:04.390 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:05 compute-0 ceph-mon[192821]: pgmap v2459: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:06 compute-0 ceph-mon[192821]: pgmap v2460: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:06 compute-0 nova_compute[351485]: 2025-12-03 02:36:06.979 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:08 compute-0 ceph-mon[192821]: pgmap v2461: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:09 compute-0 nova_compute[351485]: 2025-12-03 02:36:09.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:09 compute-0 podman[482842]: 2025-12-03 02:36:09.836827477 +0000 UTC m=+0.095651010 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:36:09 compute-0 podman[482843]: 2025-12-03 02:36:09.882759673 +0000 UTC m=+0.136291227 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:36:09 compute-0 podman[482844]: 2025-12-03 02:36:09.902709986 +0000 UTC m=+0.139551009 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 03 02:36:10 compute-0 ceph-mon[192821]: pgmap v2462: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:11 compute-0 nova_compute[351485]: 2025-12-03 02:36:11.982 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:12 compute-0 ceph-mon[192821]: pgmap v2463: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:14 compute-0 nova_compute[351485]: 2025-12-03 02:36:14.396 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:14 compute-0 ceph-mon[192821]: pgmap v2464: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:16 compute-0 rsyslogd[188612]: imjournal: 16637 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 03 02:36:16 compute-0 ceph-mon[192821]: pgmap v2465: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:16 compute-0 nova_compute[351485]: 2025-12-03 02:36:16.985 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:18 compute-0 ceph-mon[192821]: pgmap v2466: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:19 compute-0 nova_compute[351485]: 2025-12-03 02:36:19.399 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:19 compute-0 podman[482902]: 2025-12-03 02:36:19.894685068 +0000 UTC m=+0.151051894 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:36:20 compute-0 ceph-mon[192821]: pgmap v2467: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:21 compute-0 nova_compute[351485]: 2025-12-03 02:36:21.988 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:22 compute-0 podman[482921]: 2025-12-03 02:36:22.876781443 +0000 UTC m=+0.116722335 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, version=9.6, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 03 02:36:22 compute-0 podman[482925]: 2025-12-03 02:36:22.893286179 +0000 UTC m=+0.112241639 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 02:36:22 compute-0 podman[482922]: 2025-12-03 02:36:22.89828424 +0000 UTC m=+0.130760712 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:36:22 compute-0 podman[482923]: 2025-12-03 02:36:22.911109592 +0000 UTC m=+0.146229778 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Dec 03 02:36:22 compute-0 podman[482920]: 2025-12-03 02:36:22.916389681 +0000 UTC m=+0.162497687 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 03 02:36:22 compute-0 ceph-mon[192821]: pgmap v2468: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:24 compute-0 nova_compute[351485]: 2025-12-03 02:36:24.402 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:24 compute-0 ceph-mon[192821]: pgmap v2469: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:25 compute-0 nova_compute[351485]: 2025-12-03 02:36:25.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:26 compute-0 nova_compute[351485]: 2025-12-03 02:36:26.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:27 compute-0 ceph-mon[192821]: pgmap v2470: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:28 compute-0 ceph-mon[192821]: pgmap v2471: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:36:28
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'volumes']
Dec 03 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:36:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:36:29 compute-0 nova_compute[351485]: 2025-12-03 02:36:29.406 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:29 compute-0 podman[158098]: time="2025-12-03T02:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:36:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec 03 02:36:30 compute-0 ceph-mon[192821]: pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:36:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:31 compute-0 nova_compute[351485]: 2025-12-03 02:36:31.994 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:32 compute-0 ceph-mon[192821]: pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:34 compute-0 nova_compute[351485]: 2025-12-03 02:36:34.407 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:34 compute-0 ceph-mon[192821]: pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:36 compute-0 ceph-mon[192821]: pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:36 compute-0 nova_compute[351485]: 2025-12-03 02:36:36.997 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.670 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.671 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.671 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.671 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.672 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:36:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:36:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452019504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.716 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.718 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3949MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.718 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.718 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:36:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.803 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.803 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.820 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:36:38 compute-0 ceph-mon[192821]: pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:38 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3452019504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:36:39 compute-0 sudo[483068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:39 compute-0 sudo[483068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:36:39 compute-0 sudo[483068]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:39 compute-0 sudo[483093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:36:39 compute-0 sudo[483093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:39 compute-0 sudo[483093]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:36:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017010438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.331 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.346 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.365 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.368 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.369 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.409 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:39 compute-0 sudo[483119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:39 compute-0 sudo[483119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:39 compute-0 sudo[483119]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:39 compute-0 sudo[483145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:36:39 compute-0 sudo[483145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:39 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1017010438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:36:40 compute-0 sudo[483145]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:36:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0b91da6f-62bd-43e6-af92-4a312f8e24b0 does not exist
Dec 03 02:36:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 24e99ec4-2549-4ae5-b227-9a173cd6bb42 does not exist
Dec 03 02:36:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 31d7cbd6-73a9-47f6-b65f-37c5a89ca665 does not exist
Dec 03 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.370 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.371 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.371 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.396 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:36:40 compute-0 sudo[483201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:40 compute-0 sudo[483201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:40 compute-0 sudo[483201]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:40 compute-0 podman[483225]: 2025-12-03 02:36:40.673909537 +0000 UTC m=+0.112404753 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:36:40 compute-0 sudo[483244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:36:40 compute-0 sudo[483244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:40 compute-0 podman[483226]: 2025-12-03 02:36:40.698954104 +0000 UTC m=+0.126364127 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 02:36:40 compute-0 sudo[483244]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:40 compute-0 podman[483227]: 2025-12-03 02:36:40.704616594 +0000 UTC m=+0.128624911 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:36:40 compute-0 sudo[483309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:40 compute-0 sudo[483309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:40 compute-0 sudo[483309]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:40 compute-0 sudo[483334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:36:40 compute-0 sudo[483334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:40 compute-0 ceph-mon[192821]: pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.536325505 +0000 UTC m=+0.085265807 container create 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.502652014 +0000 UTC m=+0.051592366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:36:41 compute-0 nova_compute[351485]: 2025-12-03 02:36:41.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:41 compute-0 systemd[1]: Started libpod-conmon-443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931.scope.
Dec 03 02:36:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:36:41 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.713257998 +0000 UTC m=+0.262198320 container init 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.731923165 +0000 UTC m=+0.280863457 container start 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:36:41 compute-0 compassionate_elgamal[483414]: 167 167
Dec 03 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.744009626 +0000 UTC m=+0.292949918 container attach 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 02:36:41 compute-0 systemd[1]: libpod-443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931.scope: Deactivated successfully.
Dec 03 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.746980539 +0000 UTC m=+0.295920831 container died 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:36:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d7152e130522d0ab0b5c0ddadb61bb5afe3af1824a80c1e87ff46198567a791-merged.mount: Deactivated successfully.
Dec 03 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.82850704 +0000 UTC m=+0.377447332 container remove 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 02:36:41 compute-0 systemd[1]: libpod-conmon-443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931.scope: Deactivated successfully.
Dec 03 02:36:42 compute-0 nova_compute[351485]: 2025-12-03 02:36:42.000 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.078174185 +0000 UTC m=+0.082449488 container create 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.042632882 +0000 UTC m=+0.046908245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:36:42 compute-0 systemd[1]: Started libpod-conmon-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope.
Dec 03 02:36:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.272490088 +0000 UTC m=+0.276765421 container init 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.30832794 +0000 UTC m=+0.312603243 container start 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.315308597 +0000 UTC m=+0.319583890 container attach 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:36:42 compute-0 nova_compute[351485]: 2025-12-03 02:36:42.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:42 compute-0 ceph-mon[192821]: pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:43 compute-0 nova_compute[351485]: 2025-12-03 02:36:43.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:43 compute-0 nova_compute[351485]: 2025-12-03 02:36:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:43 compute-0 sweet_swanson[483453]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:36:43 compute-0 sweet_swanson[483453]: --> relative data size: 1.0
Dec 03 02:36:43 compute-0 sweet_swanson[483453]: --> All data devices are unavailable
Dec 03 02:36:43 compute-0 systemd[1]: libpod-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope: Deactivated successfully.
Dec 03 02:36:43 compute-0 podman[483438]: 2025-12-03 02:36:43.701195876 +0000 UTC m=+1.705471149 container died 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:36:43 compute-0 systemd[1]: libpod-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope: Consumed 1.345s CPU time.
Dec 03 02:36:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d-merged.mount: Deactivated successfully.
Dec 03 02:36:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:43 compute-0 podman[483438]: 2025-12-03 02:36:43.793008087 +0000 UTC m=+1.797283380 container remove 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:36:43 compute-0 systemd[1]: libpod-conmon-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope: Deactivated successfully.
Dec 03 02:36:43 compute-0 sudo[483334]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:43 compute-0 sudo[483495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:43 compute-0 sudo[483495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:44 compute-0 sudo[483495]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:44 compute-0 sudo[483520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:36:44 compute-0 sudo[483520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:44 compute-0 sudo[483520]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:44 compute-0 sudo[483545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:44 compute-0 sudo[483545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:44 compute-0 sudo[483545]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:44 compute-0 nova_compute[351485]: 2025-12-03 02:36:44.411 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:44 compute-0 sudo[483570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:36:44 compute-0 sudo[483570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:44 compute-0 ceph-mon[192821]: pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.024048737 +0000 UTC m=+0.094457146 container create 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:44.990237263 +0000 UTC m=+0.060645722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:36:45 compute-0 systemd[1]: Started libpod-conmon-44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447.scope.
Dec 03 02:36:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.170171251 +0000 UTC m=+0.240579720 container init 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.188895169 +0000 UTC m=+0.259303588 container start 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.195595158 +0000 UTC m=+0.266003617 container attach 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:36:45 compute-0 gracious_lamarr[483648]: 167 167
Dec 03 02:36:45 compute-0 systemd[1]: libpod-44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447.scope: Deactivated successfully.
Dec 03 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.20167916 +0000 UTC m=+0.272087609 container died 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ca113133e7f2a0563d56180d855de122728c7f375f13230a978fe2cfa79698f-merged.mount: Deactivated successfully.
Dec 03 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.289309233 +0000 UTC m=+0.359717652 container remove 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 02:36:45 compute-0 systemd[1]: libpod-conmon-44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447.scope: Deactivated successfully.
Dec 03 02:36:45 compute-0 nova_compute[351485]: 2025-12-03 02:36:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.591329565 +0000 UTC m=+0.091592386 container create 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.558844268 +0000 UTC m=+0.059107079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:36:45 compute-0 systemd[1]: Started libpod-conmon-0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a.scope.
Dec 03 02:36:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.77719475 +0000 UTC m=+0.277457591 container init 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:36:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.802098903 +0000 UTC m=+0.302361714 container start 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.809047399 +0000 UTC m=+0.309310230 container attach 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.981053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729405981074, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1615, "num_deletes": 251, "total_data_size": 2452599, "memory_usage": 2489040, "flush_reason": "Manual Compaction"}
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729405994777, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2405945, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49404, "largest_seqno": 51018, "table_properties": {"data_size": 2398356, "index_size": 4467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16432, "raw_average_key_size": 20, "raw_value_size": 2382916, "raw_average_value_size": 2960, "num_data_blocks": 199, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729249, "oldest_key_time": 1764729249, "file_creation_time": 1764729405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 13769 microseconds, and 5276 cpu microseconds.
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.994820) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2405945 bytes OK
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.994834) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.996924) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.996935) EVENT_LOG_v1 {"time_micros": 1764729405996932, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.996948) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2445435, prev total WAL file size 2445435, number of live WAL files 2.
Dec 03 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.997839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2349KB)], [119(6887KB)]
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729405997917, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 9458407, "oldest_snapshot_seqno": -1}
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 6521 keys, 7724819 bytes, temperature: kUnknown
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729406060653, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 7724819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7685166, "index_size": 22263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 170533, "raw_average_key_size": 26, "raw_value_size": 7571105, "raw_average_value_size": 1161, "num_data_blocks": 878, "num_entries": 6521, "num_filter_entries": 6521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.061387) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 7724819 bytes
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.064666) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.5 rd, 122.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.1) write-amplify(3.2) OK, records in: 7035, records dropped: 514 output_compression: NoCompression
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.064703) EVENT_LOG_v1 {"time_micros": 1764729406064685, "job": 72, "event": "compaction_finished", "compaction_time_micros": 63262, "compaction_time_cpu_micros": 37005, "output_level": 6, "num_output_files": 1, "total_output_size": 7724819, "num_input_records": 7035, "num_output_records": 6521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729406068136, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729406072166, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.997550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]: {
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:     "0": [
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:         {
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "devices": [
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "/dev/loop3"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             ],
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_name": "ceph_lv0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_size": "21470642176",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "name": "ceph_lv0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "tags": {
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cluster_name": "ceph",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.crush_device_class": "",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.encrypted": "0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osd_id": "0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.type": "block",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.vdo": "0"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             },
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "type": "block",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "vg_name": "ceph_vg0"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:         }
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:     ],
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:     "1": [
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:         {
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "devices": [
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "/dev/loop4"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             ],
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_name": "ceph_lv1",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_size": "21470642176",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "name": "ceph_lv1",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "tags": {
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cluster_name": "ceph",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.crush_device_class": "",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.encrypted": "0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osd_id": "1",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.type": "block",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.vdo": "0"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             },
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "type": "block",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "vg_name": "ceph_vg1"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:         }
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:     ],
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:     "2": [
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:         {
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "devices": [
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "/dev/loop5"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             ],
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_name": "ceph_lv2",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_size": "21470642176",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "name": "ceph_lv2",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "tags": {
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.cluster_name": "ceph",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.crush_device_class": "",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.encrypted": "0",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osd_id": "2",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.type": "block",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:                 "ceph.vdo": "0"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             },
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "type": "block",
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:             "vg_name": "ceph_vg2"
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:         }
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]:     ]
Dec 03 02:36:46 compute-0 nervous_ptolemy[483688]: }
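[annotation] The JSON emitted by the nervous_ptolemy container matches the shape of `ceph-volume lvm list --format json`: keys are OSD ids, values are lists of LV records whose ceph.* LVM tags carry the cluster fsid, OSD fsid, and backing device. A sketch that condenses it to one line per OSD, assuming the blob was saved to lvm_list.json (a hypothetical filename):

    import json

    # Map each OSD id to its logical volume, backing device, and OSD fsid,
    # using the fields present in the ceph-volume lvm list output above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={lv['tags']['ceph.osd_fsid']}")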
Dec 03 02:36:46 compute-0 systemd[1]: libpod-0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a.scope: Deactivated successfully.
Dec 03 02:36:46 compute-0 podman[483673]: 2025-12-03 02:36:46.659680924 +0000 UTC m=+1.159943745 container died 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5-merged.mount: Deactivated successfully.
Dec 03 02:36:46 compute-0 podman[483673]: 2025-12-03 02:36:46.762142495 +0000 UTC m=+1.262405286 container remove 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:36:46 compute-0 systemd[1]: libpod-conmon-0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a.scope: Deactivated successfully.
Dec 03 02:36:46 compute-0 sudo[483570]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:46 compute-0 sudo[483707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:46 compute-0 sudo[483707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:46 compute-0 sudo[483707]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:46 compute-0 ceph-mon[192821]: pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:47 compute-0 nova_compute[351485]: 2025-12-03 02:36:47.004 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:47 compute-0 sudo[483732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:36:47 compute-0 sudo[483732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:47 compute-0 sudo[483732]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:36:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942960392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:36:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:36:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942960392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
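[annotation] The two audit entries above show client.openstack polling capacity: a `df` and an `osd pool get-quota` on the volumes pool, both as JSON. A sketch reproducing the same two mon commands with the ceph CLI, assuming a local ceph.conf and a keyring authorized to run them (the pool name "volumes" is taken from the log):

    import json
    import subprocess

    # Same queries the audit log records: cluster df plus the quota on
    # the "volumes" pool, both in JSON form.
    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))
    print(df.get("stats", {}).get("total_avail_bytes"),
          quota.get("quota_max_bytes"))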
Dec 03 02:36:47 compute-0 sudo[483757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:47 compute-0 sudo[483757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:47 compute-0 sudo[483757]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:47 compute-0 sudo[483782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:36:47 compute-0 sudo[483782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:47 compute-0 nova_compute[351485]: 2025-12-03 02:36:47.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:36:47 compute-0 nova_compute[351485]: 2025-12-03 02:36:47.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:36:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:47 compute-0 podman[483843]: 2025-12-03 02:36:47.89343003 +0000 UTC m=+0.071612832 container create de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:36:47 compute-0 systemd[1]: Started libpod-conmon-de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515.scope.
Dec 03 02:36:47 compute-0 podman[483843]: 2025-12-03 02:36:47.872969333 +0000 UTC m=+0.051152135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:36:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:36:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/942960392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:36:48 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/942960392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.035782847 +0000 UTC m=+0.213965699 container init de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.053644491 +0000 UTC m=+0.231827293 container start de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.061231266 +0000 UTC m=+0.239414078 container attach de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 02:36:48 compute-0 happy_yalow[483859]: 167 167
Dec 03 02:36:48 compute-0 systemd[1]: libpod-de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515.scope: Deactivated successfully.
Dec 03 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.068090659 +0000 UTC m=+0.246273471 container died de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52a5c88f2676d11979354023c450777e5d3d9f2da7439dd53bccdf967564236-merged.mount: Deactivated successfully.
Dec 03 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.136937432 +0000 UTC m=+0.315120254 container remove de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:36:48 compute-0 systemd[1]: libpod-conmon-de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515.scope: Deactivated successfully.
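[annotation] The happy_yalow container above runs through the full podman lifecycle (create, init, start, attach, died, remove) in well under a second; this churn is what cephadm's per-command container invocations look like in the journal. A sketch that spots such ephemeral containers from podman's own event stream, assuming a podman 4.x `podman events --stream=false` that emits one JSON object per line with "ID" and "Status" fields:

    import json
    import subprocess

    # Collect recent create/died/remove events per container id and count
    # the ones that were created and torn down within the window.
    out = subprocess.check_output(
        ["podman", "events", "--stream=false", "--since", "10m",
         "--format", "json"], text=True)
    events_by_id = {}
    for line in out.splitlines():
        ev = json.loads(line)
        if ev.get("Status") in ("create", "died", "remove"):
            events_by_id.setdefault(ev["ID"], []).append(ev["Status"])
    short_lived = [cid for cid, evs in events_by_id.items()
                   if "create" in evs and "remove" in evs]
    print(f"{len(short_lived)} containers created and removed in the window")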
Dec 03 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.39201201 +0000 UTC m=+0.089014383 container create f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.358025411 +0000 UTC m=+0.055027784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:36:48 compute-0 systemd[1]: Started libpod-conmon-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope.
Dec 03 02:36:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.559481486 +0000 UTC m=+0.256483879 container init f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.586589621 +0000 UTC m=+0.283592034 container start f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 03 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.59329277 +0000 UTC m=+0.290295183 container attach f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 03 02:36:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:49 compute-0 ceph-mon[192821]: pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:49 compute-0 nova_compute[351485]: 2025-12-03 02:36:49.414 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]: {
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "osd_id": 2,
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "type": "bluestore"
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:     },
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "osd_id": 1,
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "type": "bluestore"
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:     },
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "osd_id": 0,
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:         "type": "bluestore"
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]:     }
Dec 03 02:36:49 compute-0 hopeful_satoshi[483898]: }
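[annotation] The hopeful_satoshi output is the result of the `ceph-volume ... raw list --format json` invocation logged at the sudo line above: a map keyed by OSD fsid with the dm device, osd_id, and bluestore type. A sketch cross-checking it against the earlier lvm listing, assuming both blobs were saved to raw_list.json and lvm_list.json (hypothetical filenames):

    import json

    # Index the lvm records by OSD fsid, then verify every raw-list entry
    # has a matching logical volume.
    raw = json.load(open("raw_list.json"))
    lvm = json.load(open("lvm_list.json"))

    by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
               for lvs in lvm.values() for lv in lvs}
    for fsid, rec in raw.items():
        status = "ok" if fsid in by_fsid else "missing from lvm list"
        print(f"osd.{rec['osd_id']} {rec['device']} ({rec['type']}): {status}")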
Dec 03 02:36:49 compute-0 systemd[1]: libpod-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope: Deactivated successfully.
Dec 03 02:36:49 compute-0 podman[483882]: 2025-12-03 02:36:49.78680999 +0000 UTC m=+1.483812363 container died f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:36:49 compute-0 systemd[1]: libpod-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope: Consumed 1.200s CPU time.
Dec 03 02:36:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04-merged.mount: Deactivated successfully.
Dec 03 02:36:49 compute-0 podman[483882]: 2025-12-03 02:36:49.893778969 +0000 UTC m=+1.590781332 container remove f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 03 02:36:49 compute-0 systemd[1]: libpod-conmon-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope: Deactivated successfully.
Dec 03 02:36:49 compute-0 sudo[483782]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:36:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:36:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:36:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:36:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aab8e85d-a638-4482-990b-b01009e8c470 does not exist
Dec 03 02:36:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b93fe401-0a0e-4ae1-8a1b-8ff79f74ee1a does not exist
Dec 03 02:36:50 compute-0 podman[483944]: 2025-12-03 02:36:50.054313029 +0000 UTC m=+0.113598387 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:36:50 compute-0 sudo[483957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:36:50 compute-0 sudo[483957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:50 compute-0 sudo[483957]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:50 compute-0 sudo[483987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:36:50 compute-0 sudo[483987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:36:50 compute-0 sudo[483987]: pam_unix(sudo:session): session closed for user root
Dec 03 02:36:50 compute-0 ceph-mon[192821]: pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:36:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:36:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:52 compute-0 nova_compute[351485]: 2025-12-03 02:36:52.009 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:52 compute-0 ceph-mon[192821]: pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:53 compute-0 podman[484014]: 2025-12-03 02:36:53.855382435 +0000 UTC m=+0.100116686 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:36:53 compute-0 podman[484016]: 2025-12-03 02:36:53.881990756 +0000 UTC m=+0.107594087 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:36:53 compute-0 podman[484015]: 2025-12-03 02:36:53.887701447 +0000 UTC m=+0.123817565 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, name=ubi9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 02:36:53 compute-0 podman[484013]: 2025-12-03 02:36:53.889327623 +0000 UTC m=+0.134553768 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=)
Dec 03 02:36:53 compute-0 podman[484012]: 2025-12-03 02:36:53.920906634 +0000 UTC m=+0.166545281 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller)
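[annotation] The burst of health_status events above is podman executing each container's configured healthcheck (the 'test': '/openstack/healthcheck ...' entries in config_data) and reporting health_status=healthy with a zero failing streak. A sketch reading the same state on demand via `podman inspect`, assuming the standard libpod inspect layout (newer podman puts it under State.Health, older under State.Healthcheck); the container name is taken from the log:

    import json
    import subprocess

    # Fetch current healthcheck status and failing streak for one of the
    # containers whose periodic health_status events appear above.
    out = subprocess.check_output(
        ["podman", "inspect", "ovn_controller"], text=True)
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), "failing streak:", health.get("FailingStreak"))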
Dec 03 02:36:54 compute-0 nova_compute[351485]: 2025-12-03 02:36:54.418 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:55 compute-0 ceph-mon[192821]: pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:57 compute-0 nova_compute[351485]: 2025-12-03 02:36:57.014 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:57 compute-0 ceph-mon[192821]: pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:36:59 compute-0 ceph-mon[192821]: pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:36:59 compute-0 nova_compute[351485]: 2025-12-03 02:36:59.422 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:36:59.674 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:36:59.675 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:36:59.675 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:36:59 compute-0 podman[158098]: time="2025-12-03T02:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8212 "" "Go-http-client/1.1"
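[annotation] Those two access-log lines are the libpod REST API being scraped over the local socket (this is how podman_exporter gathers its metrics; its config elsewhere in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A sketch replaying the first request, with the socket path and API version taken from the log:

    import http.client
    import json
    import socket

    # Plain http.client over an AF_UNIX socket; no third-party deps.
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")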
Dec 03 02:36:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:01 compute-0 ceph-mon[192821]: pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
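[annotation] The exporter errors above all reduce to one cause: no control socket files where it looks for ovsdb-server and ovn-northd (ovn-northd would not normally run on this compute node at all). A sketch checking which .ctl sockets actually exist, assuming the usual OVS/OVN rundir defaults, which may differ on a given host:

    import glob

    # List OVS/OVN control sockets under the common runtime directories;
    # an empty result matches the "no control socket files found" errors.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl",
                    "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none")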
Dec 03 02:37:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:02 compute-0 nova_compute[351485]: 2025-12-03 02:37:02.017 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:03 compute-0 ceph-mon[192821]: pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:04 compute-0 nova_compute[351485]: 2025-12-03 02:37:04.426 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:05 compute-0 ceph-mon[192821]: pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:07 compute-0 nova_compute[351485]: 2025-12-03 02:37:07.020 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:07 compute-0 ceph-mon[192821]: pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:09 compute-0 ceph-mon[192821]: pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:09 compute-0 nova_compute[351485]: 2025-12-03 02:37:09.429 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:10 compute-0 podman[484116]: 2025-12-03 02:37:10.869615393 +0000 UTC m=+0.112672521 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 03 02:37:10 compute-0 podman[484118]: 2025-12-03 02:37:10.876079165 +0000 UTC m=+0.111796766 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:37:10 compute-0 podman[484117]: 2025-12-03 02:37:10.87945678 +0000 UTC m=+0.120151681 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
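Each podman health_status line above reports health_status=healthy and health_failing_streak=0, the result of periodically re-running the container's configured healthcheck test command (here mounted at /openstack/healthcheck). A rough sketch of those two fields' semantics, with an assumed streak limit and '/bin/true' standing in for the real test command; this illustrates the reported fields, not podman's actual implementation:

    import subprocess
    import time

    def watch_health(test_cmd: list[str], checks: int = 3,
                     interval_s: float = 1.0, streak_limit: int = 3) -> None:
        """Re-run a healthcheck command and track the failing streak,
        mirroring the health_status / health_failing_streak fields above."""
        failing_streak = 0
        for _ in range(checks):
            ok = subprocess.run(test_cmd, capture_output=True).returncode == 0
            failing_streak = 0 if ok else failing_streak + 1
            status = "healthy" if failing_streak < streak_limit else "unhealthy"
            print(f"health_status={status}, health_failing_streak={failing_streak}")
            time.sleep(interval_s)

    # The containers above mount a script as /openstack/healthcheck and use it
    # as the test command; '/bin/true' stands in for it here.
    watch_health(["/bin/true"])
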
Dec 03 02:37:11 compute-0 ceph-mon[192821]: pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:12 compute-0 nova_compute[351485]: 2025-12-03 02:37:12.023 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
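The recurring nova_compute DEBUG line above is ovsdbapp's poller reporting that fd 24 became readable ([POLLIN]) and woke the OVSDB IDL loop. The underlying pattern is an ordinary poll() wakeup; a minimal sketch with a self-pipe standing in for the OVSDB socket:

    import os
    import select

    # A pipe stands in for the OVSDB connection whose descriptor appears
    # in the "[POLLIN] on fd 24" lines above.
    r_fd, w_fd = os.pipe()

    poller = select.poll()
    poller.register(r_fd, select.POLLIN)

    os.write(w_fd, b"update")           # the server side sends something

    for fd, events in poller.poll():    # blocks until a registered fd is ready
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}: {os.read(fd, 64)!r}")

    os.close(r_fd)
    os.close(w_fd)
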
Dec 03 02:37:13 compute-0 ceph-mon[192821]: pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:14 compute-0 ceph-mon[192821]: pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:14 compute-0 nova_compute[351485]: 2025-12-03 02:37:14.432 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:16 compute-0 ceph-mon[192821]: pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:17 compute-0 nova_compute[351485]: 2025-12-03 02:37:17.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:18 compute-0 ceph-mon[192821]: pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:19 compute-0 nova_compute[351485]: 2025-12-03 02:37:19.438 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.517 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.518 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
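The ceilometer burst above is one complete polling cycle: the manager warns that the [pollsters] source defines more pollsters than worker threads, registers each pollster onto a single-threaded ThreadPoolExecutor, runs local_instances discovery (which finds no instances, hence every "Skip pollster ..., no resources found this cycle" line), and finally logs each pollster as finished. The scheduling shape reduced to a sketch; the pollster names are taken from the lines above, but the executor logic is illustrative rather than ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["memory.usage", "network.outgoing.packets", "cpu",
                 "disk.device.read.bytes", "power.state"]  # subset of the log's list
    workers = 1

    if len(pollsters) > workers:
        print("The number of pollsters is bigger than the number of worker "
              "threads to execute them; expect the cycle to take longer.")

    def run_pollster(name: str, resources: list) -> str:
        if not resources:  # discovery found no local instances this cycle
            return f"Skip pollster {name}, no resources found this cycle"
        return f"Polled {name} for {len(resources)} resources"

    with ThreadPoolExecutor(max_workers=workers) as executor:
        futures = {executor.submit(run_pollster, p, []): p for p in pollsters}
        for future, name in futures.items():
            print(future.result())
            print(f"Finished processing pollster [{name}].")
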
Dec 03 02:37:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:20 compute-0 podman[484174]: 2025-12-03 02:37:20.877758841 +0000 UTC m=+0.127936132 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:37:20 compute-0 ceph-mon[192821]: pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:22 compute-0 nova_compute[351485]: 2025-12-03 02:37:22.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:22 compute-0 ceph-mon[192821]: pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:24 compute-0 nova_compute[351485]: 2025-12-03 02:37:24.439 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:24 compute-0 podman[484197]: 2025-12-03 02:37:24.852733092 +0000 UTC m=+0.086529092 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:37:24 compute-0 podman[484198]: 2025-12-03 02:37:24.875355421 +0000 UTC m=+0.102289787 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, release-0.7.12=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:37:24 compute-0 podman[484196]: 2025-12-03 02:37:24.878591702 +0000 UTC m=+0.119380889 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc.)
Dec 03 02:37:24 compute-0 podman[484202]: 2025-12-03 02:37:24.8799228 +0000 UTC m=+0.102834992 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:37:24 compute-0 podman[484195]: 2025-12-03 02:37:24.90722159 +0000 UTC m=+0.153265215 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:37:24 compute-0 ceph-mon[192821]: pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:25 compute-0 nova_compute[351485]: 2025-12-03 02:37:25.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:25 compute-0 sshd-session[484193]: Received disconnect from 154.113.10.113 port 33742:11: Bye Bye [preauth]
Dec 03 02:37:25 compute-0 sshd-session[484193]: Disconnected from authenticating user root 154.113.10.113 port 33742 [preauth]
Dec 03 02:37:26 compute-0 ceph-mon[192821]: pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:27 compute-0 nova_compute[351485]: 2025-12-03 02:37:27.033 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:37:28
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'backups', 'vms', 'default.rgw.control']
Dec 03 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:37:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:28 compute-0 ceph-mon[192821]: pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:37:29 compute-0 nova_compute[351485]: 2025-12-03 02:37:29.442 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:29 compute-0 podman[158098]: time="2025-12-03T02:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec 03 02:37:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:30 compute-0 ceph-mon[192821]: pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:37:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:32 compute-0 nova_compute[351485]: 2025-12-03 02:37:32.036 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:32 compute-0 ceph-mon[192821]: pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:34 compute-0 nova_compute[351485]: 2025-12-03 02:37:34.446 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:35 compute-0 ceph-mon[192821]: pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:37 compute-0 ceph-mon[192821]: pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.038 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.614 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.616 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.617 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:37:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:37:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670775599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.156 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.599 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.600 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3976MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.601 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.601 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:37:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.810 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.811 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.909 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:37:39 compute-0 ceph-mon[192821]: pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:39 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/670775599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:37:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:37:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376766912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.431 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.445 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.452 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.475 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.479 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.479 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:37:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:40 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2376766912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.0 total, 600.0 interval
                                            Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1352 writes, 6191 keys, 1352 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s
                                            Interval WAL: 1352 writes, 1352 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    100.2      0.65              0.30        36    0.018       0      0       0.0       0.0
                                              L6      1/0    7.37 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.1    133.8    110.2      2.43              1.18        35    0.069    194K    19K       0.0       0.0
                                             Sum      1/0    7.37 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.1    105.7    108.1      3.08              1.48        71    0.043    194K    19K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4    116.3    119.1      0.41              0.19        10    0.041     33K   2561       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    133.8    110.2      2.43              1.18        35    0.069    194K    19K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    100.6      0.64              0.30        35    0.018       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 4800.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.063, interval 0.009
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.32 GB write, 0.07 MB/s write, 0.32 GB read, 0.07 MB/s read, 3.1 seconds
                                            Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 304.00 MB usage: 40.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000546 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2797,38.99 MB,12.827%) FilterBlock(72,541.48 KB,0.173945%) IndexBlock(72,888.86 KB,0.285535%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Dec 03 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.480 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.480 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.481 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.501 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.502 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:41 compute-0 ceph-mon[192821]: pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:41 compute-0 nova_compute[351485]: 2025-12-03 02:37:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:41 compute-0 nova_compute[351485]: 2025-12-03 02:37:41.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:37:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:41 compute-0 podman[484340]: 2025-12-03 02:37:41.870589892 +0000 UTC m=+0.121431617 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:37:41 compute-0 podman[484342]: 2025-12-03 02:37:41.876250572 +0000 UTC m=+0.114614185 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:37:41 compute-0 podman[484341]: 2025-12-03 02:37:41.892404338 +0000 UTC m=+0.137275825 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec 03 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.041 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.592 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.592 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.593 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.618 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:37:43 compute-0 ceph-mon[192821]: pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:43 compute-0 nova_compute[351485]: 2025-12-03 02:37:43.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:44 compute-0 nova_compute[351485]: 2025-12-03 02:37:44.452 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:44 compute-0 nova_compute[351485]: 2025-12-03 02:37:44.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:45 compute-0 ceph-mon[192821]: pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:46 compute-0 nova_compute[351485]: 2025-12-03 02:37:46.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:47 compute-0 nova_compute[351485]: 2025-12-03 02:37:47.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:37:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/304231738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:37:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:37:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/304231738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:37:47 compute-0 ceph-mon[192821]: pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/304231738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:37:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/304231738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:37:47 compute-0 nova_compute[351485]: 2025-12-03 02:37:47.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:37:47 compute-0 nova_compute[351485]: 2025-12-03 02:37:47.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:37:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:49 compute-0 ceph-mon[192821]: pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:49 compute-0 nova_compute[351485]: 2025-12-03 02:37:49.455 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:50 compute-0 sudo[484400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:50 compute-0 sudo[484400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:50 compute-0 sudo[484400]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:50 compute-0 sudo[484425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:37:50 compute-0 sudo[484425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:50 compute-0 sudo[484425]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:50 compute-0 sudo[484450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:50 compute-0 sudo[484450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:50 compute-0 sudo[484450]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:50 compute-0 sudo[484475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 03 02:37:50 compute-0 sudo[484475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:51 compute-0 ceph-mon[192821]: pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:51 compute-0 podman[484540]: 2025-12-03 02:37:51.488036971 +0000 UTC m=+0.127146009 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:37:51 compute-0 podman[484585]: 2025-12-03 02:37:51.700183288 +0000 UTC m=+0.127867989 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:37:51 compute-0 podman[484585]: 2025-12-03 02:37:51.823378295 +0000 UTC m=+0.251062906 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 02:37:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:52 compute-0 nova_compute[351485]: 2025-12-03 02:37:52.047 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:52 compute-0 sudo[484475]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:37:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:37:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:37:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:37:53 compute-0 sudo[484735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:53 compute-0 sudo[484735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:53 compute-0 sudo[484735]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:53 compute-0 ceph-mon[192821]: pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:37:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:37:53 compute-0 sudo[484760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:37:53 compute-0 sudo[484760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:53 compute-0 sudo[484760]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:53 compute-0 sudo[484785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:53 compute-0 sudo[484785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:53 compute-0 sudo[484785]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:53 compute-0 sudo[484810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:37:53 compute-0 sudo[484810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:54 compute-0 sudo[484810]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:37:54 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 84bb91c1-e103-44e1-8d9a-3c014b347111 does not exist
Dec 03 02:37:54 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e5b984aa-e0f1-4827-8dec-f563ca0cc057 does not exist
Dec 03 02:37:54 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 56960e68-9c7d-4699-8448-650ce03aeb03 does not exist
Dec 03 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:37:54 compute-0 sudo[484865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:54 compute-0 sudo[484865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:54 compute-0 sudo[484865]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:54 compute-0 nova_compute[351485]: 2025-12-03 02:37:54.457 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:54 compute-0 sudo[484890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:37:54 compute-0 sudo[484890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:54 compute-0 sudo[484890]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:54 compute-0 sudo[484915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:54 compute-0 sudo[484915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:54 compute-0 sudo[484915]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:54 compute-0 sudo[484940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:37:54 compute-0 sudo[484940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:55 compute-0 ceph-mon[192821]: pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.400717616 +0000 UTC m=+0.076768267 container create e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.373486058 +0000 UTC m=+0.049536719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:37:55 compute-0 systemd[1]: Started libpod-conmon-e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e.scope.
Dec 03 02:37:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.560502355 +0000 UTC m=+0.236552986 container init e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.572125783 +0000 UTC m=+0.248176404 container start e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.576428255 +0000 UTC m=+0.252478896 container attach e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:37:55 compute-0 amazing_banach[485046]: 167 167
Dec 03 02:37:55 compute-0 systemd[1]: libpod-e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e.scope: Deactivated successfully.
Dec 03 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.582965719 +0000 UTC m=+0.259016340 container died e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 03 02:37:55 compute-0 podman[485019]: 2025-12-03 02:37:55.593809025 +0000 UTC m=+0.116744465 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 03 02:37:55 compute-0 podman[485021]: 2025-12-03 02:37:55.595279767 +0000 UTC m=+0.118065323 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 03 02:37:55 compute-0 podman[485020]: 2025-12-03 02:37:55.607140921 +0000 UTC m=+0.118395312 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3812db55ff61d92fc4136f608ac8a91e8842885d7e907742fa2b3f55bdfe373-merged.mount: Deactivated successfully.
Dec 03 02:37:55 compute-0 podman[485016]: 2025-12-03 02:37:55.630741618 +0000 UTC m=+0.153361069 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 03 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.636582912 +0000 UTC m=+0.312633533 container remove e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:37:55 compute-0 podman[485022]: 2025-12-03 02:37:55.637434236 +0000 UTC m=+0.141008680 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:37:55 compute-0 systemd[1]: libpod-conmon-e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e.scope: Deactivated successfully.
Dec 03 02:37:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:55 compute-0 podman[485141]: 2025-12-03 02:37:55.868719323 +0000 UTC m=+0.082040256 container create 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 03 02:37:55 compute-0 podman[485141]: 2025-12-03 02:37:55.83601117 +0000 UTC m=+0.049332153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:37:55 compute-0 systemd[1]: Started libpod-conmon-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope.
Dec 03 02:37:55 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:56 compute-0 podman[485141]: 2025-12-03 02:37:56.023413919 +0000 UTC m=+0.236734832 container init 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 03 02:37:56 compute-0 podman[485141]: 2025-12-03 02:37:56.042895858 +0000 UTC m=+0.256216791 container start 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 03 02:37:56 compute-0 podman[485141]: 2025-12-03 02:37:56.049070573 +0000 UTC m=+0.262391476 container attach 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:37:57 compute-0 nova_compute[351485]: 2025-12-03 02:37:57.050 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:57 compute-0 ceph-mon[192821]: pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:57 compute-0 jolly_solomon[485157]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:37:57 compute-0 jolly_solomon[485157]: --> relative data size: 1.0
Dec 03 02:37:57 compute-0 jolly_solomon[485157]: --> All data devices are unavailable
Dec 03 02:37:57 compute-0 systemd[1]: libpod-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope: Deactivated successfully.
Dec 03 02:37:57 compute-0 podman[485141]: 2025-12-03 02:37:57.405994974 +0000 UTC m=+1.619315897 container died 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:37:57 compute-0 systemd[1]: libpod-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope: Consumed 1.287s CPU time.
Dec 03 02:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125-merged.mount: Deactivated successfully.
Dec 03 02:37:57 compute-0 podman[485141]: 2025-12-03 02:37:57.537819244 +0000 UTC m=+1.751140147 container remove 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:37:57 compute-0 systemd[1]: libpod-conmon-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope: Deactivated successfully.
Dec 03 02:37:57 compute-0 sudo[484940]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:57 compute-0 sudo[485199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:57 compute-0 sudo[485199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:57 compute-0 sudo[485199]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:57 compute-0 sudo[485224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:37:57 compute-0 sudo[485224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:57 compute-0 sudo[485224]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:58 compute-0 sudo[485249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:37:58 compute-0 sudo[485249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:58 compute-0 sudo[485249]: pam_unix(sudo:session): session closed for user root
Dec 03 02:37:58 compute-0 sudo[485274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:37:58 compute-0 sudo[485274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:37:58 compute-0 ceph-mon[192821]: pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:37:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.748890141 +0000 UTC m=+0.084203257 container create 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.710353323 +0000 UTC m=+0.045666499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:37:58 compute-0 systemd[1]: Started libpod-conmon-83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272.scope.
Dec 03 02:37:58 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.901324622 +0000 UTC m=+0.236637768 container init 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.918749214 +0000 UTC m=+0.254062320 container start 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.925973068 +0000 UTC m=+0.261286244 container attach 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:37:58 compute-0 blissful_knuth[485352]: 167 167
Dec 03 02:37:58 compute-0 systemd[1]: libpod-83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272.scope: Deactivated successfully.
Dec 03 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.93029552 +0000 UTC m=+0.265608636 container died 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3862847ff6319251537fc2df48583f6d6ac4c9b912680f740ad06cf878f35675-merged.mount: Deactivated successfully.
Dec 03 02:37:59 compute-0 podman[485336]: 2025-12-03 02:37:59.006886881 +0000 UTC m=+0.342199987 container remove 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:37:59 compute-0 systemd[1]: libpod-conmon-83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272.scope: Deactivated successfully.
Dec 03 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.26800158 +0000 UTC m=+0.087763768 container create 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.229647828 +0000 UTC m=+0.049410056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:37:59 compute-0 systemd[1]: Started libpod-conmon-2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8.scope.
Dec 03 02:37:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:37:59 compute-0 nova_compute[351485]: 2025-12-03 02:37:59.460 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.46537284 +0000 UTC m=+0.285135008 container init 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.485983141 +0000 UTC m=+0.305745289 container start 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.495048367 +0000 UTC m=+0.314810515 container attach 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:37:59.675 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:37:59.676 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:37:59.676 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:37:59 compute-0 podman[158098]: time="2025-12-03T02:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44146 "" "Go-http-client/1.1"
Dec 03 02:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8624 "" "Go-http-client/1.1"
Dec 03 02:37:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:00 compute-0 relaxed_bell[485390]: {
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:     "0": [
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:         {
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "devices": [
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "/dev/loop3"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             ],
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_name": "ceph_lv0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_size": "21470642176",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "name": "ceph_lv0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "tags": {
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cluster_name": "ceph",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.crush_device_class": "",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.encrypted": "0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osd_id": "0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.type": "block",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.vdo": "0"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             },
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "type": "block",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "vg_name": "ceph_vg0"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:         }
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:     ],
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:     "1": [
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:         {
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "devices": [
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "/dev/loop4"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             ],
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_name": "ceph_lv1",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_size": "21470642176",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "name": "ceph_lv1",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "tags": {
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cluster_name": "ceph",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.crush_device_class": "",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.encrypted": "0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osd_id": "1",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.type": "block",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.vdo": "0"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             },
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "type": "block",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "vg_name": "ceph_vg1"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:         }
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:     ],
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:     "2": [
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:         {
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "devices": [
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "/dev/loop5"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             ],
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_name": "ceph_lv2",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_size": "21470642176",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "name": "ceph_lv2",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "tags": {
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.cluster_name": "ceph",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.crush_device_class": "",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.encrypted": "0",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osd_id": "2",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.type": "block",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:                 "ceph.vdo": "0"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             },
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "type": "block",
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:             "vg_name": "ceph_vg2"
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:         }
Dec 03 02:38:00 compute-0 relaxed_bell[485390]:     ]
Dec 03 02:38:00 compute-0 relaxed_bell[485390]: }
Dec 03 02:38:00 compute-0 systemd[1]: libpod-2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8.scope: Deactivated successfully.
Dec 03 02:38:00 compute-0 podman[485400]: 2025-12-03 02:38:00.428136759 +0000 UTC m=+0.050007492 container died 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960-merged.mount: Deactivated successfully.
Dec 03 02:38:00 compute-0 podman[485400]: 2025-12-03 02:38:00.546494639 +0000 UTC m=+0.168365312 container remove 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:38:00 compute-0 systemd[1]: libpod-conmon-2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8.scope: Deactivated successfully.
Dec 03 02:38:00 compute-0 sudo[485274]: pam_unix(sudo:session): session closed for user root
Dec 03 02:38:00 compute-0 sudo[485414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:38:00 compute-0 sudo[485414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:38:00 compute-0 sudo[485414]: pam_unix(sudo:session): session closed for user root
Dec 03 02:38:00 compute-0 sudo[485439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:38:00 compute-0 sudo[485439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:38:00 compute-0 sudo[485439]: pam_unix(sudo:session): session closed for user root
Dec 03 02:38:00 compute-0 ceph-mon[192821]: pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:00 compute-0 sudo[485464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:38:00 compute-0 sudo[485464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:38:00 compute-0 sudo[485464]: pam_unix(sudo:session): session closed for user root
Dec 03 02:38:01 compute-0 sudo[485489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:38:01 compute-0 sudo[485489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
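[annotation] The sudo line above shows how the cephadm mgr module drives this host: it runs a copy of the cephadm binary from /var/lib/ceph/<fsid>/ with --image and --timeout, here wrapping `ceph-volume raw list --format json`. A rough by-hand equivalent, using the packaged cephadm instead of the copied binary (the subprocess wrapper is illustrative; fsid and image digest are taken verbatim from the log):

    import json, subprocess

    fsid = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Same shape as the logged command: cephadm runs ceph-volume in a container.
    out = subprocess.run(
        ["sudo", "cephadm", "--image", image, "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(out))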
Dec 03 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
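[annotation] The openstack_network_exporter errors above recur throughout this section: it cannot find the ovsdb-server or ovn-northd control sockets, so every appctl-style call fails before it starts. A quick check of what the exporter is looking for is to list <daemon>.<pid>.ctl sockets under the usual OVS/OVN run directories (these default paths are an assumption, not something the log states):

    import glob

    # ovs-appctl-style tools locate daemons via <name>.<pid>.ctl unix sockets.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none (consistent with the errors above)")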
Dec 03 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.677162546 +0000 UTC m=+0.107505125 container create bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 03 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.634132881 +0000 UTC m=+0.064475510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:38:01 compute-0 systemd[1]: Started libpod-conmon-bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b.scope.
Dec 03 02:38:01 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.822086215 +0000 UTC m=+0.252428854 container init bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 03 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.834317271 +0000 UTC m=+0.264659860 container start bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.840739952 +0000 UTC m=+0.271082541 container attach bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 03 02:38:01 compute-0 nostalgic_booth[485570]: 167 167
Dec 03 02:38:01 compute-0 systemd[1]: libpod-bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b.scope: Deactivated successfully.
Dec 03 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.843343155 +0000 UTC m=+0.273685744 container died bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:38:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f1533dd154b974d6224bcf91e0849b0809aa8a1b7452aee4c9c92aff92723c3-merged.mount: Deactivated successfully.
Dec 03 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.921792519 +0000 UTC m=+0.352135078 container remove bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:38:01 compute-0 systemd[1]: libpod-conmon-bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b.scope: Deactivated successfully.
Dec 03 02:38:02 compute-0 nova_compute[351485]: 2025-12-03 02:38:02.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.227482296 +0000 UTC m=+0.097863303 container create a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.18900013 +0000 UTC m=+0.059381197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:38:02 compute-0 systemd[1]: Started libpod-conmon-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope.
Dec 03 02:38:02 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
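[annotation] The kernel's "supports timestamps until 2038 (0x7fffffff)" notices mean those overlay mounts sit on XFS without bigtime inodes, so the on-disk timestamp ceiling is a signed 32-bit epoch second. The cutoff quoted in the log can be reproduced directly:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second.
    limit = 0x7FFFFFFF
    print(limit)                                           # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00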
Dec 03 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.427016396 +0000 UTC m=+0.297397473 container init a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.451180068 +0000 UTC m=+0.321561085 container start a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 03 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.457822036 +0000 UTC m=+0.328203093 container attach a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:38:02 compute-0 ceph-mon[192821]: pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]: {
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "osd_id": 2,
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "type": "bluestore"
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:     },
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "osd_id": 1,
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "type": "bluestore"
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:     },
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "osd_id": 0,
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:         "type": "bluestore"
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]:     }
Dec 03 02:38:03 compute-0 vigilant_neumann[485608]: }
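[annotation] The vigilant_neumann output above is the complete `ceph-volume raw list --format json` answer: a map from OSD UUID to the bluestore device backing it. Flattening it into an (osd_id, device) table is a one-line dict comprehension; the literal below just restates two fields per entry from the log:

    raw_list = {
        "2ebf7eac-7883-4286-84a2-653e10a1ae8a":
            {"device": "/dev/mapper/ceph_vg2-ceph_lv2", "osd_id": 2},
        "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18":
            {"device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1},
        "551e0f4a-0b7e-47cf-9522-b82f94d4038c":
            {"device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0},
    }
    # osd_id -> device, printed in OSD order
    by_osd = {v["osd_id"]: v["device"] for v in raw_list.values()}
    for osd_id in sorted(by_osd):
        print(f"osd.{osd_id}: {by_osd[osd_id]}")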
Dec 03 02:38:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:03 compute-0 systemd[1]: libpod-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope: Deactivated successfully.
Dec 03 02:38:03 compute-0 systemd[1]: libpod-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope: Consumed 1.334s CPU time.
Dec 03 02:38:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:03 compute-0 podman[485642]: 2025-12-03 02:38:03.869129113 +0000 UTC m=+0.061070865 container died a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 03 02:38:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a-merged.mount: Deactivated successfully.
Dec 03 02:38:03 compute-0 podman[485642]: 2025-12-03 02:38:03.94487439 +0000 UTC m=+0.136816052 container remove a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:38:03 compute-0 systemd[1]: libpod-conmon-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope: Deactivated successfully.
Dec 03 02:38:03 compute-0 sudo[485489]: pam_unix(sudo:session): session closed for user root
Dec 03 02:38:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:38:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:38:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:38:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
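[annotation] The two mon_commands above are the cephadm mgr module persisting the device inventory it just gathered into the monitor config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). The cached blob can be read back with the standard `ceph config-key get`; a sketch (the assumption that the value is JSON matches how cephadm stores inventories, but is not shown in the log):

    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          check=True, capture_output=True, text=True).stdout
    print(json.loads(blob))   # cephadm keeps the host inventory as JSON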
Dec 03 02:38:04 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 99019756-4799-4978-9c19-389a75ca91a2 does not exist
Dec 03 02:38:04 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9f9a1426-0e16-4a14-a05e-a86dd7092da5 does not exist
Dec 03 02:38:04 compute-0 sudo[485657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:38:04 compute-0 sudo[485657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:38:04 compute-0 sudo[485657]: pam_unix(sudo:session): session closed for user root
Dec 03 02:38:04 compute-0 sudo[485682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:38:04 compute-0 sudo[485682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:38:04 compute-0 sudo[485682]: pam_unix(sudo:session): session closed for user root
Dec 03 02:38:04 compute-0 nova_compute[351485]: 2025-12-03 02:38:04.465 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:05 compute-0 ceph-mon[192821]: pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:38:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:38:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:07 compute-0 ceph-mon[192821]: pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:07 compute-0 nova_compute[351485]: 2025-12-03 02:38:07.060 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:09 compute-0 ceph-mon[192821]: pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:09 compute-0 nova_compute[351485]: 2025-12-03 02:38:09.467 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:11 compute-0 ceph-mon[192821]: pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:12 compute-0 nova_compute[351485]: 2025-12-03 02:38:12.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:12 compute-0 podman[485709]: 2025-12-03 02:38:12.879015759 +0000 UTC m=+0.109094220 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:38:12 compute-0 podman[485708]: 2025-12-03 02:38:12.897158341 +0000 UTC m=+0.128650102 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec 03 02:38:12 compute-0 podman[485707]: 2025-12-03 02:38:12.929858283 +0000 UTC m=+0.164963306 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
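[annotation] The health_status records above (and the similar burst a few seconds later) are podman's periodic healthchecks for the edpm_ansible-managed containers; each config_data blob shows the 'healthcheck' test command that runs inside the container. The same check can be triggered on demand with the standard podman subcommand (container names copied from the log):

    import subprocess

    # Re-run the configured healthcheck and report the verdict per container.
    for name in ("podman_exporter", "ceilometer_agent_compute", "ovn_metadata_agent"):
        r = subprocess.run(["sudo", "podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")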
Dec 03 02:38:13 compute-0 ceph-mon[192821]: pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:14 compute-0 nova_compute[351485]: 2025-12-03 02:38:14.470 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:15 compute-0 ceph-mon[192821]: pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:17 compute-0 nova_compute[351485]: 2025-12-03 02:38:17.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:17 compute-0 ceph-mon[192821]: pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:18 compute-0 nova_compute[351485]: 2025-12-03 02:38:18.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:19 compute-0 ceph-mon[192821]: pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:19 compute-0 nova_compute[351485]: 2025-12-03 02:38:19.475 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:21 compute-0 ceph-mon[192821]: pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:21 compute-0 podman[485764]: 2025-12-03 02:38:21.886218678 +0000 UTC m=+0.134073935 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:38:22 compute-0 nova_compute[351485]: 2025-12-03 02:38:22.070 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:23 compute-0 ceph-mon[192821]: pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:24 compute-0 nova_compute[351485]: 2025-12-03 02:38:24.476 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:25 compute-0 ceph-mon[192821]: pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:25 compute-0 podman[485786]: 2025-12-03 02:38:25.856424335 +0000 UTC m=+0.095324391 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 02:38:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:25 compute-0 podman[485783]: 2025-12-03 02:38:25.866956092 +0000 UTC m=+0.111288692 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 02:38:25 compute-0 podman[485784]: 2025-12-03 02:38:25.889337854 +0000 UTC m=+0.127527460 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:38:25 compute-0 podman[485785]: 2025-12-03 02:38:25.90018686 +0000 UTC m=+0.133906350 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, architecture=x86_64, release=1214.1726694543, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=)
Dec 03 02:38:25 compute-0 podman[485782]: 2025-12-03 02:38:25.920588045 +0000 UTC m=+0.168594238 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 03 02:38:26 compute-0 nova_compute[351485]: 2025-12-03 02:38:26.591 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:27 compute-0 nova_compute[351485]: 2025-12-03 02:38:27.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:27 compute-0 ceph-mon[192821]: pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:28 compute-0 ceph-mon[192821]: pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:38:28
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'volumes', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Dec 03 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
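[annotation] The balancer lines above show one pass of the upmap optimizer: it walks the listed pools under the 5% max-misplaced budget and prepares 0 of at most 10 changes, i.e. the 321 active+clean PGs are already balanced. Its state can be inspected from the CLI; a sketch using the standard mgr balancer queries:

    import subprocess

    for cmd in (["ceph", "balancer", "status"],
                ["ceph", "balancer", "eval"]):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)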
Dec 03 02:38:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
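[annotation] The rbd_support module is reloading its mirror-snapshot and trash-purge schedules for the vms, volumes, backups, and images pools; an empty start_after= simply means it reloads from the beginning. Any configured schedules can be listed recursively with the rbd CLI; a sketch:

    import subprocess

    for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "-R"],
                ["rbd", "trash", "purge", "schedule", "ls", "-R"]):
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        print(" ".join(cmd), "->", out.strip() or "no schedules")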
Dec 03 02:38:29 compute-0 nova_compute[351485]: 2025-12-03 02:38:29.478 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:29 compute-0 podman[158098]: time="2025-12-03T02:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8214 "" "Go-http-client/1.1"
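[annotation] The podman[158098] lines are the podman API service answering the exporter over /run/podman/podman.sock: one libpod containers list and one stats call. The same endpoint can be queried directly over the unix socket (socket path from the exporter's config_data above; the request path is copied from the logged GET):

    import subprocess

    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True).stdout
    print(out[:200], "...")   # JSON array describing every container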
Dec 03 02:38:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:31 compute-0 ceph-mon[192821]: pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:38:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:32 compute-0 nova_compute[351485]: 2025-12-03 02:38:32.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:33 compute-0 ceph-mon[192821]: pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:34 compute-0 nova_compute[351485]: 2025-12-03 02:38:34.482 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:35 compute-0 ceph-mon[192821]: pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:37 compute-0 nova_compute[351485]: 2025-12-03 02:38:37.080 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:37 compute-0 ceph-mon[192821]: pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.614 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:38:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:39 compute-0 ceph-mon[192821]: pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.486 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:38:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:38:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367587672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.135 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:38:40 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/367587672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.712 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.713 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3970MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.714 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.714 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.957 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.958 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.983 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:38:41 compute-0 ceph-mon[192821]: pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:38:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448743769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.488 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.499 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.632 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.634 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.635 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:38:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:42 compute-0 nova_compute[351485]: 2025-12-03 02:38:42.083 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:42 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1448743769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:38:42 compute-0 nova_compute[351485]: 2025-12-03 02:38:42.637 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:43 compute-0 ceph-mon[192821]: pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:43 compute-0 nova_compute[351485]: 2025-12-03 02:38:43.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:43 compute-0 podman[485929]: 2025-12-03 02:38:43.890489061 +0000 UTC m=+0.124137664 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:38:43 compute-0 podman[485928]: 2025-12-03 02:38:43.903011284 +0000 UTC m=+0.144377535 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:38:43 compute-0 podman[485927]: 2025-12-03 02:38:43.907296115 +0000 UTC m=+0.154969044 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:38:44 compute-0 ceph-mon[192821]: pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.488 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.600 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:46 compute-0 ceph-mon[192821]: pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:38:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301271762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:38:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:38:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301271762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:38:47 compute-0 nova_compute[351485]: 2025-12-03 02:38:47.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/301271762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:38:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/301271762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:38:48 compute-0 nova_compute[351485]: 2025-12-03 02:38:48.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:48 compute-0 nova_compute[351485]: 2025-12-03 02:38:48.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:38:48 compute-0 nova_compute[351485]: 2025-12-03 02:38:48.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:38:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:48 compute-0 ceph-mon[192821]: pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:49 compute-0 nova_compute[351485]: 2025-12-03 02:38:49.492 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:50 compute-0 ceph-mon[192821]: pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:52 compute-0 nova_compute[351485]: 2025-12-03 02:38:52.090 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:52 compute-0 podman[485984]: 2025-12-03 02:38:52.902231288 +0000 UTC m=+0.137778059 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:38:53 compute-0 ceph-mon[192821]: pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:54 compute-0 nova_compute[351485]: 2025-12-03 02:38:54.494 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:55 compute-0 ceph-mon[192821]: pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:56 compute-0 podman[486008]: 2025-12-03 02:38:56.876158752 +0000 UTC m=+0.114325728 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:38:56 compute-0 podman[486007]: 2025-12-03 02:38:56.878675113 +0000 UTC m=+0.120390299 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, distribution-scope=public)
Dec 03 02:38:56 compute-0 podman[486009]: 2025-12-03 02:38:56.887264855 +0000 UTC m=+0.113120833 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, release-0.7.12=, container_name=kepler)
Dec 03 02:38:56 compute-0 podman[486015]: 2025-12-03 02:38:56.89772763 +0000 UTC m=+0.122760025 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 03 02:38:56 compute-0 podman[486006]: 2025-12-03 02:38:56.906500178 +0000 UTC m=+0.156984081 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:38:57 compute-0 ceph-mon[192821]: pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:57 compute-0 nova_compute[351485]: 2025-12-03 02:38:57.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:38:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:38:59 compute-0 ceph-mon[192821]: pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:38:59 compute-0 nova_compute[351485]: 2025-12-03 02:38:59.497 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:38:59.676 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:38:59.677 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:38:59.677 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:38:59 compute-0 podman[158098]: time="2025-12-03T02:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8212 "" "Go-http-client/1.1"
Dec 03 02:38:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:01 compute-0 ceph-mon[192821]: pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:39:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:39:01 compute-0 openstack_network_exporter[368278]: 
Dec 03 02:39:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:02 compute-0 nova_compute[351485]: 2025-12-03 02:39:02.097 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:03 compute-0 ceph-mon[192821]: pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.772200) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543772241, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1324, "num_deletes": 255, "total_data_size": 2064886, "memory_usage": 2095600, "flush_reason": "Manual Compaction"}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543790097, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 2023243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51019, "largest_seqno": 52342, "table_properties": {"data_size": 2016983, "index_size": 3527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12805, "raw_average_key_size": 19, "raw_value_size": 2004467, "raw_average_value_size": 3041, "num_data_blocks": 159, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729406, "oldest_key_time": 1764729406, "file_creation_time": 1764729543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 18003 microseconds, and 9442 cpu microseconds.
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.790199) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 2023243 bytes OK
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.790224) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.793179) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.793201) EVENT_LOG_v1 {"time_micros": 1764729543793194, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.793225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2058963, prev total WAL file size 2058963, number of live WAL files 2.
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.795264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303038' seq:72057594037927935, type:22 .. '6C6F676D0032323539' seq:0, type:0; will stop at (end)
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1975KB)], [122(7543KB)]
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543795387, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 9748062, "oldest_snapshot_seqno": -1}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6658 keys, 9637348 bytes, temperature: kUnknown
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543867354, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9637348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9594237, "index_size": 25335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174228, "raw_average_key_size": 26, "raw_value_size": 9475258, "raw_average_value_size": 1423, "num_data_blocks": 1010, "num_entries": 6658, "num_filter_entries": 6658, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.867742) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9637348 bytes
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.870997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.3 rd, 133.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 7180, records dropped: 522 output_compression: NoCompression
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.871085) EVENT_LOG_v1 {"time_micros": 1764729543871053, "job": 74, "event": "compaction_finished", "compaction_time_micros": 72047, "compaction_time_cpu_micros": 44381, "output_level": 6, "num_output_files": 1, "total_output_size": 9637348, "num_input_records": 7180, "num_output_records": 6658, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543872364, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543875790, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.794601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:39:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:04 compute-0 sudo[486104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:04 compute-0 sudo[486104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:04 compute-0 sudo[486104]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:04 compute-0 nova_compute[351485]: 2025-12-03 02:39:04.501 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:04 compute-0 sudo[486129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:39:04 compute-0 sudo[486129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:04 compute-0 sudo[486129]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:04 compute-0 sudo[486154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:04 compute-0 sudo[486154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:04 compute-0 sudo[486154]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:04 compute-0 ceph-mon[192821]: pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:04 compute-0 sudo[486179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:39:04 compute-0 sudo[486179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:05 compute-0 sudo[486179]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:39:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1068e014-027a-46b3-97a9-003b23d09828 does not exist
Dec 03 02:39:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d5cb69f4-3afb-4743-8b28-4e09d521121c does not exist
Dec 03 02:39:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 71c6b075-e242-4f68-9d2a-bb71924a90cc does not exist
Dec 03 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:39:05 compute-0 sudo[486233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:05 compute-0 sudo[486233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:05 compute-0 sudo[486233]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:39:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:05 compute-0 sudo[486258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:39:05 compute-0 sudo[486258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:05 compute-0 sudo[486258]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:06 compute-0 sudo[486283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:06 compute-0 sudo[486283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:06 compute-0 sudo[486283]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:06 compute-0 sudo[486308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:39:06 compute-0 sudo[486308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:06 compute-0 ceph-mon[192821]: pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:06 compute-0 podman[486373]: 2025-12-03 02:39:06.833697949 +0000 UTC m=+0.099498269 container create 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:39:06 compute-0 podman[486373]: 2025-12-03 02:39:06.791037035 +0000 UTC m=+0.056837415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:39:06 compute-0 systemd[1]: Started libpod-conmon-2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b.scope.
Dec 03 02:39:06 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:39:06 compute-0 podman[486373]: 2025-12-03 02:39:06.994341682 +0000 UTC m=+0.260142082 container init 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 03 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.01338896 +0000 UTC m=+0.279189290 container start 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 03 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.022578399 +0000 UTC m=+0.288378789 container attach 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:39:07 compute-0 vigorous_darwin[486389]: 167 167
Dec 03 02:39:07 compute-0 systemd[1]: libpod-2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b.scope: Deactivated successfully.
Dec 03 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.028708122 +0000 UTC m=+0.294508452 container died 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 03 02:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab1ae4b645bf730e83cf8cb8e716eec75ab2f0cdcacce044cb234e309038c88-merged.mount: Deactivated successfully.
Dec 03 02:39:07 compute-0 nova_compute[351485]: 2025-12-03 02:39:07.100 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.112371093 +0000 UTC m=+0.378171413 container remove 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:39:07 compute-0 systemd[1]: libpod-conmon-2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b.scope: Deactivated successfully.
Dec 03 02:39:07 compute-0 sshd-session[486333]: Invalid user rancher from 154.113.10.113 port 38006
Dec 03 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.401749879 +0000 UTC m=+0.098375257 container create 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 03 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.374171201 +0000 UTC m=+0.070796569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:39:07 compute-0 sshd-session[486333]: Received disconnect from 154.113.10.113 port 38006:11: Bye Bye [preauth]
Dec 03 02:39:07 compute-0 sshd-session[486333]: Disconnected from invalid user rancher 154.113.10.113 port 38006 [preauth]
Dec 03 02:39:07 compute-0 systemd[1]: Started libpod-conmon-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope.
Dec 03 02:39:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.610339626 +0000 UTC m=+0.306965034 container init 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.642060771 +0000 UTC m=+0.338686139 container start 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.64876488 +0000 UTC m=+0.345390318 container attach 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:39:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:08 compute-0 brave_northcutt[486429]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:39:08 compute-0 brave_northcutt[486429]: --> relative data size: 1.0
Dec 03 02:39:08 compute-0 brave_northcutt[486429]: --> All data devices are unavailable
Dec 03 02:39:08 compute-0 ceph-mon[192821]: pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:08 compute-0 systemd[1]: libpod-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope: Deactivated successfully.
Dec 03 02:39:08 compute-0 systemd[1]: libpod-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope: Consumed 1.257s CPU time.
Dec 03 02:39:08 compute-0 podman[486413]: 2025-12-03 02:39:08.959881849 +0000 UTC m=+1.656507247 container died 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec 03 02:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0-merged.mount: Deactivated successfully.
Dec 03 02:39:09 compute-0 podman[486413]: 2025-12-03 02:39:09.0524148 +0000 UTC m=+1.749040148 container remove 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:39:09 compute-0 systemd[1]: libpod-conmon-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope: Deactivated successfully.
Dec 03 02:39:09 compute-0 sudo[486308]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:09 compute-0 sudo[486470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:09 compute-0 sudo[486470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:09 compute-0 sudo[486470]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:09 compute-0 sudo[486495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:39:09 compute-0 sudo[486495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:09 compute-0 sudo[486495]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:09 compute-0 nova_compute[351485]: 2025-12-03 02:39:09.504 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:09 compute-0 sudo[486520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:09 compute-0 sudo[486520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:09 compute-0 sudo[486520]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:09 compute-0 sudo[486545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:39:09 compute-0 sudo[486545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.236220567 +0000 UTC m=+0.083244300 container create b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.200000755 +0000 UTC m=+0.047024558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:39:10 compute-0 systemd[1]: Started libpod-conmon-b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95.scope.
Dec 03 02:39:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.365057543 +0000 UTC m=+0.212081356 container init b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 03 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.389836962 +0000 UTC m=+0.236860725 container start b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.397332794 +0000 UTC m=+0.244356577 container attach b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:39:10 compute-0 gallant_zhukovsky[486623]: 167 167
Dec 03 02:39:10 compute-0 systemd[1]: libpod-b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95.scope: Deactivated successfully.
Dec 03 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.405019861 +0000 UTC m=+0.252043624 container died b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9155b2732b910583c2b4f24c741b1eff1edd973582cc44d12eeff212bc9b90-merged.mount: Deactivated successfully.
Dec 03 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.49394138 +0000 UTC m=+0.340965133 container remove b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:39:10 compute-0 systemd[1]: libpod-conmon-b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95.scope: Deactivated successfully.
Dec 03 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.748037681 +0000 UTC m=+0.072726004 container create 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:39:10 compute-0 systemd[1]: Started libpod-conmon-0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672.scope.
Dec 03 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.714730651 +0000 UTC m=+0.039419024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:39:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.901966444 +0000 UTC m=+0.226654817 container init 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.919431367 +0000 UTC m=+0.244119700 container start 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.924604853 +0000 UTC m=+0.249293146 container attach 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:39:10 compute-0 ceph-mon[192821]: pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]: {
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:     "0": [
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:         {
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "devices": [
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "/dev/loop3"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             ],
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_name": "ceph_lv0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_size": "21470642176",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "name": "ceph_lv0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "tags": {
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cluster_name": "ceph",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.crush_device_class": "",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.encrypted": "0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osd_id": "0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.type": "block",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.vdo": "0"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             },
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "type": "block",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "vg_name": "ceph_vg0"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:         }
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:     ],
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:     "1": [
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:         {
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "devices": [
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "/dev/loop4"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             ],
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_name": "ceph_lv1",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_size": "21470642176",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "name": "ceph_lv1",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "tags": {
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cluster_name": "ceph",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.crush_device_class": "",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.encrypted": "0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osd_id": "1",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.type": "block",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.vdo": "0"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             },
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "type": "block",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "vg_name": "ceph_vg1"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:         }
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:     ],
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:     "2": [
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:         {
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "devices": [
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "/dev/loop5"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             ],
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_name": "ceph_lv2",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_size": "21470642176",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "name": "ceph_lv2",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "tags": {
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.cluster_name": "ceph",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.crush_device_class": "",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.encrypted": "0",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osd_id": "2",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.type": "block",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:                 "ceph.vdo": "0"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             },
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "type": "block",
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:             "vg_name": "ceph_vg2"
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:         }
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]:     ]
Dec 03 02:39:11 compute-0 heuristic_einstein[486664]: }
Dec 03 02:39:11 compute-0 systemd[1]: libpod-0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672.scope: Deactivated successfully.
Dec 03 02:39:11 compute-0 podman[486647]: 2025-12-03 02:39:11.786414174 +0000 UTC m=+1.111102577 container died 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 02:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6-merged.mount: Deactivated successfully.
Dec 03 02:39:11 compute-0 podman[486647]: 2025-12-03 02:39:11.871720011 +0000 UTC m=+1.196408304 container remove 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 02:39:11 compute-0 systemd[1]: libpod-conmon-0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672.scope: Deactivated successfully.
Dec 03 02:39:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:11 compute-0 sudo[486545]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:12 compute-0 sudo[486686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:12 compute-0 sudo[486686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:12 compute-0 sudo[486686]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:12 compute-0 nova_compute[351485]: 2025-12-03 02:39:12.110 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:12 compute-0 sudo[486711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:39:12 compute-0 sudo[486711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:12 compute-0 sudo[486711]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:12 compute-0 sudo[486736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:12 compute-0 sudo[486736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:12 compute-0 sudo[486736]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:12 compute-0 sudo[486761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:39:12 compute-0 sudo[486761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:12 compute-0 ceph-mon[192821]: pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.12084772 +0000 UTC m=+0.084612429 container create 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.087368425 +0000 UTC m=+0.051133184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:39:13 compute-0 systemd[1]: Started libpod-conmon-651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68.scope.
Dec 03 02:39:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.278957282 +0000 UTC m=+0.242722031 container init 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.295104628 +0000 UTC m=+0.258869337 container start 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.302594719 +0000 UTC m=+0.266359468 container attach 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:39:13 compute-0 goofy_turing[486841]: 167 167
Dec 03 02:39:13 compute-0 systemd[1]: libpod-651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68.scope: Deactivated successfully.
Dec 03 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.309178105 +0000 UTC m=+0.272942804 container died 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 03 02:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-04b188f955d95adbb15f645a77c0365d0105590f531936f94384e84abf746ef8-merged.mount: Deactivated successfully.
Dec 03 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.388648197 +0000 UTC m=+0.352412876 container remove 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:39:13 compute-0 systemd[1]: libpod-conmon-651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68.scope: Deactivated successfully.
Dec 03 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.67400462 +0000 UTC m=+0.106734703 container create 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.636976765 +0000 UTC m=+0.069706908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:39:13 compute-0 systemd[1]: Started libpod-conmon-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope.
Dec 03 02:39:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:13 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
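Editor's note: the four kernel lines above are the xfs "year 2038" warnings; these overlay mounts carry 32-bit inode timestamps, so the latest representable time is 0x7fffffff seconds after the Unix epoch. A minimal standard-library check of that limit (the hex value is taken directly from the log):

    # Sanity-check the 0x7fffffff limit reported by the kernel above: it is
    # the largest signed 32-bit epoch second, i.e. the "year 2038" boundary.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds since 1970-01-01T00:00:00Z
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00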
Dec 03 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.849793801 +0000 UTC m=+0.282523944 container init 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.881732102 +0000 UTC m=+0.314462195 container start 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.888503713 +0000 UTC m=+0.321233806 container attach 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:39:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:14 compute-0 nova_compute[351485]: 2025-12-03 02:39:14.508 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:14 compute-0 podman[486893]: 2025-12-03 02:39:14.853192677 +0000 UTC m=+0.103748219 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:39:14 compute-0 podman[486898]: 2025-12-03 02:39:14.882496454 +0000 UTC m=+0.125051550 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:39:14 compute-0 podman[486897]: 2025-12-03 02:39:14.889111051 +0000 UTC m=+0.139268252 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
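Editor's note: the three health_status events above come from podman's healthcheck timers for ovn_metadata_agent, podman_exporter, and ceilometer_agent_compute. A minimal sketch of reading that state back, assuming the container names from the log and a standard podman inspect template:

    # Sketch: query the health state behind the health_status events above.
    import json
    import subprocess

    for name in ("ovn_metadata_agent", "podman_exporter", "ceilometer_agent_compute"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        health = json.loads(out)
        print(name, health.get("Status"), "failing streak:", health.get("FailingStreak"))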
Dec 03 02:39:14 compute-0 musing_mestorf[486880]: {
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "osd_id": 2,
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "type": "bluestore"
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:     },
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "osd_id": 1,
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "type": "bluestore"
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:     },
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "osd_id": 0,
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:         "type": "bluestore"
Dec 03 02:39:14 compute-0 musing_mestorf[486880]:     }
Dec 03 02:39:14 compute-0 musing_mestorf[486880]: }
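Editor's note: the JSON block above, printed by the short-lived ceph container musing_mestorf, maps each OSD uuid to its cluster fsid, logical-volume device, id, and objectstore type; it looks like the output of a ceph-volume listing run by cephadm during its device scan. A small parsing sketch, with one entry reproduced from the log and the other two elided:

    # Sketch: turn the container's JSON OSD listing into an osd_id -> device map.
    import json

    osd_json = """
    {
        "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
            "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
            "type": "bluestore"
        }
    }
    """  # remaining entries omitted here; see the log lines above
    for osd_uuid, meta in sorted(json.loads(osd_json).items(),
                                 key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")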
Dec 03 02:39:15 compute-0 ceph-mon[192821]: pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:15 compute-0 systemd[1]: libpod-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope: Deactivated successfully.
Dec 03 02:39:15 compute-0 systemd[1]: libpod-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope: Consumed 1.122s CPU time.
Dec 03 02:39:15 compute-0 podman[486864]: 2025-12-03 02:39:15.024323266 +0000 UTC m=+1.457053349 container died 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847-merged.mount: Deactivated successfully.
Dec 03 02:39:15 compute-0 podman[486864]: 2025-12-03 02:39:15.118849114 +0000 UTC m=+1.551579177 container remove 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 02:39:15 compute-0 systemd[1]: libpod-conmon-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope: Deactivated successfully.
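Editor's note: the lines above show the full one-shot lifecycle these cephadm helper containers go through: create, init, start, attach, died, remove, with systemd tearing down the matching libpod and conmon scopes. A hedged sketch of watching the same transitions live (the image filter value is assumed from the log; podman events emits newline-delimited JSON with --format json):

    # Sketch: stream the create/init/start/attach/died/remove transitions
    # shown above for the ceph image's short-lived helper containers.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:  # one JSON object per event
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))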
Dec 03 02:39:15 compute-0 sudo[486761]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:39:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:39:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:39:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:39:15 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 846c9050-a407-4804-aadb-9840dc14425f does not exist
Dec 03 02:39:15 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 69b301ab-2fee-463d-970f-bd5ebc663ab9 does not exist
Dec 03 02:39:15 compute-0 sudo[486980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:39:15 compute-0 sudo[486980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:15 compute-0 sudo[486980]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:15 compute-0 sudo[487005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:39:15 compute-0 sudo[487005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:39:15 compute-0 sudo[487005]: pam_unix(sudo:session): session closed for user root
Dec 03 02:39:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:39:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
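Editor's note: the mon audit lines above record the cephadm mgr persisting its per-host inventory with config-key set, under keys such as mgr/cephadm/host.compute-0.devices.0. A sketch of reading one of those blobs back, assuming a working ceph CLI with an admin keyring on this host and that the stored value is JSON, as cephadm's inventory blobs typically are:

    # Sketch: read back the inventory blob whose write is audited above.
    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"  # key name from the mon log
    raw = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(raw), indent=2)[:400])  # preview only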
Dec 03 02:39:17 compute-0 nova_compute[351485]: 2025-12-03 02:39:17.113 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:17 compute-0 ceph-mon[192821]: pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:18 compute-0 ceph-mon[192821]: pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:19 compute-0 nova_compute[351485]: 2025-12-03 02:39:19.511 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.518 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.519 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.533 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
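Note: the network.*.delta and network.*.rate pollsters above are skipped because local_instances discovery found no VMs on this node (consistent with nova's "Didn't find any instances" message later in this cycle). When instances exist, a delta/rate meter is derived from two successive cumulative counter samples; a minimal sketch of that arithmetic (hypothetical helper, not ceilometer's actual code):

    # Hypothetical illustration: derive *.delta and *.rate from two
    # cumulative readings, each a (timestamp_seconds, byte_counter) pair.
    def delta_and_rate(prev, curr):
        t0, v0 = prev
        t1, v1 = curr
        delta = v1 - v0                  # e.g. network.outgoing.bytes.delta
        rate = delta / (t1 - t0)         # e.g. network.outgoing.bytes.rate (B/s)
        return delta, rate

    print(delta_and_rate((0.0, 1000.0), (300.0, 4000.0)))  # (3000.0, 10.0)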
Dec 03 02:39:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:20 compute-0 ceph-mon[192821]: pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:22 compute-0 nova_compute[351485]: 2025-12-03 02:39:22.116 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:22 compute-0 nova_compute[351485]: 2025-12-03 02:39:22.838 351492 DEBUG oslo_concurrency.processutils [None req-445974f2-4675-4ece-9116-f5f717039c73 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:39:22 compute-0 nova_compute[351485]: 2025-12-03 02:39:22.892 351492 DEBUG oslo_concurrency.processutils [None req-445974f2-4675-4ece-9116-f5f717039c73 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:39:22 compute-0 ceph-mon[192821]: pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:23 compute-0 podman[487033]: 2025-12-03 02:39:23.900463479 +0000 UTC m=+0.141997658 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 03 02:39:24 compute-0 nova_compute[351485]: 2025-12-03 02:39:24.514 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:24 compute-0 ceph-mon[192821]: pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:27 compute-0 ceph-mon[192821]: pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:27 compute-0 nova_compute[351485]: 2025-12-03 02:39:27.119 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:27 compute-0 podman[487056]: 2025-12-03 02:39:27.892944695 +0000 UTC m=+0.124705400 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 03 02:39:27 compute-0 podman[487057]: 2025-12-03 02:39:27.892281476 +0000 UTC m=+0.121064027 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, io.openshift.expose-services=, config_id=edpm, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 02:39:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:27 compute-0 podman[487068]: 2025-12-03 02:39:27.904131071 +0000 UTC m=+0.125904094 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 03 02:39:27 compute-0 podman[487055]: 2025-12-03 02:39:27.921142671 +0000 UTC m=+0.168335222 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:39:27 compute-0 podman[487054]: 2025-12-03 02:39:27.957713603 +0000 UTC m=+0.206903280 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
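Note: the podman health_status=healthy events above come from the healthcheck defined in each container's config_data ('test': '/openstack/healthcheck ...'), run periodically by podman. The same check can be triggered by hand with the real `podman healthcheck run` subcommand; a minimal sketch (container name taken from the events above):

    # Sketch: invoke the same health check podman's timer runs.
    # Exit status 0 means the container is healthy.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else f"unhealthy: {result.stdout}")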
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:39:28 compute-0 nova_compute[351485]: 2025-12-03 02:39:28.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:39:28
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'backups', '.mgr', 'images', 'default.rgw.log']
Dec 03 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
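Note: "prepared 0/10 changes" means do_upmap produced none of the at most 10 upmap optimizations permitted this round; with all 321 PGs active+clean the cluster is already balanced. The same state can be read back with the real `ceph balancer status` command; a minimal sketch:

    # Sketch: query the mgr balancer state the log reports, as JSON.
    import json, subprocess

    out = subprocess.check_output(["ceph", "balancer", "status", "-f", "json"])
    status = json.loads(out)
    print(status["active"], status["mode"])   # e.g. True, "upmap"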
Dec 03 02:39:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:29 compute-0 ceph-mon[192821]: pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:39:29 compute-0 nova_compute[351485]: 2025-12-03 02:39:29.517 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:29 compute-0 podman[158098]: time="2025-12-03T02:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8213 "" "Go-http-client/1.1"
Dec 03 02:39:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:30 compute-0 nova_compute[351485]: 2025-12-03 02:39:30.986 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:30.987 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 03 02:39:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:30.989 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 03 02:39:31 compute-0 ceph-mon[192821]: pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
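Note: these exporter errors are expected on a compute node: ovn-northd and the OVN ovsdb-server run on controller nodes, so their appctl control sockets do not exist here, and the dpif-netdev/* calls only apply to a userspace (DPDK) datapath. The "no control socket files found" check boils down to looking for <daemon>.<pid>.ctl sockets; a sketch, assuming the conventional Open vSwitch run directory (adjust for your deployment):

    # Sketch of the precondition behind "no control socket files found":
    # ovs-appctl targets a daemon via its <name>.<pid>.ctl unix socket.
    import glob

    def control_sockets(daemon, rundir="/var/run/openvswitch"):
        return glob.glob(f"{rundir}/{daemon}.*.ctl")

    print(control_sockets("ovn-northd") or "no control socket files found")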
Dec 03 02:39:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:32 compute-0 nova_compute[351485]: 2025-12-03 02:39:32.122 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:33 compute-0 ceph-mon[192821]: pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:34 compute-0 nova_compute[351485]: 2025-12-03 02:39:34.520 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:35 compute-0 ceph-mon[192821]: pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:36 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:36.992 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
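Note: this transaction is the chassis update the agent delayed by 6 seconds at 02:39:30: it acknowledges nb_cfg=22 by writing neutron:ovn-metadata-sb-cfg into Chassis_Private's external_ids. With ovsdbapp the equivalent call is db_set; a minimal sketch, assuming `sb_idl` is an already-connected ovsdbapp API object for the OVN southbound DB:

    # Sketch, assuming sb_idl is a connected ovsdbapp OVSDB API:
    # emit the same DbSetCommand the log shows.
    chassis_private = "eda9fd7d-f2b1-4121-b9ac-fc31f8426272"
    sb_idl.db_set(
        "Chassis_Private", chassis_private,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "22"}),
    ).execute(check_error=True)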
Dec 03 02:39:37 compute-0 nova_compute[351485]: 2025-12-03 02:39:37.124 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:37 compute-0 ceph-mon[192821]: pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:39 compute-0 ceph-mon[192821]: pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
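Note: the pg_autoscaler figures above are internally consistent: each pool's raw pg target is its capacity ratio times its bias times a root-level PG budget, which the logged ratios show to be 300 for this root (its decomposition, e.g. OSD count times mon_target_pg_per_osd=100, is an assumption about this cluster); the result is then quantized and clamped, which is why tiny pools stay at 32. A quick recheck against the logged values:

    # Reproduce the pg_autoscaler arithmetic visible in the log.
    ROOT_PG_BUDGET = 300  # inferred from target / (ratio * bias) in the log

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * ROOT_PG_BUDGET

    print(pg_target(0.0009191400908380543, 1.0))  # ~0.27574 ('images', as logged)
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061 ('cephfs.cephfs.meta')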
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.525 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.632 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.633 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.682 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.683 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.684 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.685 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.686 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:39:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:39:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600621679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.214 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
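Note: the Running cmd / CMD returned pair above is the resource tracker shelling out to `ceph df` through oslo.concurrency's processutils to size the RBD-backed disk pool. A minimal equivalent, assuming the same client keyring and /etc/ceph/ceph.conf are present as in the log:

    # Sketch of the call the resource tracker logs above, using the real
    # oslo_concurrency.processutils.execute() helper.
    import json
    from oslo_concurrency import processutils

    stdout, _stderr = processutils.execute(
        "ceph", "df", "--format=json", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(stdout)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB available")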
Dec 03 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.846 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.848 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3942MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.935 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.936 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.015 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:39:41 compute-0 ceph-mon[192821]: pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:41 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1600621679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:39:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:39:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2875702493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.492 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.499 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.517 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.518 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.518 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
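Note: the "Acquiring lock ... / Lock ... acquired / Lock ... released" trio bracketing _update_available_resource is emitted by oslo.concurrency's synchronized decorator (the `inner` wrapper named in each message). A minimal usage sketch of the same pattern:

    # The acquire/release messages above come from lockutils.synchronized.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # critical section: one thread at a time touches tracker state
        pass

    update_available_resource()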
Dec 03 02:39:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:42 compute-0 nova_compute[351485]: 2025-12-03 02:39:42.128 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:42 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2875702493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 3012 syncs, 3.57 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 423 writes, 1287 keys, 423 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
                                            Interval WAL: 423 writes, 202 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
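Note: the per-interval figures in the rocksdb DB Stats dump are self-consistent: writes per sync is writes divided by syncs, and the throughput is ingest over the 600 s interval. A quick recheck against the numbers above:

    # Recheck the rocksdb "DB Stats" arithmetic from the dump above.
    interval_writes, interval_syncs = 423, 202
    print(round(interval_writes / interval_syncs, 2))   # 2.09 writes per sync
    ingest_mb, interval_s = 0.35, 600.0
    print(round(ingest_mb / interval_s, 2))             # 0.00 MB/s, as logged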
Dec 03 02:39:43 compute-0 ceph-mon[192821]: pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:44 compute-0 nova_compute[351485]: 2025-12-03 02:39:44.463 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:44 compute-0 nova_compute[351485]: 2025-12-03 02:39:44.527 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:44 compute-0 nova_compute[351485]: 2025-12-03 02:39:44.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:45 compute-0 ceph-mon[192821]: pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:45 compute-0 nova_compute[351485]: 2025-12-03 02:39:45.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:45 compute-0 podman[487200]: 2025-12-03 02:39:45.856140258 +0000 UTC m=+0.103191663 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:39:45 compute-0 podman[487198]: 2025-12-03 02:39:45.860951833 +0000 UTC m=+0.110586831 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:39:45 compute-0 podman[487199]: 2025-12-03 02:39:45.882364338 +0000 UTC m=+0.127561071 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec 03 02:39:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:46 compute-0 nova_compute[351485]: 2025-12-03 02:39:46.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:39:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1485462913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:39:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:39:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1485462913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:39:47 compute-0 nova_compute[351485]: 2025-12-03 02:39:47.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:47 compute-0 ceph-mon[192821]: pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1485462913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:39:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1485462913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:39:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 12K writes, 3504 syncs, 3.50 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 495 writes, 1261 keys, 495 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s
                                            Interval WAL: 495 writes, 232 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:39:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:49 compute-0 ceph-mon[192821]: pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:49 compute-0 nova_compute[351485]: 2025-12-03 02:39:49.529 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:49 compute-0 nova_compute[351485]: 2025-12-03 02:39:49.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:49 compute-0 nova_compute[351485]: 2025-12-03 02:39:49.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
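[annotation] The skip above is driven by a single option: _reclaim_queued_deletes only purges soft-deleted instances when reclaim_instance_interval is positive. An illustrative nova.conf fragment that produces exactly this log line (0 is the upstream default):

    [DEFAULT]
    # <= 0 disables soft-delete reclaim, so the periodic task logs
    # "CONF.reclaim_instance_interval <= 0, skipping..."
    reclaim_instance_interval = 0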
Dec 03 02:39:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:50 compute-0 nova_compute[351485]: 2025-12-03 02:39:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:39:51 compute-0 ceph-mon[192821]: pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:52 compute-0 nova_compute[351485]: 2025-12-03 02:39:52.137 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:52 compute-0 ceph-mon[192821]: pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:54 compute-0 nova_compute[351485]: 2025-12-03 02:39:54.532 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:54 compute-0 podman[487258]: 2025-12-03 02:39:54.886972082 +0000 UTC m=+0.136787111 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 03 02:39:54 compute-0 ceph-mon[192821]: pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 9641 writes, 36K keys, 9641 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9641 writes, 2604 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 416 writes, 951 keys, 416 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                            Interval WAL: 416 writes, 194 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:39:57 compute-0 ceph-mon[192821]: pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec 03 02:39:57 compute-0 nova_compute[351485]: 2025-12-03 02:39:57.140 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:39:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:39:58 compute-0 podman[487278]: 2025-12-03 02:39:58.879778037 +0000 UTC m=+0.115302605 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Dec 03 02:39:58 compute-0 podman[487287]: 2025-12-03 02:39:58.882144394 +0000 UTC m=+0.094398415 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:39:58 compute-0 podman[487279]: 2025-12-03 02:39:58.900711588 +0000 UTC m=+0.131890683 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:39:58 compute-0 podman[487285]: 2025-12-03 02:39:58.909720812 +0000 UTC m=+0.123018373 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 03 02:39:58 compute-0 podman[487277]: 2025-12-03 02:39:58.92948728 +0000 UTC m=+0.183341665 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
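[annotation] Each health_status record in this burst is podman executing the healthcheck command embedded in the container's config_data (e.g. '/openstack/healthcheck ipmi'). The same check can be replayed by hand; a sketch using the standard podman CLI from Python, with the container name taken from the records above:

    import subprocess

    name = "ovn_controller"  # any container_name from the records above
    # Run the configured healthcheck once, then read the recorded state.
    subprocess.run(["podman", "healthcheck", "run", name], check=False)
    state = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True).stdout.strip()
    print(name, state)  # expected: "healthy", matching health_status above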
Dec 03 02:39:59 compute-0 ceph-mon[192821]: pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:39:59 compute-0 nova_compute[351485]: 2025-12-03 02:39:59.534 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:59.677 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:59.678 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:59.678 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:39:59 compute-0 podman[158098]: time="2025-12-03T02:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8214 "" "Go-http-client/1.1"
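[annotation] The two GET lines above are libpod REST calls arriving over the podman API socket (podman_exporter scraping container stats). A sketch replaying the list call over the socket; the socket path is an assumption here, taken from the CONTAINER_HOST setting logged for podman_exporter elsewhere in this log:

    import json
    import socket

    s = socket.socket(socket.AF_UNIX)
    s.connect("/run/podman/podman.sock")
    # HTTP/1.0 so the server closes the connection when the body is done.
    s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
              b"Host: d\r\n\r\n")
    raw = b"".join(iter(lambda: s.recv(65536), b""))
    body = raw.split(b"\r\n\r\n", 1)[1]
    print(len(json.loads(body)), "containers")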
Dec 03 02:39:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:01 compute-0 ceph-mon[192821]: pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
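[annotation] The ERROR lines above are openstack_network_exporter probing OVS/OVN control sockets. On a compute node ovn-northd does not run (it lives on the control plane), so its missing socket is expected noise, and the pmd-perf/pmd-rxq calls most likely fail because no userspace (netdev) datapath exists on this host. A quick check for the sockets the exporter looks for, with the runtime paths assumed from the container volumes logged earlier:

    import glob

    # Control sockets for ovs-vswitchd/ovsdb-server and the OVN daemons.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")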
Dec 03 02:40:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:02 compute-0 nova_compute[351485]: 2025-12-03 02:40:02.144 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:03 compute-0 ceph-mon[192821]: pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:04 compute-0 nova_compute[351485]: 2025-12-03 02:40:04.538 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:05 compute-0 ceph-mon[192821]: pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:07 compute-0 ceph-mon[192821]: pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:07 compute-0 nova_compute[351485]: 2025-12-03 02:40:07.147 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:09 compute-0 ceph-mon[192821]: pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:09 compute-0 nova_compute[351485]: 2025-12-03 02:40:09.542 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:11 compute-0 ceph-mon[192821]: pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:12 compute-0 nova_compute[351485]: 2025-12-03 02:40:12.150 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:13 compute-0 ceph-mon[192821]: pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:14 compute-0 nova_compute[351485]: 2025-12-03 02:40:14.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:15 compute-0 ceph-mon[192821]: pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:15 compute-0 sudo[487382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:15 compute-0 sudo[487382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:15 compute-0 sudo[487382]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:15 compute-0 sudo[487407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:40:15 compute-0 sudo[487407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:15 compute-0 sudo[487407]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:15 compute-0 sudo[487432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:15 compute-0 sudo[487432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:15 compute-0 sudo[487432]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:16 compute-0 podman[487458]: 2025-12-03 02:40:16.060059665 +0000 UTC m=+0.099019555 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 03 02:40:16 compute-0 sudo[487478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:40:16 compute-0 sudo[487478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:16 compute-0 podman[487456]: 2025-12-03 02:40:16.080235065 +0000 UTC m=+0.128967831 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:40:16 compute-0 podman[487457]: 2025-12-03 02:40:16.086908853 +0000 UTC m=+0.121644554 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 03 02:40:16 compute-0 sudo[487478]: pam_unix(sudo:session): session closed for user root
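[annotation] The sudo triplet above (/bin/true, which python3, then the copied cephadm binary with gather-facts) is cephadm's SSH orchestration pattern: it probes for passwordless sudo and a Python interpreter before each real operation. That pattern requires an entry along these lines; an illustrative sudoers fragment, not read from this host:

    # /etc/sudoers.d/ceph-admin -- hypothetical; cephadm needs passwordless
    # root for the probe commands and cephadm calls seen in this log
    ceph-admin ALL=(root) NOPASSWD: ALL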
Dec 03 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:40:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f3df4bec-171c-4134-b92d-b8c299e84bbb does not exist
Dec 03 02:40:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 119854fe-9f00-431a-9d53-54d219afad90 does not exist
Dec 03 02:40:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev eb904554-eec3-43b1-ba60-776c6e2bbcb2 does not exist
Dec 03 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:40:16 compute-0 sudo[487570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:16 compute-0 sudo[487570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:16 compute-0 sudo[487570]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:17 compute-0 sudo[487595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:40:17 compute-0 sudo[487595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:17 compute-0 sudo[487595]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:17 compute-0 ceph-mon[192821]: pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:40:17 compute-0 nova_compute[351485]: 2025-12-03 02:40:17.153 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:17 compute-0 sudo[487620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:17 compute-0 sudo[487620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:17 compute-0 sudo[487620]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:17 compute-0 sudo[487645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:40:17 compute-0 sudo[487645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:17 compute-0 podman[487708]: 2025-12-03 02:40:17.987133156 +0000 UTC m=+0.093103348 container create 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:17.94970553 +0000 UTC m=+0.055675772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:40:18 compute-0 systemd[1]: Started libpod-conmon-90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b.scope.
Dec 03 02:40:18 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.139211368 +0000 UTC m=+0.245181610 container init 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.157877975 +0000 UTC m=+0.263848157 container start 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.165051287 +0000 UTC m=+0.271021529 container attach 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:18 compute-0 vigorous_napier[487724]: 167 167
Dec 03 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.17365044 +0000 UTC m=+0.279620662 container died 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:40:18 compute-0 systemd[1]: libpod-90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b.scope: Deactivated successfully.
Dec 03 02:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2933bceaf615f0f0843f11cf272a144fa4abf18411b25c2a6a2b311fb815dfbd-merged.mount: Deactivated successfully.
Dec 03 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.270330438 +0000 UTC m=+0.376300600 container remove 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 03 02:40:18 compute-0 systemd[1]: libpod-conmon-90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b.scope: Deactivated successfully.
Dec 03 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.555132365 +0000 UTC m=+0.083834756 container create bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 03 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.526281211 +0000 UTC m=+0.054983662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:40:18 compute-0 systemd[1]: Started libpod-conmon-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope.
Dec 03 02:40:18 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.708970947 +0000 UTC m=+0.237673388 container init bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.744645833 +0000 UTC m=+0.273348224 container start bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.750705044 +0000 UTC m=+0.279407445 container attach bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:19 compute-0 ceph-mon[192821]: pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:19 compute-0 nova_compute[351485]: 2025-12-03 02:40:19.548 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:20 compute-0 nervous_lichterman[487764]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:40:20 compute-0 nervous_lichterman[487764]: --> relative data size: 1.0
Dec 03 02:40:20 compute-0 nervous_lichterman[487764]: --> All data devices are unavailable
Dec 03 02:40:20 compute-0 systemd[1]: libpod-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope: Deactivated successfully.
Dec 03 02:40:20 compute-0 podman[487747]: 2025-12-03 02:40:20.089477275 +0000 UTC m=+1.618179666 container died bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:40:20 compute-0 systemd[1]: libpod-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope: Consumed 1.282s CPU time.
Dec 03 02:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095-merged.mount: Deactivated successfully.
Dec 03 02:40:20 compute-0 podman[487747]: 2025-12-03 02:40:20.207485124 +0000 UTC m=+1.736187515 container remove bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:40:20 compute-0 systemd[1]: libpod-conmon-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope: Deactivated successfully.
Dec 03 02:40:20 compute-0 sudo[487645]: pam_unix(sudo:session): session closed for user root
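[annotation] The batch run above exits after "--> All data devices are unavailable": most likely all three LVs (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) already carry OSDs (note the ceph-osd processes 207705 and 208731 logging earlier), so ceph-volume has nothing to create, and cephadm follows up with "lvm list" to reconcile its inventory. A cross-check, assuming ceph-volume is available on the host or inside a cephadm shell:

    import json
    import subprocess

    # Existing OSD->LV mappings; non-empty output explains why the batch
    # considered every passed device unavailable.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(sorted(json.loads(out)))  # e.g. ['0', '1', '2']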
Dec 03 02:40:20 compute-0 sudo[487807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:20 compute-0 sudo[487807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:20 compute-0 sudo[487807]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:20 compute-0 sudo[487832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:40:20 compute-0 sudo[487832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:20 compute-0 sudo[487832]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:20 compute-0 sudo[487857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:20 compute-0 sudo[487857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:20 compute-0 sudo[487857]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:20 compute-0 sudo[487882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:40:20 compute-0 sudo[487882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:21 compute-0 ceph-mon[192821]: pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.437257738 +0000 UTC m=+0.103844612 container create 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.391770124 +0000 UTC m=+0.058357078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:40:21 compute-0 systemd[1]: Started libpod-conmon-1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19.scope.
Dec 03 02:40:21 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.600381881 +0000 UTC m=+0.266968795 container init 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.61874959 +0000 UTC m=+0.285336484 container start 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 03 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.625355356 +0000 UTC m=+0.291942320 container attach 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:40:21 compute-0 competent_lovelace[487960]: 167 167
Dec 03 02:40:21 compute-0 systemd[1]: libpod-1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19.scope: Deactivated successfully.
Dec 03 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.632488827 +0000 UTC m=+0.299075771 container died 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 03 02:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-01f386d7cf362774680dd90444c9e7090d1fcf18791e5a553459bdad0964f894-merged.mount: Deactivated successfully.
Dec 03 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.724636388 +0000 UTC m=+0.391223292 container remove 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 02:40:21 compute-0 systemd[1]: libpod-conmon-1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19.scope: Deactivated successfully.
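
The podman lines above are one complete one-shot container lifecycle (create, init, start, attach, died, remove) compressed into roughly 300 ms. Its only output, `167 167`, is consistent with cephadm probing the uid/gid of the ceph user inside the image before launching the real ceph-volume container; the ceph user in these images is uid and gid 167. Lifecycles like this can be followed live with `podman events`; a sketch, assuming podman 4.x JSON field names and an illustrative `--since` window:

    import json
    import subprocess

    # Stream container lifecycle events like the create -> died -> remove
    # run above. `podman events --format json` emits one object per line;
    # the "--since 1m" window is an arbitrary choice for illustration.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--since", "1m"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            print(ev["Status"], ev.get("Name"), ev.get("ID", "")[:12])
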
Dec 03 02:40:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.005863024 +0000 UTC m=+0.071078137 container create b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:22 compute-0 systemd[1]: Started libpod-conmon-b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da.scope.
Dec 03 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:21.987781984 +0000 UTC m=+0.052997117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:40:22 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
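
These four kernel notices are informational, not failures: the overlay layers sit on an XFS filesystem formatted without the bigtime feature, so its inode timestamps cap at 0x7fffffff seconds after the epoch. The cap converts to the familiar Y2038 boundary:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit the kernel reports for non-bigtime XFS inodes.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
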
Dec 03 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.13437243 +0000 UTC m=+0.199587553 container init b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.146295117 +0000 UTC m=+0.211510230 container start b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.150475815 +0000 UTC m=+0.215690928 container attach b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:22 compute-0 nova_compute[351485]: 2025-12-03 02:40:22.176 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:23 compute-0 dreamy_noether[488002]: {
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:     "0": [
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:         {
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "devices": [
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "/dev/loop3"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             ],
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_name": "ceph_lv0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_size": "21470642176",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "name": "ceph_lv0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "tags": {
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cluster_name": "ceph",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.crush_device_class": "",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.encrypted": "0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osd_id": "0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.type": "block",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.vdo": "0"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             },
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "type": "block",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "vg_name": "ceph_vg0"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:         }
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:     ],
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:     "1": [
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:         {
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "devices": [
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "/dev/loop4"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             ],
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_name": "ceph_lv1",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_size": "21470642176",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "name": "ceph_lv1",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "tags": {
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cluster_name": "ceph",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.crush_device_class": "",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.encrypted": "0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osd_id": "1",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.type": "block",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.vdo": "0"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             },
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "type": "block",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "vg_name": "ceph_vg1"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:         }
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:     ],
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:     "2": [
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:         {
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "devices": [
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "/dev/loop5"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             ],
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_name": "ceph_lv2",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_size": "21470642176",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "name": "ceph_lv2",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "tags": {
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.cluster_name": "ceph",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.crush_device_class": "",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.encrypted": "0",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osd_id": "2",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.type": "block",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:                 "ceph.vdo": "0"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             },
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "type": "block",
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:             "vg_name": "ceph_vg2"
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:         }
Dec 03 02:40:23 compute-0 dreamy_noether[488002]:     ]
Dec 03 02:40:23 compute-0 dreamy_noether[488002]: }
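
The JSON block emitted by dreamy_noether is the `ceph-volume lvm list --format json` payload: a map from OSD id to a list of logical-volume records, with the authoritative OSD-to-device bindings carried in the `tags` dict (and duplicated, flattened, in `lv_tags`). Each LV is 21470642176 bytes, just under 20 GiB, which lines up with the 60 GiB total the pgmap lines report across the three OSDs. A sketch that reduces exactly this structure to an id-to-device map:

    import json

    # `payload` is the JSON block logged above (OSD id -> list of LV records).
    def osd_devices(payload: str) -> dict:
        mapping = {}
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                tags = lv["tags"]
                assert tags["ceph.osd_id"] == osd_id  # ids agree in this log
                mapping[int(osd_id)] = tags["ceph.block_device"]
        return mapping

    # For the payload above this returns:
    # {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1',
    #  2: '/dev/ceph_vg2/ceph_lv2'}
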
Dec 03 02:40:23 compute-0 systemd[1]: libpod-b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da.scope: Deactivated successfully.
Dec 03 02:40:23 compute-0 podman[487985]: 2025-12-03 02:40:23.094887516 +0000 UTC m=+1.160102659 container died b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 02:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c-merged.mount: Deactivated successfully.
Dec 03 02:40:23 compute-0 ceph-mon[192821]: pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:23 compute-0 podman[487985]: 2025-12-03 02:40:23.208231875 +0000 UTC m=+1.273446998 container remove b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 03 02:40:23 compute-0 systemd[1]: libpod-conmon-b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da.scope: Deactivated successfully.
Dec 03 02:40:23 compute-0 sudo[487882]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:23 compute-0 sudo[488023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:23 compute-0 sudo[488023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:23 compute-0 sudo[488023]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:23 compute-0 sudo[488048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:40:23 compute-0 sudo[488048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:23 compute-0 sudo[488048]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:23 compute-0 sudo[488073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:23 compute-0 sudo[488073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:23 compute-0 sudo[488073]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:23 compute-0 sudo[488098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:40:23 compute-0 sudo[488098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.403016011 +0000 UTC m=+0.084424784 container create a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.367170269 +0000 UTC m=+0.048579102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:40:24 compute-0 systemd[1]: Started libpod-conmon-a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe.scope.
Dec 03 02:40:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:40:24 compute-0 nova_compute[351485]: 2025-12-03 02:40:24.551 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.558987262 +0000 UTC m=+0.240396085 container init a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 03 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.574431848 +0000 UTC m=+0.255840591 container start a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.579700877 +0000 UTC m=+0.261109710 container attach a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:40:24 compute-0 nostalgic_brahmagupta[488177]: 167 167
Dec 03 02:40:24 compute-0 systemd[1]: libpod-a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe.scope: Deactivated successfully.
Dec 03 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.587152947 +0000 UTC m=+0.268561730 container died a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c39d0f166615323d7de4d1ee1dc2a335ca0ce04153eadd9c62320d2ba33208ad-merged.mount: Deactivated successfully.
Dec 03 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.652593444 +0000 UTC m=+0.334002217 container remove a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:40:24 compute-0 systemd[1]: libpod-conmon-a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe.scope: Deactivated successfully.
Dec 03 02:40:24 compute-0 podman[488199]: 2025-12-03 02:40:24.886130954 +0000 UTC m=+0.061310471 container create c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 03 02:40:24 compute-0 systemd[1]: Started libpod-conmon-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope.
Dec 03 02:40:24 compute-0 podman[488199]: 2025-12-03 02:40:24.870494583 +0000 UTC m=+0.045674120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:40:24 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:40:25 compute-0 podman[488199]: 2025-12-03 02:40:25.031892237 +0000 UTC m=+0.207071784 container init c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:40:25 compute-0 podman[488199]: 2025-12-03 02:40:25.062206633 +0000 UTC m=+0.237386190 container start c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:25 compute-0 podman[488199]: 2025-12-03 02:40:25.073055999 +0000 UTC m=+0.248235566 container attach c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 03 02:40:25 compute-0 podman[488214]: 2025-12-03 02:40:25.129399709 +0000 UTC m=+0.157420153 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:40:25 compute-0 ceph-mon[192821]: pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:26 compute-0 funny_meitner[488215]: {
Dec 03 02:40:26 compute-0 funny_meitner[488215]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "osd_id": 2,
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "type": "bluestore"
Dec 03 02:40:26 compute-0 funny_meitner[488215]:     },
Dec 03 02:40:26 compute-0 funny_meitner[488215]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "osd_id": 1,
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "type": "bluestore"
Dec 03 02:40:26 compute-0 funny_meitner[488215]:     },
Dec 03 02:40:26 compute-0 funny_meitner[488215]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "osd_id": 0,
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:40:26 compute-0 funny_meitner[488215]:         "type": "bluestore"
Dec 03 02:40:26 compute-0 funny_meitner[488215]:     }
Dec 03 02:40:26 compute-0 funny_meitner[488215]: }
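
funny_meitner's output is the companion `ceph-volume raw list --format json` view of the same three OSDs: keyed by osd_uuid rather than OSD id, with devices given as device-mapper paths (/dev/mapper/ceph_vgN-ceph_lvN) rather than LV paths, and all of type bluestore. A sketch cross-checking the two listings against each other, using the field names exactly as they appear in the blocks above:

    import json

    # lvm_payload / raw_payload: the two JSON blocks logged in this section.
    def cross_check(lvm_payload: str, raw_payload: str) -> None:
        lvm = json.loads(lvm_payload)
        raw = json.loads(raw_payload)
        for osd_uuid, rec in raw.items():
            tags = lvm[str(rec["osd_id"])][0]["tags"]
            # raw list keys by the OSD fsid; lvm list carries it as a tag.
            assert tags["ceph.osd_fsid"] == osd_uuid
            assert rec["type"] == "bluestore"
        print("raw list and lvm list agree on", len(raw), "OSDs")
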
Dec 03 02:40:26 compute-0 systemd[1]: libpod-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope: Deactivated successfully.
Dec 03 02:40:26 compute-0 systemd[1]: libpod-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope: Consumed 1.217s CPU time.
Dec 03 02:40:26 compute-0 podman[488266]: 2025-12-03 02:40:26.372302014 +0000 UTC m=+0.069001128 container died c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce-merged.mount: Deactivated successfully.
Dec 03 02:40:26 compute-0 podman[488266]: 2025-12-03 02:40:26.489331506 +0000 UTC m=+0.186030580 container remove c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:40:26 compute-0 systemd[1]: libpod-conmon-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope: Deactivated successfully.
Dec 03 02:40:26 compute-0 sudo[488098]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:40:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:40:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:40:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:40:26 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 24fa1481-e392-44a0-83ff-c6bf2975afb4 does not exist
Dec 03 02:40:26 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3557086b-ac86-468f-ad71-5c2885d78cf4 does not exist
Dec 03 02:40:26 compute-0 sudo[488279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:40:26 compute-0 sudo[488279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:26 compute-0 sudo[488279]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:26 compute-0 sudo[488304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:40:26 compute-0 sudo[488304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:40:26 compute-0 sudo[488304]: pam_unix(sudo:session): session closed for user root
Dec 03 02:40:27 compute-0 nova_compute[351485]: 2025-12-03 02:40:27.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:27 compute-0 ceph-mon[192821]: pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:40:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:40:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:40:28
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log']
Dec 03 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:40:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:29 compute-0 ceph-mon[192821]: pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:40:29 compute-0 nova_compute[351485]: 2025-12-03 02:40:29.554 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:29 compute-0 podman[158098]: time="2025-12-03T02:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec 03 02:40:29 compute-0 podman[488331]: 2025-12-03 02:40:29.883341347 +0000 UTC m=+0.104286334 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:40:29 compute-0 podman[488330]: 2025-12-03 02:40:29.889339716 +0000 UTC m=+0.131423390 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Dec 03 02:40:29 compute-0 podman[488337]: 2025-12-03 02:40:29.898752221 +0000 UTC m=+0.115867360 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 03 02:40:29 compute-0 podman[488332]: 2025-12-03 02:40:29.92919145 +0000 UTC m=+0.159326007 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, version=9.4)
Dec 03 02:40:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:29 compute-0 podman[488329]: 2025-12-03 02:40:29.949223085 +0000 UTC m=+0.184907719 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
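The three health_status entries above are podman's periodic healthchecks: each container's config_data carries a 'healthcheck' entry whose 'test' command (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/<name>) podman runs inside the container and journals, with health_failing_streak counting consecutive failures. A minimal sketch, assuming the podman CLI and the "multipathd" container above, that triggers the same check and reads the accumulated state:

    # Sketch, assuming podman is installed and "multipathd" exists.
    # "podman healthcheck run" executes the container's configured test once;
    # exit status 0 means healthy, nonzero means unhealthy.
    import json
    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")

    # The state journaled above (health_status, failing streak) lives in the
    # inspect data; the key is "Health" on podman 4.x ("Healthcheck" on some
    # older releases), so probe both.
    state = json.loads(subprocess.check_output(
        ["podman", "inspect", "multipathd"]))[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))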
Dec 03 02:40:30 compute-0 ceph-mon[192821]: pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:30 compute-0 nova_compute[351485]: 2025-12-03 02:40:30.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
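The ERROR lines above come from the exporter's appctl client: it discovers a daemon's control socket by globbing the run directory for <daemon>.<pid>.ctl before issuing appctl-style calls. On a compute node ovn-northd typically runs on the control plane, so "no control socket files found for ovn-northd" is expected noise here, and the dpif-netdev/pmd-* calls fail because this host has no userspace (netdev) datapath. A sketch of that discovery step, assuming the default run directories:

    # Sketch of the exporter's socket discovery, assuming default run dirs.
    # An empty glob is exactly the "no control socket files found" error
    # journaled above.
    import glob

    RUN_DIRS = {
        "ovsdb-server": "/var/run/openvswitch",  # assumption: default ovs_rundir
        "ovs-vswitchd": "/var/run/openvswitch",
        "ovn-northd":   "/var/run/ovn",          # not running on compute nodes
    }
    for daemon, run_dir in RUN_DIRS.items():
        sockets = glob.glob(f"{run_dir}/{daemon}.*.ctl")
        print(daemon, "->", sockets or "no control socket files found")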
Dec 03 02:40:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:32 compute-0 nova_compute[351485]: 2025-12-03 02:40:32.183 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:33 compute-0 ceph-mon[192821]: pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.035016) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634035053, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 975, "num_deletes": 251, "total_data_size": 1403388, "memory_usage": 1431856, "flush_reason": "Manual Compaction"}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634046478, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1379190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52343, "largest_seqno": 53317, "table_properties": {"data_size": 1374344, "index_size": 2434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10414, "raw_average_key_size": 19, "raw_value_size": 1364671, "raw_average_value_size": 2574, "num_data_blocks": 109, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729544, "oldest_key_time": 1764729544, "file_creation_time": 1764729634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 11585 microseconds, and 5662 cpu microseconds.
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.046599) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1379190 bytes OK
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.046618) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.048640) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.048655) EVENT_LOG_v1 {"time_micros": 1764729634048650, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.048671) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1398736, prev total WAL file size 1398736, number of live WAL files 2.
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.050697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1346KB)], [125(9411KB)]
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634050776, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11016538, "oldest_snapshot_seqno": -1}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6674 keys, 9270065 bytes, temperature: kUnknown
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634124208, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9270065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9227240, "index_size": 25048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175215, "raw_average_key_size": 26, "raw_value_size": 9108277, "raw_average_value_size": 1364, "num_data_blocks": 991, "num_entries": 6674, "num_filter_entries": 6674, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.124471) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9270065 bytes
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.126408) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.8 rd, 126.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(14.7) write-amplify(6.7) OK, records in: 7188, records dropped: 514 output_compression: NoCompression
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.126428) EVENT_LOG_v1 {"time_micros": 1764729634126419, "job": 76, "event": "compaction_finished", "compaction_time_micros": 73528, "compaction_time_cpu_micros": 47948, "output_level": 6, "num_output_files": 1, "total_output_size": 9270065, "num_input_records": 7188, "num_output_records": 6674, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634126949, "job": 76, "event": "table_file_deletion", "file_number": 127}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634129843, "job": 76, "event": "table_file_deletion", "file_number": 125}
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.049471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
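The flush and compaction counters above are internally consistent; the amplification and throughput figures rocksdb prints for JOB 76 can be recomputed directly from the EVENT_LOG values:

    # Recomputing rocksdb's JOB 76 figures from the EVENT_LOG entries above.
    input_bytes  = 11_016_538  # input_data_size: L0 file 127 + L6 file 125
    output_bytes = 9_270_065   # total_output_size: new L6 file 128
    l0_bytes     = 1_379_190   # flushed table 127, the newly ingested data
    micros       = 73_528      # compaction_time_micros

    print(f"write-amplify:      {output_bytes / l0_bytes:.1f}")                  # 6.7
    print(f"read-write-amplify: {(input_bytes + output_bytes) / l0_bytes:.1f}")  # 14.7
    print(f"MB/sec rd:          {input_bytes / micros:.1f}")                     # 149.8
    print(f"MB/sec wr:          {output_bytes / micros:.1f}")                    # 126.1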
Dec 03 02:40:34 compute-0 nova_compute[351485]: 2025-12-03 02:40:34.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:35 compute-0 ceph-mon[192821]: pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:37 compute-0 ceph-mon[192821]: pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:37 compute-0 nova_compute[351485]: 2025-12-03 02:40:37.186 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:39 compute-0 ceph-mon[192821]: pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
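Each pg_autoscaler "pg target" above is the pool's usage ratio times its bias times a cluster-wide PG budget, before quantization to a power of two with a floor. The logged values match a budget of 300, which would be the default mon_target_pg_per_osd of 100 times 3 OSDs; the OSD count is an inference from the 60 GiB cluster, not logged directly:

    # Reproducing three of the "pg target" values above. PG_BUDGET = 300 is an
    # assumption: default mon_target_pg_per_osd (100) times 3 OSDs.
    PG_BUDGET = 100 * 3

    pools = {  # pool: (usage ratio, bias), copied from the log lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0009191400908380543, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # -> 0.0021557249951162337, 0.2757420272514163, 0.0006104707950771635,
    #    matching the logged targets before power-of-two quantization.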
Dec 03 02:40:39 compute-0 nova_compute[351485]: 2025-12-03 02:40:39.559 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.632 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.633 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.633 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.634 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.634 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:40:41 compute-0 ceph-mon[192821]: pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:40:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749493240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.190 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
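nova's RBD disk accounting shells out to ceph df rather than linking librados; the mon audit entries above show the resulting mon_command ({"prefix": "df", "format": "json"}) dispatched for client.openstack. The same probe can be reproduced standalone, assuming the client.openstack keyring and /etc/ceph/ceph.conf from the log are readable:

    # Standalone version of the probe nova runs above; assumes the same
    # client id and conf path are usable from this shell.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    stats = df["stats"]  # cluster totals; per-pool data sits in df["pools"]
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])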
Dec 03 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.771 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.774 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.775 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.776 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:40:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:42 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1749493240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:40:42 compute-0 nova_compute[351485]: 2025-12-03 02:40:42.189 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:42 compute-0 nova_compute[351485]: 2025-12-03 02:40:42.713 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:40:42 compute-0 nova_compute[351485]: 2025-12-03 02:40:42.714 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:40:43 compute-0 ceph-mon[192821]: pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.128 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 03 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.577 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 03 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.578 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 03 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.596 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 03 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.625 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 03 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.646 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:40:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:40:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591209415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.129 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.144 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.169 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.172 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.173 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
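The audit above ran end to end under the "compute_resources" lock (acquired 02:40:41.776, released 02:40:44.173, matching oslo's "held 2.398s"), and the inventory it confirmed with placement encodes the usable capacity of each resource class as (total - reserved) * allocation_ratio:

    # Capacity placement schedules against, from the inventory dict logged
    # above: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2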
Dec 03 02:40:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1591209415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.562 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.174 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.174 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.175 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.197 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.197 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.198 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:45 compute-0 ceph-mon[192821]: pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:46 compute-0 podman[488476]: 2025-12-03 02:40:46.864267103 +0000 UTC m=+0.102558495 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:40:46 compute-0 podman[488475]: 2025-12-03 02:40:46.898621812 +0000 UTC m=+0.143149100 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:40:46 compute-0 podman[488474]: 2025-12-03 02:40:46.901284107 +0000 UTC m=+0.152824493 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 03 02:40:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:40:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1655014276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:40:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:40:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1655014276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:40:47 compute-0 nova_compute[351485]: 2025-12-03 02:40:47.193 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:47 compute-0 ceph-mon[192821]: pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1655014276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:40:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1655014276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:40:47 compute-0 nova_compute[351485]: 2025-12-03 02:40:47.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:47 compute-0 nova_compute[351485]: 2025-12-03 02:40:47.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:48 compute-0 ceph-mon[192821]: pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:48 compute-0 sshd-session[488536]: Received disconnect from 154.113.10.113 port 34704:11: Bye Bye [preauth]
Dec 03 02:40:48 compute-0 sshd-session[488536]: Disconnected from authenticating user root 154.113.10.113 port 34704 [preauth]
Dec 03 02:40:48 compute-0 nova_compute[351485]: 2025-12-03 02:40:48.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:49 compute-0 nova_compute[351485]: 2025-12-03 02:40:49.567 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:50 compute-0 nova_compute[351485]: 2025-12-03 02:40:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:50 compute-0 nova_compute[351485]: 2025-12-03 02:40:50.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:40:51 compute-0 ceph-mon[192821]: pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:52 compute-0 nova_compute[351485]: 2025-12-03 02:40:52.196 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:52 compute-0 nova_compute[351485]: 2025-12-03 02:40:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:40:53 compute-0 ceph-mon[192821]: pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:54 compute-0 nova_compute[351485]: 2025-12-03 02:40:54.568 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:55 compute-0 ceph-mon[192821]: pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:55 compute-0 podman[488539]: 2025-12-03 02:40:55.88908036 +0000 UTC m=+0.147680268 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 03 02:40:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:57 compute-0 ceph-mon[192821]: pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:57 compute-0 nova_compute[351485]: 2025-12-03 02:40:57.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:40:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:40:59 compute-0 ceph-mon[192821]: pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:40:59 compute-0 nova_compute[351485]: 2025-12-03 02:40:59.572 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:40:59.678 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:40:59.679 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:40:59.679 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:40:59 compute-0 podman[158098]: time="2025-12-03T02:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8218 "" "Go-http-client/1.1"
Dec 03 02:40:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:00 compute-0 podman[488560]: 2025-12-03 02:41:00.867799828 +0000 UTC m=+0.093972852 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:41:00 compute-0 podman[488568]: 2025-12-03 02:41:00.911773819 +0000 UTC m=+0.124166595 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 03 02:41:00 compute-0 podman[488561]: 2025-12-03 02:41:00.913919899 +0000 UTC m=+0.135063611 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 03 02:41:00 compute-0 podman[488559]: 2025-12-03 02:41:00.936287311 +0000 UTC m=+0.170370779 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec 03 02:41:00 compute-0 podman[488558]: 2025-12-03 02:41:00.954919717 +0000 UTC m=+0.199910252 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
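The five podman health_status records above share one layout: a 64-hex container ID followed by a parenthesized key=value list carrying name, health_status, health_failing_streak and the full config_data dict. A minimal parsing sketch, assuming exactly this journal layout (field names are taken from the records above; the nested config_data literal is deliberately left unparsed):

    import re

    # Layout seen in the podman records above:
    #   "container health_status <64-hex id> (image=..., name=..., health_status=..., ...)"
    HEALTH_RE = re.compile(r"container health_status (?P<cid>[0-9a-f]{64}) \((?P<fields>.*)\)$")

    def parse_health_event(message):
        """Extract container id, name, status and failing streak from one record."""
        m = HEALTH_RE.search(message)
        if m is None:
            return None
        fields = m.group("fields")
        event = {"cid": m.group("cid")}
        for key in ("name", "health_status", "health_failing_streak"):
            km = re.search(r"(?:^|, )%s=([^,)]*)" % key, fields)
            event[key] = km.group(1) if km else None
        return event

Fed the node_exporter record above, this yields name=node_exporter, health_status=healthy, health_failing_streak=0; a streak above zero is the field worth alerting on.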
Dec 03 02:41:01 compute-0 ceph-mon[192821]: pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
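These ERROR lines are expected noise on a compute node: ovn-northd and a standalone ovsdb-server expose appctl control sockets only on nodes where they actually run, and the dpif-netdev/pmd-* commands only answer once a userspace (netdev) datapath exists, so the exporter's probes find nothing to talk to. A hedged pre-flight sketch along the same lines, assuming the conventional control-socket locations under /var/run/openvswitch and /var/run/ovn (illustrative paths, not taken from the exporter's source):

    import glob

    # Assumed conventional appctl control-socket patterns (illustrative only).
    SOCKET_PATTERNS = {
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    }

    def reachable_daemons():
        """Map daemon name -> whether a control socket file exists on this host."""
        return {name: bool(glob.glob(pat)) for name, pat in SOCKET_PATTERNS.items()}

    for name, ok in reachable_daemons().items():
        print(name, "socket found" if ok else "no control socket, skip probe")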
Dec 03 02:41:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:02 compute-0 nova_compute[351485]: 2025-12-03 02:41:02.202 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:03 compute-0 ceph-mon[192821]: pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
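The _set_new_cache_sizes line shows the mon's cache autotuner dividing a roughly 0.95 GiB budget: 332 MiB each for inc_alloc and full_alloc and 304 MiB for kv_alloc. Converting the logged values confirms the split (pure arithmetic on the numbers above; how Ceph combines the three allocations internally is not assumed here):

    MiB = 1024 * 1024
    sizes = {
        "cache_size": 1020054731,
        "inc_alloc": 348127232,
        "full_alloc": 348127232,
        "kv_alloc": 318767104,
    }
    for name, value in sizes.items():
        print(f"{name:>10}: {value / MiB:7.1f} MiB")
    # cache_size: 972.8 MiB; inc_alloc/full_alloc: 332.0 MiB each; kv_alloc: 304.0 MiB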
Dec 03 02:41:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:04 compute-0 nova_compute[351485]: 2025-12-03 02:41:04.575 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:05 compute-0 ceph-mon[192821]: pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:07 compute-0 ceph-mon[192821]: pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:07 compute-0 nova_compute[351485]: 2025-12-03 02:41:07.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
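The ceph-mon/ceph-mgr pair repeats the same pgmap summary every couple of seconds, only the version number advancing (v2607, v2608, ...) while all 321 PGs stay active+clean. A small sketch, with the regex written against the exact wording above, that turns one summary into fields suitable for graphing:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\d+ \w+) data, (?P<used>\d+ \w+) used, "
        r"(?P<avail>\d+ \w+) / (?P<total>\d+ \w+) avail"
    )

    def parse_pgmap(line):
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None

    print(parse_pgmap(
        "pgmap v2607: 321 pgs: 321 active+clean; "
        "57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail"
    ))
    # {'ver': '2607', 'pgs': '321', 'data': '57 MiB', 'used': '279 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}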
Dec 03 02:41:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:09 compute-0 ceph-mon[192821]: pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:09 compute-0 nova_compute[351485]: 2025-12-03 02:41:09.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:11 compute-0 ceph-mon[192821]: pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:12 compute-0 nova_compute[351485]: 2025-12-03 02:41:12.208 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
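The recurring __log_wakeup lines from nova_compute are ovsdbapp's IDL noting that fd 24, its OVSDB connection, became readable (POLLIN) and the poll loop woke to service it. The underlying pattern is a plain readiness wait; a generic sketch of it follows (the ovs poller's actual implementation differs):

    import select

    def wait_readable(fd, timeout_ms=5000):
        """Block until fd is readable or the timeout expires; True if readable."""
        poller = select.poll()
        poller.register(fd, select.POLLIN)
        events = poller.poll(timeout_ms)
        # poll() returns (fd, eventmask) pairs; POLLIN set means "readable",
        # which is what the "[POLLIN] on fd 24" debug lines above report.
        return any(mask & select.POLLIN for _, mask in events)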
Dec 03 02:41:13 compute-0 ceph-mon[192821]: pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:14 compute-0 nova_compute[351485]: 2025-12-03 02:41:14.580 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:15 compute-0 ceph-mon[192821]: pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:17 compute-0 nova_compute[351485]: 2025-12-03 02:41:17.211 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:17 compute-0 ceph-mon[192821]: pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:17 compute-0 podman[488664]: 2025-12-03 02:41:17.884258939 +0000 UTC m=+0.121182941 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:41:17 compute-0 podman[488662]: 2025-12-03 02:41:17.887805339 +0000 UTC m=+0.140326171 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 03 02:41:17 compute-0 podman[488663]: 2025-12-03 02:41:17.899285163 +0000 UTC m=+0.147219226 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:41:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:19 compute-0 ceph-mon[192821]: pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.519 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.519 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
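The two DEBUG lines above record a deliberate trade-off: the [pollsters] source has more pollsters than worker threads (here a single thread), so registrations queue up and the whole cycle runs essentially serially. The effect is easy to reproduce with the same primitive ceilometer uses, concurrent.futures.ThreadPoolExecutor (a toy model, not ceilometer's code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's sampling work
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # 1 worker, 8 queued tasks
        list(pool.map(poll, pollsters))
    print(f"8 pollsters / 1 worker took {time.monotonic() - start:.1f}s")  # ~0.8s: serial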
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
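Every cycle in this block ends the same way: the local_instances discovery returns an empty list (no VMs are running on compute-0 yet), so each pollster is skipped instead of sampled. The control flow reduces to a guard like this simplified model (not ceilometer's actual code):

    def run_pollster(name, discover):
        resources = discover()  # e.g. the local_instances discovery above
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        for resource in resources:
            pass  # per-resource sample collection would happen here

    run_pollster("memory.usage", lambda: [])  # no local instances -> skipped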
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:41:19 compute-0 nova_compute[351485]: 2025-12-03 02:41:19.584 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:21 compute-0 ceph-mon[192821]: pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 03 02:41:22 compute-0 nova_compute[351485]: 2025-12-03 02:41:22.214 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:23 compute-0 ceph-mon[192821]: pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 03 02:41:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 03 02:41:24 compute-0 nova_compute[351485]: 2025-12-03 02:41:24.589 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:25 compute-0 ceph-mon[192821]: pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 03 02:41:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Dec 03 02:41:26 compute-0 ceph-mon[192821]: pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Dec 03 02:41:26 compute-0 podman[488723]: 2025-12-03 02:41:26.880781268 +0000 UTC m=+0.135966788 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:41:27 compute-0 sudo[488743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:27 compute-0 sudo[488743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:27 compute-0 sudo[488743]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:27 compute-0 sudo[488769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:41:27 compute-0 sudo[488769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:27 compute-0 sudo[488769]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:27 compute-0 nova_compute[351485]: 2025-12-03 02:41:27.217 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:27 compute-0 sudo[488794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 03 02:41:28 compute-0 sudo[488794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:28 compute-0 sudo[488794]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:28 compute-0 sudo[488819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:41:28 compute-0 sudo[488819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:41:28
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:28 compute-0 sudo[488819]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0f89b476-c266-4863-954e-27166a514892 does not exist
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 20f1ebe2-7c80-4205-89ef-2cc8b2cf4dc6 does not exist
Dec 03 02:41:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e72c6680-acee-46a6-8265-90adc8307442 does not exist
Dec 03 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:41:29 compute-0 sudo[488874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:29 compute-0 ceph-mon[192821]: pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 03 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:41:29 compute-0 sudo[488874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:29 compute-0 sudo[488874]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:29 compute-0 sudo[488899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:41:29 compute-0 sudo[488899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:29 compute-0 sudo[488899]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:29 compute-0 sudo[488924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:29 compute-0 sudo[488924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:29 compute-0 sudo[488924]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:41:29 compute-0 sudo[488949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:41:29 compute-0 sudo[488949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:29 compute-0 nova_compute[351485]: 2025-12-03 02:41:29.589 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:29 compute-0 podman[158098]: time="2025-12-03T02:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8220 "" "Go-http-client/1.1"
Dec 03 02:41:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.048829569 +0000 UTC m=+0.086163093 container create 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.019304725 +0000 UTC m=+0.056638249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:41:30 compute-0 systemd[1]: Started libpod-conmon-73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31.scope.
Dec 03 02:41:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.2072737 +0000 UTC m=+0.244607264 container init 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 03 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.229367223 +0000 UTC m=+0.266700737 container start 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 03 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.236281598 +0000 UTC m=+0.273615182 container attach 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:41:30 compute-0 sweet_curie[489027]: 167 167
Dec 03 02:41:30 compute-0 systemd[1]: libpod-73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31.scope: Deactivated successfully.
Dec 03 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.243865042 +0000 UTC m=+0.281198566 container died 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 03 02:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b27f78b2f3803d89acfe3075bb3cccfec1e4617734e8d5774151ceb321774cf-merged.mount: Deactivated successfully.
Dec 03 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.318697084 +0000 UTC m=+0.356030568 container remove 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:41:30 compute-0 systemd[1]: libpod-conmon-73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31.scope: Deactivated successfully.
Dec 03 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.591245366 +0000 UTC m=+0.090532736 container create ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.554640913 +0000 UTC m=+0.053928333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:41:30 compute-0 systemd[1]: Started libpod-conmon-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope.
Dec 03 02:41:30 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.770067472 +0000 UTC m=+0.269354902 container init ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.801137869 +0000 UTC m=+0.300425249 container start ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.807620282 +0000 UTC m=+0.306907702 container attach ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 03 02:41:31 compute-0 ceph-mon[192821]: pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:41:31 compute-0 podman[489083]: 2025-12-03 02:41:31.886371583 +0000 UTC m=+0.104581952 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:41:31 compute-0 podman[489082]: 2025-12-03 02:41:31.897578019 +0000 UTC m=+0.129253228 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, version=9.6, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64)
Dec 03 02:41:31 compute-0 podman[489084]: 2025-12-03 02:41:31.910573696 +0000 UTC m=+0.117888898 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0)
Dec 03 02:41:31 compute-0 podman[489080]: 2025-12-03 02:41:31.92417713 +0000 UTC m=+0.156955840 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Dec 03 02:41:31 compute-0 podman[489085]: 2025-12-03 02:41:31.929081268 +0000 UTC m=+0.140004822 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:41:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:41:32 compute-0 modest_mclean[489068]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:41:32 compute-0 modest_mclean[489068]: --> relative data size: 1.0
Dec 03 02:41:32 compute-0 modest_mclean[489068]: --> All data devices are unavailable
Dec 03 02:41:32 compute-0 systemd[1]: libpod-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope: Deactivated successfully.
Dec 03 02:41:32 compute-0 systemd[1]: libpod-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope: Consumed 1.266s CPU time.
Dec 03 02:41:32 compute-0 podman[489052]: 2025-12-03 02:41:32.115936802 +0000 UTC m=+1.615224202 container died ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545-merged.mount: Deactivated successfully.
Dec 03 02:41:32 compute-0 nova_compute[351485]: 2025-12-03 02:41:32.220 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:32 compute-0 podman[489052]: 2025-12-03 02:41:32.223882128 +0000 UTC m=+1.723169468 container remove ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:41:32 compute-0 systemd[1]: libpod-conmon-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope: Deactivated successfully.
Dec 03 02:41:32 compute-0 sudo[488949]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:32 compute-0 sudo[489209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:32 compute-0 sudo[489209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:32 compute-0 sudo[489209]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:32 compute-0 sudo[489234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:41:32 compute-0 nova_compute[351485]: 2025-12-03 02:41:32.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:32 compute-0 sudo[489234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:32 compute-0 sudo[489234]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:32 compute-0 sudo[489259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:32 compute-0 sudo[489259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:32 compute-0 sudo[489259]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:32 compute-0 sudo[489284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:41:32 compute-0 sudo[489284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:33 compute-0 ceph-mon[192821]: pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 03 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.493770124 +0000 UTC m=+0.101115084 container create dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 03 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.460413513 +0000 UTC m=+0.067758483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:41:33 compute-0 systemd[1]: Started libpod-conmon-dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32.scope.
Dec 03 02:41:33 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.657286238 +0000 UTC m=+0.264631248 container init dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.673979489 +0000 UTC m=+0.281324449 container start dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.680826033 +0000 UTC m=+0.288171023 container attach dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:41:33 compute-0 suspicious_heyrovsky[489362]: 167 167
Dec 03 02:41:33 compute-0 systemd[1]: libpod-dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32.scope: Deactivated successfully.
Dec 03 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.688327414 +0000 UTC m=+0.295672354 container died dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 03 02:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-590fe348a658b8de6f4d425f33acbc8b8dfc9d4555f6d606203919aade8f5d6a-merged.mount: Deactivated successfully.
Dec 03 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.758757812 +0000 UTC m=+0.366102762 container remove dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 03 02:41:33 compute-0 systemd[1]: libpod-conmon-dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32.scope: Deactivated successfully.
Dec 03 02:41:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec 03 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.064654594 +0000 UTC m=+0.096620867 container create a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.025395716 +0000 UTC m=+0.057362059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:41:34 compute-0 systemd[1]: Started libpod-conmon-a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9.scope.
Dec 03 02:41:34 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.225412341 +0000 UTC m=+0.257378694 container init a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.245060155 +0000 UTC m=+0.277026438 container start a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.250864029 +0000 UTC m=+0.282830402 container attach a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:41:34 compute-0 nova_compute[351485]: 2025-12-03 02:41:34.592 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]: {
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:     "0": [
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:         {
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "devices": [
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "/dev/loop3"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             ],
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_name": "ceph_lv0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_size": "21470642176",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "name": "ceph_lv0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "tags": {
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cluster_name": "ceph",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.crush_device_class": "",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.encrypted": "0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osd_id": "0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.type": "block",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.vdo": "0"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             },
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "type": "block",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "vg_name": "ceph_vg0"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:         }
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:     ],
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:     "1": [
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:         {
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "devices": [
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "/dev/loop4"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             ],
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_name": "ceph_lv1",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_size": "21470642176",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "name": "ceph_lv1",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "tags": {
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cluster_name": "ceph",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.crush_device_class": "",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.encrypted": "0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osd_id": "1",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.type": "block",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.vdo": "0"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             },
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "type": "block",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "vg_name": "ceph_vg1"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:         }
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:     ],
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:     "2": [
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:         {
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "devices": [
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "/dev/loop5"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             ],
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_name": "ceph_lv2",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_size": "21470642176",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "name": "ceph_lv2",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "tags": {
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.cluster_name": "ceph",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.crush_device_class": "",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.encrypted": "0",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osd_id": "2",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.type": "block",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:                 "ceph.vdo": "0"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             },
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "type": "block",
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:             "vg_name": "ceph_vg2"
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:         }
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]:     ]
Dec 03 02:41:35 compute-0 clever_bhaskara[489399]: }
Dec 03 02:41:35 compute-0 systemd[1]: libpod-a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9.scope: Deactivated successfully.
Dec 03 02:41:35 compute-0 podman[489384]: 2025-12-03 02:41:35.071429605 +0000 UTC m=+1.103395898 container died a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:41:35 compute-0 ceph-mon[192821]: pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec 03 02:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11-merged.mount: Deactivated successfully.
Dec 03 02:41:35 compute-0 podman[489384]: 2025-12-03 02:41:35.170054349 +0000 UTC m=+1.202020622 container remove a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 03 02:41:35 compute-0 systemd[1]: libpod-conmon-a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9.scope: Deactivated successfully.
Dec 03 02:41:35 compute-0 sudo[489284]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:35 compute-0 sudo[489420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:35 compute-0 sudo[489420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:35 compute-0 sudo[489420]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:35 compute-0 sudo[489445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:41:35 compute-0 sudo[489445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:35 compute-0 sudo[489445]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:35 compute-0 sudo[489470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:35 compute-0 sudo[489470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:35 compute-0 sudo[489470]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:35 compute-0 sudo[489495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:41:35 compute-0 sudo[489495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec 03 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.404095812 +0000 UTC m=+0.072007353 container create 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.371827712 +0000 UTC m=+0.039739293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:41:36 compute-0 systemd[1]: Started libpod-conmon-51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5.scope.
Dec 03 02:41:36 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.545677958 +0000 UTC m=+0.213589499 container init 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.563203902 +0000 UTC m=+0.231115433 container start 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.570313383 +0000 UTC m=+0.238224894 container attach 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 03 02:41:36 compute-0 optimistic_snyder[489573]: 167 167
Dec 03 02:41:36 compute-0 systemd[1]: libpod-51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5.scope: Deactivated successfully.
Dec 03 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.578640358 +0000 UTC m=+0.246551899 container died 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4f12fd02adc9c2e39fcf1cd58d081918a9b1baa14654f1435f6fbb09f3a29f-merged.mount: Deactivated successfully.
Dec 03 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.65241045 +0000 UTC m=+0.320321981 container remove 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:41:36 compute-0 systemd[1]: libpod-conmon-51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5.scope: Deactivated successfully.
Dec 03 02:41:36 compute-0 podman[489598]: 2025-12-03 02:41:36.921598966 +0000 UTC m=+0.090978108 container create 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:41:36 compute-0 podman[489598]: 2025-12-03 02:41:36.889387157 +0000 UTC m=+0.058766349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:41:37 compute-0 systemd[1]: Started libpod-conmon-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope.
Dec 03 02:41:37 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:41:37 compute-0 podman[489598]: 2025-12-03 02:41:37.114122359 +0000 UTC m=+0.283501551 container init 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:41:37 compute-0 ceph-mon[192821]: pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec 03 02:41:37 compute-0 podman[489598]: 2025-12-03 02:41:37.154394206 +0000 UTC m=+0.323773348 container start 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:41:37 compute-0 podman[489598]: 2025-12-03 02:41:37.161734193 +0000 UTC m=+0.331113345 container attach 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:41:37 compute-0 nova_compute[351485]: 2025-12-03 02:41:37.225 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 03 02:41:38 compute-0 kind_banzai[489613]: {
Dec 03 02:41:38 compute-0 kind_banzai[489613]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "osd_id": 2,
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "type": "bluestore"
Dec 03 02:41:38 compute-0 kind_banzai[489613]:     },
Dec 03 02:41:38 compute-0 kind_banzai[489613]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "osd_id": 1,
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "type": "bluestore"
Dec 03 02:41:38 compute-0 kind_banzai[489613]:     },
Dec 03 02:41:38 compute-0 kind_banzai[489613]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "osd_id": 0,
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:41:38 compute-0 kind_banzai[489613]:         "type": "bluestore"
Dec 03 02:41:38 compute-0 kind_banzai[489613]:     }
Dec 03 02:41:38 compute-0 kind_banzai[489613]: }
Dec 03 02:41:38 compute-0 systemd[1]: libpod-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope: Deactivated successfully.
Dec 03 02:41:38 compute-0 systemd[1]: libpod-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope: Consumed 1.294s CPU time.
Dec 03 02:41:38 compute-0 podman[489646]: 2025-12-03 02:41:38.539634887 +0000 UTC m=+0.057340299 container died 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc-merged.mount: Deactivated successfully.
Dec 03 02:41:38 compute-0 podman[489646]: 2025-12-03 02:41:38.638873688 +0000 UTC m=+0.156579100 container remove 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:41:38 compute-0 systemd[1]: libpod-conmon-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope: Deactivated successfully.
Dec 03 02:41:38 compute-0 sudo[489495]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:41:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:41:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:41:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 25c92181-1724-4c33-8b90-457e3a6c724b does not exist
Dec 03 02:41:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 89ae2803-1f21-4cb7-b069-70f5c6aa0484 does not exist
Dec 03 02:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:38 compute-0 sudo[489660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:41:38 compute-0 sudo[489660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:38 compute-0 sudo[489660]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:39 compute-0 sudo[489685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:41:39 compute-0 sudo[489685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:41:39 compute-0 sudo[489685]: pam_unix(sudo:session): session closed for user root
Dec 03 02:41:39 compute-0 ceph-mon[192821]: pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 03 02:41:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:41:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:41:39 compute-0 nova_compute[351485]: 2025-12-03 02:41:39.598 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Dec 03 02:41:41 compute-0 ceph-mon[192821]: pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Dec 03 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.613 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.613 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.614 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.614 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.615 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:41:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:41:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174112701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.148 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:41:42 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3174112701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.231 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.627 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.629 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3933MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.629 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.716 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.716 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.742 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:41:43 compute-0 ceph-mon[192821]: pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479137495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.238 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.253 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.272 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.275 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.276 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1479137495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.276 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.277 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.277 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.316 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.317 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.602 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:45 compute-0 ceph-mon[192821]: pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:45 compute-0 nova_compute[351485]: 2025-12-03 02:41:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:46 compute-0 nova_compute[351485]: 2025-12-03 02:41:46.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:41:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594545969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:41:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594545969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:41:47 compute-0 nova_compute[351485]: 2025-12-03 02:41:47.234 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:47 compute-0 ceph-mon[192821]: pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2594545969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:41:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/2594545969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:41:47 compute-0 nova_compute[351485]: 2025-12-03 02:41:47.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:48 compute-0 podman[489756]: 2025-12-03 02:41:48.87365695 +0000 UTC m=+0.111338273 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 03 02:41:48 compute-0 podman[489754]: 2025-12-03 02:41:48.894395066 +0000 UTC m=+0.139963471 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 03 02:41:48 compute-0 podman[489755]: 2025-12-03 02:41:48.927423348 +0000 UTC m=+0.168566788 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 03 02:41:49 compute-0 ceph-mon[192821]: pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:49 compute-0 nova_compute[351485]: 2025-12-03 02:41:49.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:49 compute-0 nova_compute[351485]: 2025-12-03 02:41:49.606 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:51 compute-0 ceph-mon[192821]: pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:52 compute-0 nova_compute[351485]: 2025-12-03 02:41:52.238 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:52 compute-0 nova_compute[351485]: 2025-12-03 02:41:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:52 compute-0 nova_compute[351485]: 2025-12-03 02:41:52.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:41:53 compute-0 ceph-mon[192821]: pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:53 compute-0 nova_compute[351485]: 2025-12-03 02:41:53.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:41:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:54 compute-0 nova_compute[351485]: 2025-12-03 02:41:54.610 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:55 compute-0 ceph-mon[192821]: pgmap v2634: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:56 compute-0 ceph-mon[192821]: pgmap v2635: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:57 compute-0 nova_compute[351485]: 2025-12-03 02:41:57.242 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:57 compute-0 podman[489817]: 2025-12-03 02:41:57.904752223 +0000 UTC m=+0.157919427 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:41:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:41:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:41:59 compute-0 ceph-mon[192821]: pgmap v2636: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:41:59 compute-0 nova_compute[351485]: 2025-12-03 02:41:59.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:41:59.679 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:41:59.680 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:41:59.680 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:41:59 compute-0 podman[158098]: time="2025-12-03T02:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec 03 02:42:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:01 compute-0 ceph-mon[192821]: pgmap v2637: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:42:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:02 compute-0 nova_compute[351485]: 2025-12-03 02:42:02.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:02 compute-0 podman[489839]: 2025-12-03 02:42:02.878744518 +0000 UTC m=+0.106223999 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public)
Dec 03 02:42:02 compute-0 podman[489843]: 2025-12-03 02:42:02.893206476 +0000 UTC m=+0.111978681 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:42:02 compute-0 podman[489838]: 2025-12-03 02:42:02.893607108 +0000 UTC m=+0.124942137 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:42:02 compute-0 podman[489837]: 2025-12-03 02:42:02.916890025 +0000 UTC m=+0.155549581 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec 03 02:42:02 compute-0 podman[489836]: 2025-12-03 02:42:02.925106446 +0000 UTC m=+0.169715280 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 03 02:42:03 compute-0 ceph-mon[192821]: pgmap v2638: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:04 compute-0 nova_compute[351485]: 2025-12-03 02:42:04.617 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:05 compute-0 ceph-mon[192821]: pgmap v2639: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:07 compute-0 ceph-mon[192821]: pgmap v2640: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:07 compute-0 nova_compute[351485]: 2025-12-03 02:42:07.248 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:09 compute-0 ceph-mon[192821]: pgmap v2641: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:09 compute-0 nova_compute[351485]: 2025-12-03 02:42:09.620 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:11 compute-0 ceph-mon[192821]: pgmap v2642: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:12 compute-0 nova_compute[351485]: 2025-12-03 02:42:12.251 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:13 compute-0 ceph-mon[192821]: pgmap v2643: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:14 compute-0 nova_compute[351485]: 2025-12-03 02:42:14.632 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:15 compute-0 ceph-mon[192821]: pgmap v2644: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:17 compute-0 ceph-mon[192821]: pgmap v2645: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:17 compute-0 nova_compute[351485]: 2025-12-03 02:42:17.253 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:19 compute-0 ceph-mon[192821]: pgmap v2646: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:19 compute-0 nova_compute[351485]: 2025-12-03 02:42:19.645 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:19 compute-0 podman[489937]: 2025-12-03 02:42:19.855097095 +0000 UTC m=+0.114752009 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:42:19 compute-0 podman[489939]: 2025-12-03 02:42:19.866510877 +0000 UTC m=+0.108504013 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:42:19 compute-0 podman[489938]: 2025-12-03 02:42:19.89707082 +0000 UTC m=+0.140155176 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:42:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:21 compute-0 ceph-mon[192821]: pgmap v2647: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:22 compute-0 nova_compute[351485]: 2025-12-03 02:42:22.256 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:23 compute-0 ceph-mon[192821]: pgmap v2648: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:24 compute-0 nova_compute[351485]: 2025-12-03 02:42:24.649 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:25 compute-0 ceph-mon[192821]: pgmap v2649: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:26 compute-0 ceph-mon[192821]: pgmap v2650: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:27 compute-0 nova_compute[351485]: 2025-12-03 02:42:27.259 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:42:28
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes']
Dec 03 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:42:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.830427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748830619, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1116, "num_deletes": 250, "total_data_size": 1661997, "memory_usage": 1687128, "flush_reason": "Manual Compaction"}
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748843446, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 976118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53318, "largest_seqno": 54433, "table_properties": {"data_size": 971995, "index_size": 1710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10895, "raw_average_key_size": 20, "raw_value_size": 963036, "raw_average_value_size": 1823, "num_data_blocks": 78, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729635, "oldest_key_time": 1764729635, "file_creation_time": 1764729748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 13428 microseconds, and 7542 cpu microseconds.
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.843871) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 976118 bytes OK
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.843899) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.846315) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.846336) EVENT_LOG_v1 {"time_micros": 1764729748846329, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.846360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1656863, prev total WAL file size 1656863, number of live WAL files 2.
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.848218) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353035' seq:0, type:0; will stop at (end)
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(953KB)], [128(9052KB)]
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748848303, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10246183, "oldest_snapshot_seqno": -1}
Dec 03 02:42:28 compute-0 podman[489996]: 2025-12-03 02:42:28.887596009 +0000 UTC m=+0.135200876 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 6741 keys, 7616915 bytes, temperature: kUnknown
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748908895, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 7616915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7576959, "index_size": 21987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 176737, "raw_average_key_size": 26, "raw_value_size": 7460042, "raw_average_value_size": 1106, "num_data_blocks": 867, "num_entries": 6741, "num_filter_entries": 6741, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.909093) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 7616915 bytes
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.911033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(18.3) write-amplify(7.8) OK, records in: 7202, records dropped: 461 output_compression: NoCompression
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.911052) EVENT_LOG_v1 {"time_micros": 1764729748911043, "job": 78, "event": "compaction_finished", "compaction_time_micros": 60647, "compaction_time_cpu_micros": 37086, "output_level": 6, "num_output_files": 1, "total_output_size": 7616915, "num_input_records": 7202, "num_output_records": 6741, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748911374, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748913061, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.847468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:42:29 compute-0 ceph-mon[192821]: pgmap v2651: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:42:29 compute-0 nova_compute[351485]: 2025-12-03 02:42:29.653 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:29 compute-0 podman[158098]: time="2025-12-03T02:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8214 "" "Go-http-client/1.1"
Dec 03 02:42:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:30 compute-0 sshd-session[490015]: Invalid user admin from 154.113.10.113 port 46632
Dec 03 02:42:30 compute-0 sshd-session[490015]: Received disconnect from 154.113.10.113 port 46632:11: Bye Bye [preauth]
Dec 03 02:42:30 compute-0 sshd-session[490015]: Disconnected from invalid user admin 154.113.10.113 port 46632 [preauth]
Dec 03 02:42:31 compute-0 ceph-mon[192821]: pgmap v2652: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:42:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:32 compute-0 nova_compute[351485]: 2025-12-03 02:42:32.262 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:33 compute-0 ceph-mon[192821]: pgmap v2653: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:33 compute-0 podman[490019]: 2025-12-03 02:42:33.870199175 +0000 UTC m=+0.100795675 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:42:33 compute-0 podman[490026]: 2025-12-03 02:42:33.889237123 +0000 UTC m=+0.109587574 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:42:33 compute-0 podman[490020]: 2025-12-03 02:42:33.895981403 +0000 UTC m=+0.120938894 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 02:42:33 compute-0 podman[490018]: 2025-12-03 02:42:33.897071354 +0000 UTC m=+0.137218524 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec 03 02:42:33 compute-0 podman[490017]: 2025-12-03 02:42:33.939423859 +0000 UTC m=+0.185342862 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 03 02:42:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:34 compute-0 nova_compute[351485]: 2025-12-03 02:42:34.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:34 compute-0 nova_compute[351485]: 2025-12-03 02:42:34.657 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:35 compute-0 ceph-mon[192821]: pgmap v2654: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:37 compute-0 ceph-mon[192821]: pgmap v2655: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:37 compute-0 nova_compute[351485]: 2025-12-03 02:42:37.265 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:39 compute-0 ceph-mon[192821]: pgmap v2656: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:39 compute-0 sudo[490116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:39 compute-0 sudo[490116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:39 compute-0 sudo[490116]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
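
Every pg_autoscaler line above follows the same arithmetic: pg target = capacity ratio × bias × root PG target, rounded to a power of two and clamped at the pool's minimum. The figures are consistent with a root target of 300 (e.g. 3 OSDs at the default mon_target_pg_per_osd=100, matching the ~60 GiB / 64411926528-byte capacity in the effective_target_ratio lines): 7.185749983720779e-06 × 1.0 × 300 = 0.0021557249951162337, exactly the '.mgr' line. A worked sketch, assuming that root target and per-pool minimums inferred from the output (1 for '.mgr', 16 for the CephFS metadata pool, 32 elsewhere); the real module in src/pybind/mgr/pg_autoscaler applies further rules not reproduced here:

    ROOT_PG_TARGET = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (100)

    def pg_target(capacity_ratio, bias, pg_min):
        raw = capacity_ratio * bias * ROOT_PG_TARGET
        p = 1
        while p < raw:          # round up to the next power of two
            p *= 2
        return raw, max(p, pg_min)  # clamp at the pool minimum

    # Values copied from the log lines above.
    print(pg_target(7.185749983720779e-06, 1.0, 1))   # .mgr   -> ~0.00216, 1
    print(pg_target(0.0009191400908380543, 1.0, 32))  # images -> ~0.27574, 32
    print(pg_target(5.087256625643029e-07, 4.0, 16))  # cephfs.cephfs.meta -> ~0.00061, 16
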
Dec 03 02:42:39 compute-0 sudo[490141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:42:39 compute-0 sudo[490141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:39 compute-0 sudo[490141]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:39 compute-0 sudo[490166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:39 compute-0 sudo[490166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:39 compute-0 sudo[490166]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:39 compute-0 sudo[490191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:42:39 compute-0 sudo[490191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:39 compute-0 nova_compute[351485]: 2025-12-03 02:42:39.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:40 compute-0 sudo[490191]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:42:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 700d9fe8-8cb3-4633-bcb0-1c5e954366d1 does not exist
Dec 03 02:42:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b3ad8527-ef30-4ef4-8379-af37c8c1102c does not exist
Dec 03 02:42:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 713bebe8-5390-41bc-a7f9-35ee3694797c does not exist
Dec 03 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
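
Each handle_command/audit pair above is a JSON mon command ({"prefix": ...}) dispatched by the mgr to the leader mon; the same interface is exposed to any client through librados. A minimal sketch with the python-rados bindings, assuming a reachable cluster and an admin keyring at the usual paths (any identity with monitor read caps works for these read-only commands):

    import json
    import rados

    # Assumed paths; adjust conffile/keyring for the environment at hand.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"))
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"], "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, json.loads(outbuf) if ret == 0 else outs)
    finally:
        cluster.shutdown()
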
Dec 03 02:42:40 compute-0 sudo[490246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:40 compute-0 sudo[490246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:40 compute-0 sudo[490246]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:40 compute-0 sudo[490271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:42:40 compute-0 sudo[490271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:40 compute-0 sudo[490271]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:40 compute-0 sudo[490296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:40 compute-0 sudo[490296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:40 compute-0 sudo[490296]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:41 compute-0 sudo[490321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:42:41 compute-0 sudo[490321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:41 compute-0 ceph-mon[192821]: pgmap v2657: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.611 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
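
Here nova-compute shells out for Ceph capacity instead of using librados; the 'Running cmd (subprocess)' line and the matching 'returned: 0 in 0.504s' line below are both emitted by oslo.concurrency's processutils. A minimal sketch of that call pattern, reusing the client id and conf path from the logged command:

    import json
    from oslo_concurrency import processutils

    # Same invocation as the log line above; execute() raises
    # ProcessExecutionError on a non-zero exit by default.
    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
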
Dec 03 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.615332451 +0000 UTC m=+0.087496280 container create ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 03 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.588129813 +0000 UTC m=+0.060293622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:42:41 compute-0 systemd[1]: Started libpod-conmon-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope.
Dec 03 02:42:41 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.793082837 +0000 UTC m=+0.265246646 container init ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 03 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.807182645 +0000 UTC m=+0.279346444 container start ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 03 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.813670788 +0000 UTC m=+0.285834607 container attach ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 03 02:42:41 compute-0 festive_murdock[490400]: 167 167
Dec 03 02:42:41 compute-0 systemd[1]: libpod-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope: Deactivated successfully.
Dec 03 02:42:41 compute-0 conmon[490400]: conmon ce69163649fbca7cfcdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope/container/memory.events
Dec 03 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.821984883 +0000 UTC m=+0.294148712 container died ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fbb1fb5092a9c4b6f1798b2d0a963437d2fd5814972b2314a05e0788f16c854-merged.mount: Deactivated successfully.
Dec 03 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.912728174 +0000 UTC m=+0.384891973 container remove ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:42:41 compute-0 systemd[1]: libpod-conmon-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope: Deactivated successfully.
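
The create → init → start → attach → died → remove sequence above, bracketed by the conmon/libcrun scope lines from systemd, is one short-lived cephadm helper container; the random name 'festive_murdock' marks it as unnamed, and its only output ('167 167') looks like a uid/gid probe (167 is the ceph user in these images). The same one-shot lifecycle can be reproduced with podman run --rm; a sketch, assuming the image digest from the log is already present locally and that stat'ing /var/lib/ceph is a stand-in for whatever probe cephadm actually ran:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm removes the container after it exits, producing the same
    # 'container died' / 'container remove' pair in the journal.
    res = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=False)
    print(res.returncode, res.stdout.strip())
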
Dec 03 02:42:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:42:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818334107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.116 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:42:42 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3818334107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.201484032 +0000 UTC m=+0.112785364 container create 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec 03 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.16385437 +0000 UTC m=+0.075155842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:42:42 compute-0 systemd[1]: Started libpod-conmon-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope.
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.268 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:42 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
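
The xfs 'supports timestamps until 2038 (0x7fffffff)' messages appear whenever a filesystem made without the bigtime feature is (re)mounted for these overlay bind mounts; 0x7fffffff is the largest 32-bit signed time_t. Decoding it:

    from datetime import datetime, timezone

    # 0x7fffffff == 2147483647 seconds after the Unix epoch.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
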
Dec 03 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.329092933 +0000 UTC m=+0.240394335 container init 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.35058109 +0000 UTC m=+0.261882442 container start 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.355268692 +0000 UTC m=+0.266570064 container attach 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.529 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.531 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3923MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.532 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.532 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.682 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.683 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.703 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:42:43 compute-0 ceph-mon[192821]: pgmap v2658: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:42:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771843871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.298 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.312 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.349 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.352 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.353 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
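
The inventory payload reported above is what placement uses to admit workloads on this node: per the placement capacity formula, usable = (total − reserved) × allocation_ratio for each resource class, i.e. (8 − 0) × 4.0 = 32 schedulable VCPUs, (7679 − 512) × 1.0 = 7167 MB of RAM, and (59 − 1) × 0.9 = 52.2 GB of disk. A quick check, copying the inventory dict from the log line:

    inv = {"VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
           "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
           "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9}}

    for rc, v in inv.items():
        # capacity available for consumption, as placement computes it
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
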
Dec 03 02:42:43 compute-0 boring_solomon[490460]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:42:43 compute-0 boring_solomon[490460]: --> relative data size: 1.0
Dec 03 02:42:43 compute-0 boring_solomon[490460]: --> All data devices are unavailable
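
'All data devices are unavailable' is ceph-volume declining to act: the three LVs passed on the cephadm command line above are already prepared OSD members, so the lvm batch is an idempotent no-op on this rerun. The same outcome can be previewed without side effects via batch's report mode; a sketch, assuming the logged LV paths and that ceph-volume is on PATH (inside the cephadm container it is):

    import json
    import subprocess

    devs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
            "/dev/ceph_vg2/ceph_lv2"]

    # --report only computes the plan; nothing is created or zapped.
    res = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", "--report",
         "--format", "json", *devs],
        capture_output=True, text=True, check=False)
    print(res.returncode)
    print(json.loads(res.stdout) if res.stdout.strip() else res.stderr)
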
Dec 03 02:42:43 compute-0 systemd[1]: libpod-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope: Deactivated successfully.
Dec 03 02:42:43 compute-0 systemd[1]: libpod-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope: Consumed 1.279s CPU time.
Dec 03 02:42:43 compute-0 podman[490511]: 2025-12-03 02:42:43.766719622 +0000 UTC m=+0.048241502 container died 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612-merged.mount: Deactivated successfully.
Dec 03 02:42:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:43 compute-0 podman[490511]: 2025-12-03 02:42:43.882893631 +0000 UTC m=+0.164415461 container remove 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:42:43 compute-0 systemd[1]: libpod-conmon-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope: Deactivated successfully.
Dec 03 02:42:43 compute-0 sudo[490321]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:44 compute-0 sudo[490526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:44 compute-0 sudo[490526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:44 compute-0 sudo[490526]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:44 compute-0 sudo[490551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:42:44 compute-0 sudo[490551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:44 compute-0 sudo[490551]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2771843871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:42:44 compute-0 sudo[490576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:44 compute-0 sudo[490576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:44 compute-0 sudo[490576]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.354 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.355 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.355 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.371 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:42:44 compute-0 sudo[490601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:42:44 compute-0 sudo[490601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 03 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.057134478 +0000 UTC m=+0.086094321 container create 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.020805062 +0000 UTC m=+0.049764965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:42:45 compute-0 systemd[1]: Started libpod-conmon-235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b.scope.
Dec 03 02:42:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.198138557 +0000 UTC m=+0.227098470 container init 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.208609522 +0000 UTC m=+0.237569335 container start 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.213769388 +0000 UTC m=+0.242729251 container attach 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:42:45 compute-0 magical_boyd[490680]: 167 167
Dec 03 02:42:45 compute-0 systemd[1]: libpod-235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b.scope: Deactivated successfully.
Dec 03 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.216581277 +0000 UTC m=+0.245541090 container died 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 03 02:42:45 compute-0 ceph-mon[192821]: pgmap v2659: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb206d9fe45f4069c29135de71bc6a798d77939322bd7d54f3e6aee2e9cb7e7-merged.mount: Deactivated successfully.
Dec 03 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.284170355 +0000 UTC m=+0.313130178 container remove 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:42:45 compute-0 systemd[1]: libpod-conmon-235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b.scope: Deactivated successfully.
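The short-lived probe container above (magical_boyd) runs, prints "167 167", and is torn down within milliseconds: this is most likely cephadm discovering which uid/gid it should own daemon files as inside the Ceph image, and 167 is the uid/gid that Ceph packaging on RHEL/CentOS-derived systems reserves for the ceph user. A minimal check of that reservation on such a host, assuming a local ceph user exists:

    import grp
    import pwd

    # Ceph packaging on RHEL/CentOS-family systems reserves uid/gid 167
    # for the "ceph" user, matching the "167 167" pair printed above.
    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)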
Dec 03 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.564706241 +0000 UTC m=+0.083682502 container create 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 03 02:42:45 compute-0 nova_compute[351485]: 2025-12-03 02:42:45.595 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.536890026 +0000 UTC m=+0.055866297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:42:45 compute-0 systemd[1]: Started libpod-conmon-0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50.scope.
Dec 03 02:42:45 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
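The four xfs notices are the kernel's y2038 warning for mounts whose on-disk inodes carry 32-bit timestamps: the largest representable value is 0x7fffffff seconds after the Unix epoch, which is what "until 2038" refers to. Converting that limit directly:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest value a signed 32-bit time_t can hold;
    # it corresponds to the "until 2038 (0x7fffffff)" in the message.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00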
Dec 03 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.745678228 +0000 UTC m=+0.264654539 container init 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.772795514 +0000 UTC m=+0.291771765 container start 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 03 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.779946485 +0000 UTC m=+0.298922796 container attach 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 03 02:42:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:46 compute-0 nova_compute[351485]: 2025-12-03 02:42:46.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:46 compute-0 admiring_curie[490718]: {
Dec 03 02:42:46 compute-0 admiring_curie[490718]:     "0": [
Dec 03 02:42:46 compute-0 admiring_curie[490718]:         {
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "devices": [
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "/dev/loop3"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             ],
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_name": "ceph_lv0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_size": "21470642176",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "name": "ceph_lv0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "tags": {
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cluster_name": "ceph",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.crush_device_class": "",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.encrypted": "0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osd_id": "0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.type": "block",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.vdo": "0"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             },
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "type": "block",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "vg_name": "ceph_vg0"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:         }
Dec 03 02:42:46 compute-0 admiring_curie[490718]:     ],
Dec 03 02:42:46 compute-0 admiring_curie[490718]:     "1": [
Dec 03 02:42:46 compute-0 admiring_curie[490718]:         {
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "devices": [
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "/dev/loop4"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             ],
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_name": "ceph_lv1",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_size": "21470642176",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "name": "ceph_lv1",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "tags": {
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cluster_name": "ceph",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.crush_device_class": "",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.encrypted": "0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osd_id": "1",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.type": "block",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.vdo": "0"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             },
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "type": "block",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "vg_name": "ceph_vg1"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:         }
Dec 03 02:42:46 compute-0 admiring_curie[490718]:     ],
Dec 03 02:42:46 compute-0 admiring_curie[490718]:     "2": [
Dec 03 02:42:46 compute-0 admiring_curie[490718]:         {
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "devices": [
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "/dev/loop5"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             ],
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_name": "ceph_lv2",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_size": "21470642176",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "name": "ceph_lv2",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "tags": {
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.cluster_name": "ceph",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.crush_device_class": "",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.encrypted": "0",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osd_id": "2",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.type": "block",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:                 "ceph.vdo": "0"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             },
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "type": "block",
Dec 03 02:42:46 compute-0 admiring_curie[490718]:             "vg_name": "ceph_vg2"
Dec 03 02:42:46 compute-0 admiring_curie[490718]:         }
Dec 03 02:42:46 compute-0 admiring_curie[490718]:     ]
Dec 03 02:42:46 compute-0 admiring_curie[490718]: }
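The JSON block printed by admiring_curie is the output of ceph-volume lvm list --format json: a mapping of OSD id to the logical volumes backing it, with the LVM tags repeating the cluster fsid, OSD fsid, and device class. A minimal sketch reducing it to an osd-id -> device summary, assuming the output has been captured to a hypothetical file lvm_list.json:

    import json

    # Parse `ceph-volume lvm list --format json` (as logged above) into
    # one line per OSD: id, LV path, backing devices, osd fsid.
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])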
Dec 03 02:42:46 compute-0 systemd[1]: libpod-0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50.scope: Deactivated successfully.
Dec 03 02:42:46 compute-0 podman[490702]: 2025-12-03 02:42:46.72729279 +0000 UTC m=+1.246269041 container died 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639-merged.mount: Deactivated successfully.
Dec 03 02:42:46 compute-0 podman[490702]: 2025-12-03 02:42:46.845946998 +0000 UTC m=+1.364923249 container remove 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 03 02:42:46 compute-0 systemd[1]: libpod-conmon-0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50.scope: Deactivated successfully.
Dec 03 02:42:46 compute-0 sudo[490601]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:47 compute-0 sudo[490740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:47 compute-0 sudo[490740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:42:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3200321659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:42:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3200321659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:42:47 compute-0 sudo[490740]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:47 compute-0 sudo[490765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:42:47 compute-0 sudo[490765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:47 compute-0 sudo[490765]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:47 compute-0 ceph-mon[192821]: pgmap v2660: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3200321659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:42:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3200321659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
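The two audited mon commands come from client.openstack on 192.168.122.10: a cluster-wide df and a quota read on the volumes pool, both requested as JSON — evidently the periodic capacity poll the OpenStack storage services run against the cluster. The same pair can be reproduced from any node holding a client keyring; a sketch shelling out to the ceph CLI, assuming ceph.conf and the keyring are in their default locations:

    import json
    import subprocess

    def mon_command(*args):
        # Mirror the {"prefix": ..., "format": "json"} commands in the
        # log by asking the ceph CLI for JSON output and parsing it.
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = mon_command("df")
    quota = mon_command("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota)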
Dec 03 02:42:47 compute-0 nova_compute[351485]: 2025-12-03 02:42:47.271 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:47 compute-0 sudo[490790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:47 compute-0 sudo[490790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:47 compute-0 sudo[490790]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:47 compute-0 sudo[490815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:42:47 compute-0 sudo[490815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.065486232 +0000 UTC m=+0.104980173 container create 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 03 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.016009576 +0000 UTC m=+0.055503587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:42:48 compute-0 systemd[1]: Started libpod-conmon-5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539.scope.
Dec 03 02:42:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.202698735 +0000 UTC m=+0.242192696 container init 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.222332709 +0000 UTC m=+0.261826640 container start 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.229621714 +0000 UTC m=+0.269115705 container attach 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:42:48 compute-0 sleepy_colden[490894]: 167 167
Dec 03 02:42:48 compute-0 systemd[1]: libpod-5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539.scope: Deactivated successfully.
Dec 03 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.235254963 +0000 UTC m=+0.274748894 container died 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 03 02:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e5035cf03ee1529707d1d7af1ef9fadbe1e1f962aa926f3081958c8266ab1f-merged.mount: Deactivated successfully.
Dec 03 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.320429477 +0000 UTC m=+0.359923418 container remove 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 03 02:42:48 compute-0 systemd[1]: libpod-conmon-5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539.scope: Deactivated successfully.
Dec 03 02:42:48 compute-0 nova_compute[351485]: 2025-12-03 02:42:48.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.647306961 +0000 UTC m=+0.096249027 container create c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.61607708 +0000 UTC m=+0.065019176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:42:48 compute-0 systemd[1]: Started libpod-conmon-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope.
Dec 03 02:42:48 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:42:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.834499834 +0000 UTC m=+0.283441950 container init c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.8573813 +0000 UTC m=+0.306323356 container start c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec 03 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.864708897 +0000 UTC m=+0.313651013 container attach c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:42:49 compute-0 ceph-mon[192821]: pgmap v2661: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:49 compute-0 nova_compute[351485]: 2025-12-03 02:42:49.666 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]: {
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "osd_id": 2,
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "type": "bluestore"
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:     },
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "osd_id": 1,
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "type": "bluestore"
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:     },
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "osd_id": 0,
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:         "type": "bluestore"
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]:     }
Dec 03 02:42:50 compute-0 eager_heyrovsky[490933]: }
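Where the earlier report was keyed by OSD id, this one from eager_heyrovsky is ceph-volume raw list --format json (the exact command is visible in the cephadm sudo line above): keyed by OSD uuid, and resolving each OSD to its device-mapper path and bluestore type. Inverting it back to id order, again assuming a hypothetical capture file raw_list.json:

    import json

    # Parse `ceph-volume raw list --format json` (as logged above) and
    # print the OSDs in id order with their dm devices.
    with open("raw_list.json") as fh:
        report = json.load(fh)

    for osd_uuid, osd in sorted(report.items(), key=lambda kv: kv[1]["osd_id"]):
        print(osd["osd_id"], osd["device"], osd["type"], osd_uuid)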
Dec 03 02:42:50 compute-0 systemd[1]: libpod-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope: Deactivated successfully.
Dec 03 02:42:50 compute-0 podman[490918]: 2025-12-03 02:42:50.148985619 +0000 UTC m=+1.597927685 container died c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 03 02:42:50 compute-0 systemd[1]: libpod-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope: Consumed 1.282s CPU time.
Dec 03 02:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47-merged.mount: Deactivated successfully.
Dec 03 02:42:50 compute-0 podman[490918]: 2025-12-03 02:42:50.256697978 +0000 UTC m=+1.705640014 container remove c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:42:50 compute-0 systemd[1]: libpod-conmon-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope: Deactivated successfully.
Dec 03 02:42:50 compute-0 sudo[490815]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:42:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:42:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:42:50 compute-0 podman[490978]: 2025-12-03 02:42:50.322815714 +0000 UTC m=+0.119416841 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 03 02:42:50 compute-0 podman[490970]: 2025-12-03 02:42:50.323985287 +0000 UTC m=+0.113503414 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 03 02:42:50 compute-0 podman[490979]: 2025-12-03 02:42:50.325099629 +0000 UTC m=+0.115804879 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
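The three health_status events are podman's scheduled healthchecks for the edpm-managed containers, each reporting healthy with a failing streak of 0. The same status can be read back with podman inspect; a sketch that tolerates the field having been named either Health or Healthcheck across podman versions:

    import json
    import subprocess

    def health_status(name):
        # `podman inspect` emits a JSON array; the health state has
        # appeared under State.Health or State.Healthcheck depending on
        # the podman version, so try both.
        raw = subprocess.check_output(["podman", "inspect", name])
        state = json.loads(raw)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("ceilometer_agent_compute", "ovn_metadata_agent", "podman_exporter"):
        print(name, health_status(name))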
Dec 03 02:42:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:42:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d7409c70-f8b7-4d40-a0b0-4580e9765683 does not exist
Dec 03 02:42:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bcf0ab97-a8c2-40b2-9c8f-b5b4fdb73957 does not exist
Dec 03 02:42:50 compute-0 sudo[491036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:42:50 compute-0 sudo[491036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:50 compute-0 sudo[491036]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:50 compute-0 sudo[491061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:42:50 compute-0 sudo[491061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:42:50 compute-0 sudo[491061]: pam_unix(sudo:session): session closed for user root
Dec 03 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 03 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.599 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 03 02:42:51 compute-0 ceph-mon[192821]: pgmap v2662: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:42:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:42:51 compute-0 nova_compute[351485]: 2025-12-03 02:42:51.594 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:52 compute-0 nova_compute[351485]: 2025-12-03 02:42:52.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:52 compute-0 ceph-mon[192821]: pgmap v2663: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:52 compute-0 sshd-session[491086]: Invalid user esuser from 185.65.202.184 port 49126
Dec 03 02:42:52 compute-0 sshd-session[491086]: Received disconnect from 185.65.202.184 port 49126:11: Bye Bye [preauth]
Dec 03 02:42:52 compute-0 sshd-session[491086]: Disconnected from invalid user esuser 185.65.202.184 port 49126 [preauth]
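Interleaved with the deployment traffic, sshd records a probe for a non-existent esuser account from 185.65.202.184, disconnected before authentication; on an Internet-facing node such attempts arrive continuously. A sketch tallying them per source address from a saved journal export (hypothetical file journal.log):

    import re
    from collections import Counter

    # Count "Invalid user NAME from ADDR port N" events per source IP.
    pattern = re.compile(r"Invalid user \S+ from (\S+) port \d+")

    counts = Counter()
    with open("journal.log") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                counts[m.group(1)] += 1

    for addr, n in counts.most_common(10):
        print(f"{n:6d}  {addr}")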
Dec 03 02:42:53 compute-0 nova_compute[351485]: 2025-12-03 02:42:53.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:53 compute-0 nova_compute[351485]: 2025-12-03 02:42:53.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:42:53 compute-0 nova_compute[351485]: 2025-12-03 02:42:53.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:54 compute-0 nova_compute[351485]: 2025-12-03 02:42:54.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:55 compute-0 ceph-mon[192821]: pgmap v2664: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:57 compute-0 ceph-mon[192821]: pgmap v2665: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:57 compute-0 nova_compute[351485]: 2025-12-03 02:42:57.279 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:42:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:42:59 compute-0 ceph-mon[192821]: pgmap v2666: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:42:59 compute-0 nova_compute[351485]: 2025-12-03 02:42:59.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:42:59.680 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:42:59.681 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:42:59.681 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:42:59 compute-0 podman[158098]: time="2025-12-03T02:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8220 "" "Go-http-client/1.1"
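The two GET requests show the podman system service answering podman_exporter over the API socket (the exporter's config above sets CONTAINER_HOST=unix:///run/podman/podman.sock): one call lists all containers, the other samples stats. The same libpod endpoint can be queried with nothing but the standard library; a sketch over the unix socket, assuming the same socket path and API version:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection routed over an AF_UNIX socket."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))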
Dec 03 02:42:59 compute-0 podman[491088]: 2025-12-03 02:42:59.887198198 +0000 UTC m=+0.131856002 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:43:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:01 compute-0 ceph-mon[192821]: pgmap v2667: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
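The exporter errors above share one cause: openstack_network_exporter drives OVS/OVN daemons through their appctl control sockets, and on this node ovn-northd does not run at all, while no control socket was found for the ovsdb-server or for a userspace (dpif-netdev) datapath at the paths the exporter checks. A quick hedged check of which control sockets actually exist on the host (standard OVS/OVN run directories assumed):

    # List the appctl control sockets present; the ovn-northd socket is
    # expectedly absent on a compute-only node like this one.
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")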
Dec 03 02:43:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:02 compute-0 nova_compute[351485]: 2025-12-03 02:43:02.282 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
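The recurring "[POLLIN] on fd 24 __log_wakeup" lines are the ovs python library's poll loop (used by ovsdbapp inside nova_compute) noting that the OVSDB connection's fd became readable. A minimal sketch of that loop, using the same ovs.poller module the path in the log points at:

    # Minimal ovs.poller loop: block until a watched fd is readable or a
    # timer fires, which is what produces the [POLLIN] wakeup lines.
    import socket
    from ovs import poller

    a, b = socket.socketpair()
    p = poller.Poller()
    p.fd_wait(a.fileno(), poller.POLLIN)  # watch the fd, as the OVSDB IDL does
    p.timer_wait(5000)                    # wake after 5s regardless
    b.send(b"x")                          # make the fd readable
    p.block()                             # returns once POLLIN fires
    print("woke up")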
Dec 03 02:43:03 compute-0 ceph-mon[192821]: pgmap v2668: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
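The mon "_set_new_cache_sizes" line is the monitor's cache autotuner re-splitting its memory budget (about 0.95 GiB here) between the incremental-map, full-map, and RocksDB caches. The budget it tunes against is mon_memory_target, which can be read back from the cluster configuration; a hedged sketch (assumes a usable ceph CLI and keyring on this node):

    # Read the memory target the mon cache autotuner sizes against.
    import subprocess

    out = subprocess.run(
        ["ceph", "config", "get", "mon", "mon_memory_target"],
        capture_output=True, text=True, check=True,
    )
    print("mon_memory_target =", out.stdout.strip())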
Dec 03 02:43:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:04 compute-0 nova_compute[351485]: 2025-12-03 02:43:04.677 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:04 compute-0 podman[491109]: 2025-12-03 02:43:04.882171854 +0000 UTC m=+0.125357178 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 03 02:43:04 compute-0 podman[491112]: 2025-12-03 02:43:04.887189256 +0000 UTC m=+0.109790879 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:43:04 compute-0 podman[491111]: 2025-12-03 02:43:04.9025951 +0000 UTC m=+0.133565759 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 03 02:43:04 compute-0 podman[491110]: 2025-12-03 02:43:04.907308313 +0000 UTC m=+0.142421049 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
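The node_exporter container above publishes on port 9100 with a deliberately trimmed collector set: the systemd collector is limited to edpm/ovs/virt/rsyslog units and a long list of default collectors is disabled. A hedged probe of the endpoint (plain HTTP shown; the mounted web config and TLS material suggest the scrape endpoint may require HTTPS with the deployment's CA bundle):

    # Hypothetical probe of the node_exporter endpoint configured above;
    # switch to https:// plus an ssl context if web.config.file enforces TLS.
    import urllib.request

    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                print(line)  # systemd collector output for the included units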
Dec 03 02:43:04 compute-0 podman[491108]: 2025-12-03 02:43:04.907633112 +0000 UTC m=+0.153546433 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 03 02:43:05 compute-0 ceph-mon[192821]: pgmap v2669: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:07 compute-0 ceph-mon[192821]: pgmap v2670: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:07 compute-0 nova_compute[351485]: 2025-12-03 02:43:07.286 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:07 compute-0 sshd-session[491211]: Invalid user svn from 186.31.95.163 port 57480
Dec 03 02:43:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:08 compute-0 sshd-session[491211]: Received disconnect from 186.31.95.163 port 57480:11: Bye Bye [preauth]
Dec 03 02:43:08 compute-0 sshd-session[491211]: Disconnected from invalid user svn 186.31.95.163 port 57480 [preauth]
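The three sshd-session lines are a routine internet probe: an unknown user ("svn"), then a pre-auth disconnect. A hedged sketch of pulling such attempts out of the journal with the python3-systemd bindings (on this host the identifier is sshd-session; older OpenSSH releases log as sshd):

    # Scan the journal for pre-auth "Invalid user" probes like the one above.
    from systemd import journal

    reader = journal.Reader()
    reader.add_match(SYSLOG_IDENTIFIER="sshd-session")  # "sshd" on older hosts
    for entry in reader:
        msg = entry.get("MESSAGE", "")
        if msg.startswith("Invalid user"):
            print(entry["__REALTIME_TIMESTAMP"], msg)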
Dec 03 02:43:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:09 compute-0 ceph-mon[192821]: pgmap v2671: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:09 compute-0 nova_compute[351485]: 2025-12-03 02:43:09.680 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:11 compute-0 ceph-mon[192821]: pgmap v2672: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:12 compute-0 nova_compute[351485]: 2025-12-03 02:43:12.289 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:13 compute-0 ceph-mon[192821]: pgmap v2673: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:14 compute-0 nova_compute[351485]: 2025-12-03 02:43:14.684 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:15 compute-0 ceph-mon[192821]: pgmap v2674: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:17 compute-0 ceph-mon[192821]: pgmap v2675: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:17 compute-0 nova_compute[351485]: 2025-12-03 02:43:17.293 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:19 compute-0 ceph-mon[192821]: pgmap v2676: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.519 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.522 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
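The two manager lines above spell out the constraint for this cycle: more pollsters than worker threads, and only one thread for the [pollsters] source, so every pollster below is queued onto a single-thread executor and runs serially. A minimal stdlib illustration of that serialization (pollster names are placeholders):

    # One worker, many tasks: submissions queue and complete in order,
    # which is why the manager warns the cycle may take longer.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    with ThreadPoolExecutor(max_workers=1) as pool:
        futures = [pool.submit(poll, n)
                   for n in ("memory.usage", "cpu", "power.state")]
        for f in futures:
            print(f.result())  # arrives ~0.1s apart, in submission order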
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.534 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
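The ceilometer lines above make up one complete polling interval: the agent runs the configured discovery (local_instances) for each pollster, skips any pollster whose discovery returns nothing, and logs "Finished processing" for every pollster either way. A minimal sketch of that control flow, with hypothetical pollster/discover objects standing in for the ceilometer classes named in the log:

    # Illustrative only -- not ceilometer's actual AgentManager code.
    def run_polling_interval(pollsters, discover):
        cache = {}  # discovery results are reused within one interval
        for pollster in pollsters:
            method = pollster.discovery_method       # e.g. "local_instances"
            if method not in cache:
                cache[method] = discover(method)     # "Executing discovery process ..."
            resources = cache[method]
            if not resources:
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
            else:
                pollster.get_samples(resources)
            print(f"Finished processing pollster [{pollster.name}].")

On an idle compute node discovery comes back empty, so skip lines paired with finish lines are the expected steady state.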
Dec 03 02:43:19 compute-0 nova_compute[351485]: 2025-12-03 02:43:19.688 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:20 compute-0 podman[491215]: 2025-12-03 02:43:20.890400461 +0000 UTC m=+0.130274238 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 03 02:43:20 compute-0 podman[491217]: 2025-12-03 02:43:20.896322098 +0000 UTC m=+0.122159669 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:43:20 compute-0 podman[491216]: 2025-12-03 02:43:20.913893434 +0000 UTC m=+0.150990022 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
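Each podman health_status event above is emitted when the container's configured healthcheck fires. As a rough illustration, the 'healthcheck' stanza inside config_data maps onto podman-run(1) flags along these lines (the exact command edpm_ansible generates is an assumption here):

    # Hypothetical translation of a config_data healthcheck stanza.
    config = {'test': '/openstack/healthcheck',
              'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent'}
    cmd = ['podman', 'run', '--detach',
           '--health-cmd', config['test'],                     # command run inside the container
           '--volume', f"{config['mount']}:/openstack:ro,z"]   # healthcheck script mounted at /openstack
    print(' '.join(cmd))

health_failing_streak counts consecutive failed runs, so 0 alongside health_status=healthy is the expected steady state.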
Dec 03 02:43:21 compute-0 ceph-mon[192821]: pgmap v2677: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:22 compute-0 nova_compute[351485]: 2025-12-03 02:43:22.296 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:23 compute-0 ceph-mon[192821]: pgmap v2678: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:24 compute-0 nova_compute[351485]: 2025-12-03 02:43:24.690 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:25 compute-0 ceph-mon[192821]: pgmap v2679: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:27 compute-0 nova_compute[351485]: 2025-12-03 02:43:27.299 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:27 compute-0 ceph-mon[192821]: pgmap v2680: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:27 compute-0 nova_compute[351485]: 2025-12-03 02:43:27.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:28 compute-0 ceph-mon[192821]: pgmap v2681: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:43:28
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta']
Dec 03 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:43:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:43:29 compute-0 nova_compute[351485]: 2025-12-03 02:43:29.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:29 compute-0 podman[158098]: time="2025-12-03T02:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:43:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:43:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8204 "" "Go-http-client/1.1"
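These GET requests are podman_exporter scraping podman's libpod REST API over the unix socket (/run/podman/podman.sock, the path bind-mounted into the exporter container elsewhere in the log). A self-contained sketch of the same containers/json query using only the standard library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket, enough to talk to the libpod API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr.get("Names", ["?"])[0], ctr.get("State"))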
Dec 03 02:43:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:30 compute-0 podman[491270]: 2025-12-03 02:43:30.870583781 +0000 UTC m=+0.120860141 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 03 02:43:31 compute-0 ceph-mon[192821]: pgmap v2682: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
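The appctl errors above are expected on this node: openstack_network_exporter reaches OVS/OVN daemons through their unix control sockets, and a compute node runs neither ovn-northd nor a userspace (dpif-netdev) datapath. A small sketch of the socket lookup that fails, using the rundirs mounted into the exporter container (/run/openvswitch and /run/ovn; the glob pattern is an assumption based on the usual <daemon>.<pid>.ctl naming):

    import glob
    import os

    def find_ctl(rundir, daemon):
        # OVS-style daemons create <daemon>.<pid>.ctl in their rundir.
        matches = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
        return matches[0] if matches else None

    for daemon, rundir in [("ovsdb-server", "/run/openvswitch"),
                           ("ovn-northd", "/run/ovn")]:
        sock = find_ctl(rundir, daemon)
        print(daemon, "->", sock or "no control socket files found")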
Dec 03 02:43:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:32 compute-0 nova_compute[351485]: 2025-12-03 02:43:32.301 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:33 compute-0 ceph-mon[192821]: pgmap v2683: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:34 compute-0 nova_compute[351485]: 2025-12-03 02:43:34.695 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:34 compute-0 sshd-session[491290]: Received disconnect from 147.50.227.142 port 40220:11: Bye Bye [preauth]
Dec 03 02:43:34 compute-0 sshd-session[491290]: Disconnected from authenticating user root 147.50.227.142 port 40220 [preauth]
Dec 03 02:43:35 compute-0 ceph-mon[192821]: pgmap v2684: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:35 compute-0 podman[491295]: 2025-12-03 02:43:35.874757909 +0000 UTC m=+0.111573320 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64)
Dec 03 02:43:35 compute-0 podman[491301]: 2025-12-03 02:43:35.882978011 +0000 UTC m=+0.100558059 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 03 02:43:35 compute-0 podman[491293]: 2025-12-03 02:43:35.89428164 +0000 UTC m=+0.134348433 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Dec 03 02:43:35 compute-0 podman[491294]: 2025-12-03 02:43:35.899688842 +0000 UTC m=+0.132800778 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 03 02:43:35 compute-0 podman[491292]: 2025-12-03 02:43:35.924494002 +0000 UTC m=+0.172578411 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 03 02:43:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:36 compute-0 nova_compute[351485]: 2025-12-03 02:43:36.594 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:37 compute-0 ceph-mon[192821]: pgmap v2685: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:37 compute-0 nova_compute[351485]: 2025-12-03 02:43:37.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:39 compute-0 ceph-mon[192821]: pgmap v2686: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
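Every pg target above is reproducible from the same formula: capacity ratio × bias × (mon_target_pg_per_osd × number of OSDs). The constant factor of 300 in each line implies the default mon_target_pg_per_osd of 100 and 3 OSDs (an inference; the final "quantized to" step, which snaps the raw target to pg_num bounds and the current value, is not reproduced here):

    TARGET_PG_PER_OSD = 100   # ceph default mon_target_pg_per_osd (assumed)
    NUM_OSDS = 3              # inferred from the factor of 300 in the log

    # (pool, capacity ratio, bias) copied from the pg_autoscaler lines above.
    pools = [
        ('.mgr',               7.185749983720779e-06, 1.0),
        ('images',             0.0009191400908380543, 1.0),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0),
    ]
    for name, ratio, bias in pools:
        print(name, ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS)
    # -> 0.0021557..., 0.27574..., 0.00061047... -- the pg targets logged above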
Dec 03 02:43:39 compute-0 nova_compute[351485]: 2025-12-03 02:43:39.699 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:41 compute-0 ceph-mon[192821]: pgmap v2687: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.307 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
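The Acquiring/acquired/released triplet above is oslo.concurrency's decorator-based locking: resource-tracker methods serialize on one in-process lock named "compute_resources", and the library logs each transition with the wait and hold times. A minimal sketch of the pattern (illustrative, not nova's literal code; fair=True is an assumption about how the tracker requests FIFO ordering):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources', fair=True)
    def clean_compute_node_cache(compute_nodes_in_db):
        # Body runs only while holding the lock; lockutils emits the
        # "Acquiring"/"acquired"/"released" DEBUG lines seen above.
        pass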
Dec 03 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.623 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:43:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:43:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097921568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.147 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
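The "Running cmd" / "CMD ... returned" pair comes from oslo.concurrency's subprocess wrapper, which logs the command before spawning it and the exit code plus wall time afterwards. The same call, with the arguments exactly as logged:

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on a
    # non-zero exit code unless told otherwise.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

The libvirt driver parses the JSON that comes back to derive the RBD-backed free_disk figure reported in the resource view below.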
Dec 03 02:43:43 compute-0 ceph-mon[192821]: pgmap v2688: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:43 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4097921568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.751 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.754 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3954MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.754 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.755 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:43:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.854 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.855 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.884 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:43:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:43:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117918483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.497 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.506 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.527 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
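That inventory dict is what placement uses to bound scheduling on this node: for each resource class the usable capacity is (total - reserved) × allocation_ratio. Worked through with the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2

So vCPUs are overcommitted 4:1, memory is not overcommitted at all, and disk is deliberately undercommitted (ratio 0.9).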
Dec 03 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.528 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.528 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.703 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:45 compute-0 ceph-mon[192821]: pgmap v2689: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:45 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1117918483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:43:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.530 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.530 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.531 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.550 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.551 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:43:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1881994563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:43:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:43:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1881994563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:43:47 compute-0 ceph-mon[192821]: pgmap v2690: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1881994563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:43:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/1881994563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:43:47 compute-0 nova_compute[351485]: 2025-12-03 02:43:47.310 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:47 compute-0 podman[158098]: time="2025-12-03T02:43:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:43:48 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43258 "" "Go-http-client/1.1"
Dec 03 02:43:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:48 compute-0 nova_compute[351485]: 2025-12-03 02:43:48.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:48 compute-0 nova_compute[351485]: 2025-12-03 02:43:48.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:49 compute-0 ceph-mon[192821]: pgmap v2691: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:49 compute-0 nova_compute[351485]: 2025-12-03 02:43:49.706 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:50 compute-0 nova_compute[351485]: 2025-12-03 02:43:50.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:50 compute-0 sudo[491444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:50 compute-0 sudo[491444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:50 compute-0 sudo[491444]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:50 compute-0 sudo[491469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:43:50 compute-0 sudo[491469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:50 compute-0 sudo[491469]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:50 compute-0 sudo[491494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:51 compute-0 sudo[491494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:51 compute-0 sudo[491494]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:51 compute-0 podman[491520]: 2025-12-03 02:43:51.135338077 +0000 UTC m=+0.098438979 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:43:51 compute-0 sudo[491536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:43:51 compute-0 sudo[491536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:51 compute-0 podman[491518]: 2025-12-03 02:43:51.157237015 +0000 UTC m=+0.129630789 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 03 02:43:51 compute-0 podman[491519]: 2025-12-03 02:43:51.168695838 +0000 UTC m=+0.133961361 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 03 02:43:51 compute-0 ceph-mon[192821]: pgmap v2692: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:51 compute-0 sudo[491536]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:43:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b2d5e90f-9e80-4472-91ca-6c1fab50ff5b does not exist
Dec 03 02:43:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 253330e7-f990-43ec-954c-e9c367c9a1a0 does not exist
Dec 03 02:43:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1b0f409e-073b-4933-8a1e-06fafcee3036 does not exist
Dec 03 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:43:51 compute-0 sudo[491631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:51 compute-0 sudo[491631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:51 compute-0 sudo[491631]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:52 compute-0 sudo[491656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:43:52 compute-0 sudo[491656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:52 compute-0 sudo[491656]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:52 compute-0 sudo[491681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:52 compute-0 sudo[491681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:52 compute-0 sudo[491681]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:43:52 compute-0 nova_compute[351485]: 2025-12-03 02:43:52.313 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:52 compute-0 sudo[491706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:43:52 compute-0 sudo[491706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:52 compute-0 podman[491771]: 2025-12-03 02:43:52.826920893 +0000 UTC m=+0.071859589 container create 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:43:52 compute-0 podman[491771]: 2025-12-03 02:43:52.791575545 +0000 UTC m=+0.036514271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:43:52 compute-0 systemd[1]: Started libpod-conmon-220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e.scope.
Dec 03 02:43:52 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:43:52 compute-0 podman[491771]: 2025-12-03 02:43:52.992140095 +0000 UTC m=+0.237078851 container init 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 03 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.00364655 +0000 UTC m=+0.248585256 container start 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 03 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.010715049 +0000 UTC m=+0.255653745 container attach 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:43:53 compute-0 bold_einstein[491786]: 167 167
Dec 03 02:43:53 compute-0 systemd[1]: libpod-220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e.scope: Deactivated successfully.
Dec 03 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.017866021 +0000 UTC m=+0.262804717 container died 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e7be88b0be2d0f71c4269f4a018423cec3d14ce7eab737d2f22799440abf8c-merged.mount: Deactivated successfully.
Dec 03 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.094848863 +0000 UTC m=+0.339787529 container remove 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:43:53 compute-0 systemd[1]: libpod-conmon-220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e.scope: Deactivated successfully.
Dec 03 02:43:53 compute-0 ceph-mon[192821]: pgmap v2693: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:53 compute-0 nova_compute[351485]: 2025-12-03 02:43:53.345 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.382892652 +0000 UTC m=+0.088662183 container create 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.35482467 +0000 UTC m=+0.060594251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:43:53 compute-0 systemd[1]: Started libpod-conmon-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope.
Dec 03 02:43:53 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.550297866 +0000 UTC m=+0.256067467 container init 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.58018578 +0000 UTC m=+0.285955331 container start 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Dec 03 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.587338842 +0000 UTC m=+0.293108413 container attach 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:43:53 compute-0 nova_compute[351485]: 2025-12-03 02:43:53.588 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:53 compute-0 nova_compute[351485]: 2025-12-03 02:43:53.589 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:43:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:54 compute-0 nova_compute[351485]: 2025-12-03 02:43:54.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:43:54 compute-0 nova_compute[351485]: 2025-12-03 02:43:54.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:54 compute-0 focused_mclean[491824]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:43:54 compute-0 focused_mclean[491824]: --> relative data size: 1.0
Dec 03 02:43:54 compute-0 focused_mclean[491824]: --> All data devices are unavailable
Dec 03 02:43:54 compute-0 systemd[1]: libpod-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope: Deactivated successfully.
Dec 03 02:43:54 compute-0 systemd[1]: libpod-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope: Consumed 1.280s CPU time.
Dec 03 02:43:54 compute-0 podman[491809]: 2025-12-03 02:43:54.916177751 +0000 UTC m=+1.621947322 container died 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc-merged.mount: Deactivated successfully.
Dec 03 02:43:55 compute-0 podman[491809]: 2025-12-03 02:43:55.029476438 +0000 UTC m=+1.735245999 container remove 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:43:55 compute-0 systemd[1]: libpod-conmon-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope: Deactivated successfully.
Dec 03 02:43:55 compute-0 sudo[491706]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:55 compute-0 sudo[491865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:55 compute-0 sudo[491865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:55 compute-0 sudo[491865]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:55 compute-0 ceph-mon[192821]: pgmap v2694: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:55 compute-0 sudo[491890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:43:55 compute-0 sudo[491890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:55 compute-0 sudo[491890]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:55 compute-0 sudo[491915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:55 compute-0 sudo[491915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:55 compute-0 sudo[491915]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:55 compute-0 sudo[491940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:43:55 compute-0 sudo[491940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.286354387 +0000 UTC m=+0.100137777 container create d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 03 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.249703243 +0000 UTC m=+0.063486693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:43:56 compute-0 systemd[1]: Started libpod-conmon-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope.
Dec 03 02:43:56 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.444470399 +0000 UTC m=+0.258253849 container init d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.46043964 +0000 UTC m=+0.274223030 container start d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 03 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.468205229 +0000 UTC m=+0.281988769 container attach d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 03 02:43:56 compute-0 recursing_burnell[492018]: 167 167
Dec 03 02:43:56 compute-0 systemd[1]: libpod-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope: Deactivated successfully.
Dec 03 02:43:56 compute-0 conmon[492018]: conmon d844a8ad48a0ed51046f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope/container/memory.events
Dec 03 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.474517597 +0000 UTC m=+0.288300977 container died d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-07502613c02a388831c07cd57c328e93f7c55a5145f1d0f0916dafd1c4eeb39b-merged.mount: Deactivated successfully.
Dec 03 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.547078004 +0000 UTC m=+0.360861364 container remove d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:43:56 compute-0 systemd[1]: libpod-conmon-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope: Deactivated successfully.
Dec 03 02:43:56 compute-0 podman[492041]: 2025-12-03 02:43:56.817980049 +0000 UTC m=+0.091956686 container create 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 03 02:43:56 compute-0 podman[492041]: 2025-12-03 02:43:56.784223617 +0000 UTC m=+0.058200294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:43:56 compute-0 systemd[1]: Started libpod-conmon-1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136.scope.
Dec 03 02:43:56 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.012765416 +0000 UTC m=+0.286742103 container init 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 03 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.041808816 +0000 UTC m=+0.315785443 container start 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.0497683 +0000 UTC m=+0.323744977 container attach 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 03 02:43:57 compute-0 nova_compute[351485]: 2025-12-03 02:43:57.315 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:57 compute-0 ceph-mon[192821]: pgmap v2695: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:57 compute-0 awesome_poincare[492056]: {
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:     "0": [
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:         {
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "devices": [
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "/dev/loop3"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             ],
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_name": "ceph_lv0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_size": "21470642176",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "name": "ceph_lv0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "tags": {
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cluster_name": "ceph",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.crush_device_class": "",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.encrypted": "0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osd_id": "0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.type": "block",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.vdo": "0"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             },
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "type": "block",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "vg_name": "ceph_vg0"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:         }
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:     ],
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:     "1": [
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:         {
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "devices": [
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "/dev/loop4"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             ],
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_name": "ceph_lv1",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_size": "21470642176",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "name": "ceph_lv1",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "tags": {
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cluster_name": "ceph",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.crush_device_class": "",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.encrypted": "0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osd_id": "1",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.type": "block",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.vdo": "0"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             },
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "type": "block",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "vg_name": "ceph_vg1"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:         }
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:     ],
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:     "2": [
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:         {
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "devices": [
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "/dev/loop5"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             ],
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_name": "ceph_lv2",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_size": "21470642176",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "name": "ceph_lv2",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "tags": {
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.cluster_name": "ceph",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.crush_device_class": "",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.encrypted": "0",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osd_id": "2",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.type": "block",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:                 "ceph.vdo": "0"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             },
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "type": "block",
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:             "vg_name": "ceph_vg2"
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:         }
Dec 03 02:43:57 compute-0 awesome_poincare[492056]:     ]
Dec 03 02:43:57 compute-0 awesome_poincare[492056]: }
Dec 03 02:43:57 compute-0 systemd[1]: libpod-1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136.scope: Deactivated successfully.
Dec 03 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.844308552 +0000 UTC m=+1.118285189 container died 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c-merged.mount: Deactivated successfully.
Dec 03 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.954655616 +0000 UTC m=+1.228632253 container remove 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:43:57 compute-0 systemd[1]: libpod-conmon-1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136.scope: Deactivated successfully.
Dec 03 02:43:58 compute-0 sudo[491940]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:58 compute-0 sudo[492080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:58 compute-0 sudo[492080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:58 compute-0 sudo[492080]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:58 compute-0 sudo[492105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:43:58 compute-0 sudo[492105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:58 compute-0 sudo[492105]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:58 compute-0 sudo[492130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:43:58 compute-0 sudo[492130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:58 compute-0 sudo[492130]: pam_unix(sudo:session): session closed for user root
Dec 03 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:43:58 compute-0 sudo[492155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:43:58 compute-0 sudo[492155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:43:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.125468546 +0000 UTC m=+0.064551323 container create 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 03 02:43:59 compute-0 systemd[1]: Started libpod-conmon-955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b.scope.
Dec 03 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.103389942 +0000 UTC m=+0.042472769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:43:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.269980854 +0000 UTC m=+0.209063661 container init 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 03 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.288798695 +0000 UTC m=+0.227881512 container start 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.296254805 +0000 UTC m=+0.235337622 container attach 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:43:59 compute-0 great_haibt[492235]: 167 167
Dec 03 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.301323428 +0000 UTC m=+0.240406235 container died 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 03 02:43:59 compute-0 systemd[1]: libpod-955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b.scope: Deactivated successfully.
Dec 03 02:43:59 compute-0 ceph-mon[192821]: pgmap v2696: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f0d8b9ddaa572c76f761dd7592eb075aaf64297b3abe00b626799970a7e0698-merged.mount: Deactivated successfully.
Dec 03 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.375716688 +0000 UTC m=+0.314799505 container remove 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:43:59 compute-0 systemd[1]: libpod-conmon-955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b.scope: Deactivated successfully.
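[editor's note] The great_haibt container above runs the full podman event sequence create → init → start → attach → died → remove in under half a second. A sketch that parses those event lines from stdin and reports each container's create-to-remove lifetime (nanosecond timestamps are trimmed to microseconds for strptime):

```python
#!/usr/bin/env python3
# Track podman container lifecycle events (create/init/start/attach/died/
# remove) from journal text and report each container's wall-clock lifetime.
import re
import sys
from datetime import datetime

# e.g. "2025-12-03 02:43:59.125468546 +0000 UTC m=+0.064551323 container create 955740e4..."
EVENT = re.compile(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
    r"container (\w+) ([0-9a-f]{64})"
)

events = {}  # container id -> {verb: timestamp}
for line in sys.stdin:
    m = EVENT.search(line)
    if not m:
        continue
    ts_raw, verb, cid = m.groups()
    ts = datetime.strptime(ts_raw[:26], "%Y-%m-%d %H:%M:%S.%f")
    events.setdefault(cid, {})[verb] = ts

for cid, verbs in events.items():
    if "create" in verbs and "remove" in verbs:
        life = (verbs["remove"] - verbs["create"]).total_seconds()
        order = " -> ".join(sorted(verbs, key=verbs.get))  # chronological
        print(f"{cid[:12]}  {life:.3f}s  ({order})")
```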
Dec 03 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.640812299 +0000 UTC m=+0.082101728 container create 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:43:59.681 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:43:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:43:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.605692227 +0000 UTC m=+0.046981666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:43:59 compute-0 nova_compute[351485]: 2025-12-03 02:43:59.712 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:43:59 compute-0 systemd[1]: Started libpod-conmon-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope.
Dec 03 02:43:59 compute-0 podman[158098]: time="2025-12-03T02:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:43:59 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
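[editor's note] The kernel prints these warnings because the XFS filesystem backing the overlay mounts was created without bigtime support, so its inode timestamps stop at the signed 32-bit epoch limit. Confirming what 0x7fffffff means:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit time_t: the "year 2038" limit
# the kernel is warning about.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```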
Dec 03 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.804686923 +0000 UTC m=+0.245976392 container init 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.836746258 +0000 UTC m=+0.278035697 container start 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 03 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.844676212 +0000 UTC m=+0.285965681 container attach 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:43:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44147 "" "Go-http-client/1.1"
Dec 03 02:43:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8625 "" "Go-http-client/1.1"
Dec 03 02:44:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]: {
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "osd_id": 2,
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "type": "bluestore"
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:     },
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "osd_id": 1,
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "type": "bluestore"
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:     },
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "osd_id": 0,
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:         "type": "bluestore"
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]:     }
Dec 03 02:44:01 compute-0 vigorous_lumiere[492277]: }
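[editor's note] The JSON above is the output of `ceph-volume raw list --format json`: one entry per OSD, keyed by osd_uuid. A sketch that runs the same command and tabulates osd_id → device, assuming ceph-volume is callable directly on the host (in this log it actually runs inside the cephadm container):

```python
#!/usr/bin/env python3
# Tabulate `ceph-volume raw list --format json` output: osd_id -> device,
# cross-checked against the cluster fsid seen in this log.
import json
import subprocess

EXPECTED_FSID = "3765feb2-36f8-5b86-b74c-64e9221f9c4c"

raw = subprocess.run(
    ["ceph-volume", "raw", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for osd_uuid, osd in sorted(json.loads(raw).items(),
                            key=lambda kv: kv[1]["osd_id"]):
    assert osd["ceph_fsid"] == EXPECTED_FSID, f"foreign OSD {osd_uuid}"
    print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")
```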
Dec 03 02:44:01 compute-0 systemd[1]: libpod-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope: Deactivated successfully.
Dec 03 02:44:01 compute-0 podman[492261]: 2025-12-03 02:44:01.052297561 +0000 UTC m=+1.493586990 container died 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:44:01 compute-0 systemd[1]: libpod-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope: Consumed 1.217s CPU time.
Dec 03 02:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f-merged.mount: Deactivated successfully.
Dec 03 02:44:01 compute-0 podman[492261]: 2025-12-03 02:44:01.167218254 +0000 UTC m=+1.608507653 container remove 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:44:01 compute-0 systemd[1]: libpod-conmon-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope: Deactivated successfully.
Dec 03 02:44:01 compute-0 sudo[492155]: pam_unix(sudo:session): session closed for user root
Dec 03 02:44:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:44:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:44:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:44:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
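[editor's note] cephadm persists the inventory it just gathered under mon config-keys such as mgr/cephadm/host.compute-0.devices.0, which is what the two handle_command/audit pairs above record. A sketch reading one key back, assuming the ceph CLI, an admin keyring, and that the stored value is the JSON blob cephadm writes:

```python
#!/usr/bin/env python3
# Read back the device inventory cephadm stored in the mon config-key store
# (key name taken from the handle_command lines above).
import json
import subprocess

key = "mgr/cephadm/host.compute-0.devices.0"
out = subprocess.run(
    ["ceph", "config-key", "get", key],
    capture_output=True, text=True, check=True,
).stdout
inventory = json.loads(out)  # assumed to be JSON; cephadm-internal format
print(type(inventory).__name__, "payload,", len(out), "bytes")
```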
Dec 03 02:44:01 compute-0 podman[492311]: 2025-12-03 02:44:01.256167014 +0000 UTC m=+0.152469754 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:44:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d56e715c-d9b4-4351-8152-8789b3f16756 does not exist
Dec 03 02:44:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 89cf344b-5911-4c61-bc81-bba6f352f236 does not exist
Dec 03 02:44:01 compute-0 sudo[492340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:44:01 compute-0 ceph-mon[192821]: pgmap v2697: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:44:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:44:01 compute-0 sudo[492340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:44:01 compute-0 sudo[492340]: pam_unix(sudo:session): session closed for user root
Dec 03 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
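[editor's note] The exporter errors above are expected on a compute node: ovn-northd only runs on the control plane, so its appctl control socket never exists here. A sketch reproducing the socket check; the glob patterns are the conventional locations and should be treated as assumptions for a given deployment:

```python
#!/usr/bin/env python3
# Reproduce the exporter's "no control socket files found" check by globbing
# for appctl control sockets at their conventional paths.
import glob

daemons = {
    "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",  # absent on compute nodes
}

for name, pattern in daemons.items():
    hits = glob.glob(pattern)
    print(f"{name}: {hits[0] if hits else 'no control socket found'}")
```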
Dec 03 02:44:01 compute-0 sudo[492365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:44:01 compute-0 sudo[492365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:44:01 compute-0 sudo[492365]: pam_unix(sudo:session): session closed for user root
Dec 03 02:44:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:02 compute-0 nova_compute[351485]: 2025-12-03 02:44:02.319 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:02 compute-0 ceph-mon[192821]: pgmap v2698: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:04 compute-0 nova_compute[351485]: 2025-12-03 02:44:04.717 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:05 compute-0 ceph-mon[192821]: pgmap v2699: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:06 compute-0 podman[492393]: 2025-12-03 02:44:06.87723211 +0000 UTC m=+0.108353439 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4)
Dec 03 02:44:06 compute-0 podman[492391]: 2025-12-03 02:44:06.883810155 +0000 UTC m=+0.122067236 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 03 02:44:06 compute-0 podman[492399]: 2025-12-03 02:44:06.904860269 +0000 UTC m=+0.118609088 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 03 02:44:06 compute-0 podman[492392]: 2025-12-03 02:44:06.906012062 +0000 UTC m=+0.134798115 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:44:06 compute-0 podman[492390]: 2025-12-03 02:44:06.938218911 +0000 UTC m=+0.183962133 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 03 02:44:07 compute-0 ceph-mon[192821]: pgmap v2700: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:07 compute-0 nova_compute[351485]: 2025-12-03 02:44:07.322 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:08 compute-0 nova_compute[351485]: 2025-12-03 02:44:08.324 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:09 compute-0 ceph-mon[192821]: pgmap v2701: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:09 compute-0 nova_compute[351485]: 2025-12-03 02:44:09.719 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:11 compute-0 ceph-mon[192821]: pgmap v2702: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:11 compute-0 sshd-session[492489]: Invalid user james from 154.113.10.113 port 38272
Dec 03 02:44:11 compute-0 sshd-session[492489]: Received disconnect from 154.113.10.113 port 38272:11: Bye Bye [preauth]
Dec 03 02:44:11 compute-0 sshd-session[492489]: Disconnected from invalid user james 154.113.10.113 port 38272 [preauth]
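[editor's note] One failed preauth probe from 154.113.10.113. A fail2ban-style sketch that counts "Invalid user" attempts per source IP from journal text on stdin:

```python
#!/usr/bin/env python3
# Count "Invalid user" preauth attempts per source IP from sshd journal
# lines. Feed it `journalctl` text (like this log) on stdin.
import re
import sys
from collections import Counter

INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

hits = Counter()
for line in sys.stdin:
    if (m := INVALID.search(line)):
        user, ip = m.groups()
        hits[ip] += 1

for ip, n in hits.most_common():
    print(f"{ip}: {n} attempt(s)")
```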
Dec 03 02:44:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:12 compute-0 nova_compute[351485]: 2025-12-03 02:44:12.325 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:13 compute-0 ceph-mon[192821]: pgmap v2703: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:14 compute-0 nova_compute[351485]: 2025-12-03 02:44:14.722 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:15 compute-0 ceph-mon[192821]: pgmap v2704: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:17 compute-0 ceph-mon[192821]: pgmap v2705: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:17 compute-0 nova_compute[351485]: 2025-12-03 02:44:17.328 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:19 compute-0 ceph-mon[192821]: pgmap v2706: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:19 compute-0 nova_compute[351485]: 2025-12-03 02:44:19.724 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:21 compute-0 ceph-mon[192821]: pgmap v2707: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:21 compute-0 podman[492492]: 2025-12-03 02:44:21.856982853 +0000 UTC m=+0.107853815 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 03 02:44:21 compute-0 podman[492493]: 2025-12-03 02:44:21.886769263 +0000 UTC m=+0.130363850 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 03 02:44:21 compute-0 podman[492494]: 2025-12-03 02:44:21.910896044 +0000 UTC m=+0.144864289 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
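[editor's note] Each podman health_status event above carries the current status and failing streak inline. A stdin-fed sketch that extracts both and alerts on anything not healthy:

```python
#!/usr/bin/env python3
# Summarise podman health_status events (like the three above): flag any
# container that is not "healthy" or has a non-zero failing streak.
import re
import sys

EVENT = re.compile(
    r"container health_status .*?name=([^,]+).*?"
    r"health_status=([^,]+), health_failing_streak=(\d+)"
)

for line in sys.stdin:
    m = EVENT.search(line)
    if not m:
        continue
    name, status, streak = m.group(1), m.group(2), int(m.group(3))
    if status != "healthy" or streak:
        print(f"ALERT {name}: {status} (failing streak {streak})")
    else:
        print(f"ok    {name}")
```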
Dec 03 02:44:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:22 compute-0 nova_compute[351485]: 2025-12-03 02:44:22.331 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:23 compute-0 ceph-mon[192821]: pgmap v2708: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:24 compute-0 nova_compute[351485]: 2025-12-03 02:44:24.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:25 compute-0 ceph-mon[192821]: pgmap v2709: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:27 compute-0 nova_compute[351485]: 2025-12-03 02:44:27.334 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:27 compute-0 ceph-mon[192821]: pgmap v2710: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.421212) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867421274, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1175, "num_deletes": 251, "total_data_size": 1770347, "memory_usage": 1801664, "flush_reason": "Manual Compaction"}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867436813, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1742446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54434, "largest_seqno": 55608, "table_properties": {"data_size": 1736756, "index_size": 3084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11976, "raw_average_key_size": 19, "raw_value_size": 1725386, "raw_average_value_size": 2851, "num_data_blocks": 138, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729749, "oldest_key_time": 1764729749, "file_creation_time": 1764729867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 15876 microseconds, and 8839 cpu microseconds.
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.437088) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1742446 bytes OK
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.437288) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.439405) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.439429) EVENT_LOG_v1 {"time_micros": 1764729867439422, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.439450) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1764967, prev total WAL file size 1764967, number of live WAL files 2.
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.444299) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1701KB)], [131(7438KB)]
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867444405, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 9359361, "oldest_snapshot_seqno": -1}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 6832 keys, 7658713 bytes, temperature: kUnknown
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867500966, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 7658713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7618204, "index_size": 22348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 179286, "raw_average_key_size": 26, "raw_value_size": 7499630, "raw_average_value_size": 1097, "num_data_blocks": 877, "num_entries": 6832, "num_filter_entries": 6832, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.501282) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 7658713 bytes
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.503921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 135.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(9.8) write-amplify(4.4) OK, records in: 7346, records dropped: 514 output_compression: NoCompression
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.503951) EVENT_LOG_v1 {"time_micros": 1764729867503937, "job": 80, "event": "compaction_finished", "compaction_time_micros": 56653, "compaction_time_cpu_micros": 32568, "output_level": 6, "num_output_files": 1, "total_output_size": 7658713, "num_input_records": 7346, "num_output_records": 6832, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867504710, "job": 80, "event": "table_file_deletion", "file_number": 133}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867507005, "job": 80, "event": "table_file_deletion", "file_number": 131}
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.444083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 03 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
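[editor's note] Every rocksdb EVENT_LOG_v1 line embeds a complete JSON payload after the marker, so the flush/compaction activity above is machine-readable as-is. A sketch that extracts the payloads and summarizes finished compactions:

```python
#!/usr/bin/env python3
# Pull the JSON payloads out of rocksdb EVENT_LOG_v1 lines (as logged by
# ceph-mon above) and summarise each finished compaction.
import json
import re
import sys

PAYLOAD = re.compile(r"EVENT_LOG_v1 (\{.*\})")

for line in sys.stdin:
    m = PAYLOAD.search(line)
    if not m:
        continue
    ev = json.loads(m.group(1))
    if ev.get("event") == "compaction_finished":
        print(
            f"job {ev['job']}: L{ev['output_level']} "
            f"{ev['num_input_records']} in / {ev['num_output_records']} out, "
            f"{ev['total_output_size']} bytes in "
            f"{ev['compaction_time_micros'] / 1e6:.3f}s"
        )
```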
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:28 compute-0 ceph-mon[192821]: pgmap v2711: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:44:28
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'default.rgw.log', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 03 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:44:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:44:29 compute-0 nova_compute[351485]: 2025-12-03 02:44:29.732 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:29 compute-0 podman[158098]: time="2025-12-03T02:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:44:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:44:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec 03 02:44:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:31 compute-0 ceph-mon[192821]: pgmap v2712: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:44:31 compute-0 podman[492555]: 2025-12-03 02:44:31.867704382 +0000 UTC m=+0.120693677 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible)
Dec 03 02:44:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:32 compute-0 nova_compute[351485]: 2025-12-03 02:44:32.338 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:33 compute-0 ceph-mon[192821]: pgmap v2713: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:34 compute-0 nova_compute[351485]: 2025-12-03 02:44:34.735 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:35 compute-0 ceph-mon[192821]: pgmap v2714: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:37 compute-0 ceph-mon[192821]: pgmap v2715: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:37 compute-0 nova_compute[351485]: 2025-12-03 02:44:37.341 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:37 compute-0 nova_compute[351485]: 2025-12-03 02:44:37.601 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:37 compute-0 podman[492573]: 2025-12-03 02:44:37.893268682 +0000 UTC m=+0.128580998 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec 03 02:44:37 compute-0 podman[492574]: 2025-12-03 02:44:37.894171778 +0000 UTC m=+0.128058353 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:44:37 compute-0 podman[492583]: 2025-12-03 02:44:37.902097011 +0000 UTC m=+0.120011106 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 03 02:44:37 compute-0 podman[492572]: 2025-12-03 02:44:37.911317092 +0000 UTC m=+0.165393447 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 03 02:44:37 compute-0 podman[492580]: 2025-12-03 02:44:37.913240336 +0000 UTC m=+0.134449144 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543)
Dec 03 02:44:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:39 compute-0 ceph-mon[192821]: pgmap v2716: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:44:39 compute-0 nova_compute[351485]: 2025-12-03 02:44:39.738 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:41 compute-0 ceph-mon[192821]: pgmap v2717: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:42 compute-0 nova_compute[351485]: 2025-12-03 02:44:42.344 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:43 compute-0 ceph-mon[192821]: pgmap v2718: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.619 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 03 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.620 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:44:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:44:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499390691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:44:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.123 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:44:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1499390691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.655 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.657 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3975MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.657 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.658 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.741 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.763 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.763 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 03 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.789 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 03 02:44:45 compute-0 ceph-mon[192821]: pgmap v2719: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 03 02:44:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835206597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.347 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 03 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.360 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 03 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.383 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 03 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.386 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 03 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.387 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:44:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:46 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2835206597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 03 02:44:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 03 02:44:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647278499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:44:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 03 02:44:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647278499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:44:47 compute-0 ceph-mon[192821]: pgmap v2720: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3647278499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 03 02:44:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.10:0/3647278499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 03 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.347 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.388 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.388 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 03 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.389 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 03 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.410 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 03 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.410 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:49 compute-0 ceph-mon[192821]: pgmap v2721: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:49 compute-0 nova_compute[351485]: 2025-12-03 02:44:49.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:49 compute-0 nova_compute[351485]: 2025-12-03 02:44:49.744 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:50 compute-0 nova_compute[351485]: 2025-12-03 02:44:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:51 compute-0 ceph-mon[192821]: pgmap v2722: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:51 compute-0 nova_compute[351485]: 2025-12-03 02:44:51.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:52 compute-0 nova_compute[351485]: 2025-12-03 02:44:52.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:52 compute-0 podman[492723]: 2025-12-03 02:44:52.858045602 +0000 UTC m=+0.089859386 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 03 02:44:52 compute-0 podman[492722]: 2025-12-03 02:44:52.859348149 +0000 UTC m=+0.106287590 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec 03 02:44:52 compute-0 podman[492721]: 2025-12-03 02:44:52.860995296 +0000 UTC m=+0.105726155 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 03 02:44:53 compute-0 ceph-mon[192821]: pgmap v2723: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:54 compute-0 nova_compute[351485]: 2025-12-03 02:44:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:54 compute-0 nova_compute[351485]: 2025-12-03 02:44:54.747 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:55 compute-0 ceph-mon[192821]: pgmap v2724: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:55 compute-0 nova_compute[351485]: 2025-12-03 02:44:55.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:55 compute-0 nova_compute[351485]: 2025-12-03 02:44:55.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 03 02:44:55 compute-0 sshd-session[492781]: Accepted publickey for zuul from 192.168.122.10 port 41184 ssh2: ECDSA SHA256:ja3ITS17A9km0/Ot+KN2pl9ub4ump/b6GV+vNoE7Szw
Dec 03 02:44:55 compute-0 systemd-logind[800]: New session 66 of user zuul.
Dec 03 02:44:55 compute-0 systemd[1]: Started Session 66 of User zuul.
Dec 03 02:44:55 compute-0 sshd-session[492781]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 03 02:44:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:56 compute-0 sudo[492785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 03 02:44:56 compute-0 sudo[492785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 03 02:44:56 compute-0 nova_compute[351485]: 2025-12-03 02:44:56.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 03 02:44:57 compute-0 nova_compute[351485]: 2025-12-03 02:44:57.354 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:57 compute-0 ceph-mon[192821]: pgmap v2725: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:44:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:44:59 compute-0 ceph-mon[192821]: pgmap v2726: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:44:59.682 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 03 02:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:44:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 03 02:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:44:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 03 02:44:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15881 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:44:59 compute-0 podman[158098]: time="2025-12-03T02:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:44:59 compute-0 nova_compute[351485]: 2025-12-03 02:44:59.751 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:44:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:44:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8206 "" "Go-http-client/1.1"
Dec 03 02:45:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:00 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15883 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 03 02:45:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002863928' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:45:01 compute-0 ceph-mon[192821]: from='client.15881 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:01 compute-0 ceph-mon[192821]: pgmap v2727: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:01 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4002863928' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:45:01 compute-0 sudo[493029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:01 compute-0 sudo[493029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:01 compute-0 sudo[493029]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:01 compute-0 sudo[493054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:45:01 compute-0 sudo[493054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:01 compute-0 sudo[493054]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:01 compute-0 sudo[493085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:01 compute-0 sudo[493085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:02 compute-0 sudo[493085]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:02 compute-0 sudo[493116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 03 02:45:02 compute-0 sudo[493116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:02 compute-0 podman[493109]: 2025-12-03 02:45:02.126673831 +0000 UTC m=+0.113829213 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 03 02:45:02 compute-0 nova_compute[351485]: 2025-12-03 02:45:02.360 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:02 compute-0 ceph-mon[192821]: from='client.15883 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:02 compute-0 sudo[493116]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:45:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:45:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:02 compute-0 sudo[493173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:02 compute-0 sudo[493173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:02 compute-0 sudo[493173]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:02 compute-0 sudo[493198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:45:02 compute-0 sudo[493198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:02 compute-0 sudo[493198]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:02 compute-0 sudo[493223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:02 compute-0 sudo[493223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:02 compute-0 sudo[493223]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:03 compute-0 sudo[493248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 03 02:45:03 compute-0 sudo[493248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:03 compute-0 ceph-mon[192821]: pgmap v2728: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:03 compute-0 sudo[493248]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 03 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 03 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aabf4f6b-36b0-4f52-86d1-f764beaa541e does not exist
Dec 03 02:45:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5f8d11b8-7a48-403c-a85f-b2376ae83ce8 does not exist
Dec 03 02:45:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a2caed12-b76a-49dc-ba7d-1c5683b32be3 does not exist
Dec 03 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 03 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 03 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:45:03 compute-0 sudo[493306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:03 compute-0 sudo[493306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:03 compute-0 sudo[493306]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:45:03 compute-0 sudo[493331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:45:03 compute-0 sudo[493331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:04 compute-0 sudo[493331]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:04 compute-0 sudo[493356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:04 compute-0 sudo[493356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:04 compute-0 sudo[493356]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:04 compute-0 sudo[493387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 03 02:45:04 compute-0 sudo[493387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 03 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 03 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 03 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:45:04 compute-0 ceph-mon[192821]: pgmap v2729: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:04 compute-0 ovs-vsctl[493453]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 03 02:45:04 compute-0 nova_compute[351485]: 2025-12-03 02:45:04.755 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:04 compute-0 podman[493481]: 2025-12-03 02:45:04.841945375 +0000 UTC m=+0.081591753 container create bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:45:04 compute-0 podman[493481]: 2025-12-03 02:45:04.81199695 +0000 UTC m=+0.051643328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:45:04 compute-0 systemd[1]: Started libpod-conmon-bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d.scope.
Dec 03 02:45:04 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:45:04 compute-0 podman[493481]: 2025-12-03 02:45:04.989218031 +0000 UTC m=+0.228864389 container init bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.00689053 +0000 UTC m=+0.246536868 container start bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.011819109 +0000 UTC m=+0.251465467 container attach bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 03 02:45:05 compute-0 cranky_neumann[493506]: 167 167
Dec 03 02:45:05 compute-0 systemd[1]: libpod-bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d.scope: Deactivated successfully.
Dec 03 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.025051592 +0000 UTC m=+0.264697970 container died bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce5f6db52bb7d115df883225cd28654992e0d9e954db5d7c6752954709733bba-merged.mount: Deactivated successfully.
Dec 03 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.091301022 +0000 UTC m=+0.330947370 container remove bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 03 02:45:05 compute-0 systemd[1]: libpod-conmon-bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d.scope: Deactivated successfully.
Dec 03 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.345596228 +0000 UTC m=+0.074342469 container create 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 03 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.312437102 +0000 UTC m=+0.041183443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:45:05 compute-0 systemd[1]: Started libpod-conmon-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope.
Dec 03 02:45:05 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.495823358 +0000 UTC m=+0.224569599 container init 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 03 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.51400271 +0000 UTC m=+0.242748981 container start 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 03 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.522821089 +0000 UTC m=+0.251567350 container attach 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:45:06 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 03 02:45:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:06 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 03 02:45:06 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 03 02:45:06 compute-0 vibrant_lalande[493565]: --> passed data devices: 0 physical, 3 LVM
Dec 03 02:45:06 compute-0 vibrant_lalande[493565]: --> relative data size: 1.0
Dec 03 02:45:06 compute-0 vibrant_lalande[493565]: --> All data devices are unavailable
Dec 03 02:45:06 compute-0 systemd[1]: libpod-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope: Deactivated successfully.
Dec 03 02:45:06 compute-0 systemd[1]: libpod-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope: Consumed 1.196s CPU time.
Dec 03 02:45:06 compute-0 podman[493774]: 2025-12-03 02:45:06.838567479 +0000 UTC m=+0.039685491 container died 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df-merged.mount: Deactivated successfully.
Dec 03 02:45:06 compute-0 podman[493774]: 2025-12-03 02:45:06.90594719 +0000 UTC m=+0.107065202 container remove 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 03 02:45:06 compute-0 systemd[1]: libpod-conmon-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope: Deactivated successfully.
Dec 03 02:45:06 compute-0 sudo[493387]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:06 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: cache status {prefix=cache status} (starting...)
Dec 03 02:45:07 compute-0 sudo[493828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:07 compute-0 sudo[493828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:07 compute-0 sudo[493828]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:07 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: client ls {prefix=client ls} (starting...)
Dec 03 02:45:07 compute-0 sudo[493883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:45:07 compute-0 lvm[493923]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 03 02:45:07 compute-0 lvm[493923]: VG ceph_vg0 finished
Dec 03 02:45:07 compute-0 sudo[493883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:07 compute-0 lvm[493922]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 03 02:45:07 compute-0 lvm[493922]: VG ceph_vg2 finished
Dec 03 02:45:07 compute-0 sudo[493883]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:07 compute-0 ceph-mon[192821]: pgmap v2730: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:07 compute-0 sudo[493937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:07 compute-0 sudo[493937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:07 compute-0 sudo[493937]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:07 compute-0 lvm[494002]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 03 02:45:07 compute-0 lvm[494002]: VG ceph_vg1 finished
Dec 03 02:45:07 compute-0 sudo[493979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- lvm list --format json
Dec 03 02:45:07 compute-0 sudo[493979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:07 compute-0 nova_compute[351485]: 2025-12-03 02:45:07.364 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.719786237 +0000 UTC m=+0.048462989 container create 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 03 02:45:07 compute-0 systemd[1]: Started libpod-conmon-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope.
Dec 03 02:45:07 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:45:07 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: damage ls {prefix=damage ls} (starting...)
Dec 03 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.701167211 +0000 UTC m=+0.029843993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.814685625 +0000 UTC m=+0.143362397 container init 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 03 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.82798196 +0000 UTC m=+0.156658712 container start 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.832783876 +0000 UTC m=+0.161460658 container attach 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 02:45:07 compute-0 determined_moser[494166]: 167 167
Dec 03 02:45:07 compute-0 systemd[1]: libpod-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope: Deactivated successfully.
Dec 03 02:45:07 compute-0 conmon[494166]: conmon 1288a43273fe22906e0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope/container/memory.events
Dec 03 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.838230959 +0000 UTC m=+0.166907711 container died 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e7ae8a4602f081740d5af799d098639a504f5d8f2b0b75c0245a8f81ca10d95-merged.mount: Deactivated successfully.
Dec 03 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.893325874 +0000 UTC m=+0.222002626 container remove 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 03 02:45:07 compute-0 systemd[1]: libpod-conmon-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope: Deactivated successfully.
Dec 03 02:45:07 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump loads {prefix=dump loads} (starting...)
Dec 03 02:45:07 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15887 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 03 02:45:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.14443993 +0000 UTC m=+0.114251525 container create 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 03 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.063687282 +0000 UTC m=+0.033498907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:45:08 compute-0 systemd[1]: Started libpod-conmon-97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9.scope.
Dec 03 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 03 02:45:08 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.302218523 +0000 UTC m=+0.272030148 container init 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 03 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.316592669 +0000 UTC m=+0.286404274 container start 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.327100655 +0000 UTC m=+0.296912260 container attach 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:45:08 compute-0 podman[494249]: 2025-12-03 02:45:08.357949966 +0000 UTC m=+0.152134384 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 03 02:45:08 compute-0 podman[494252]: 2025-12-03 02:45:08.360801376 +0000 UTC m=+0.151467395 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 03 02:45:08 compute-0 podman[494248]: 2025-12-03 02:45:08.360838097 +0000 UTC m=+0.158159134 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, version=9.6, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7)
Dec 03 02:45:08 compute-0 podman[494250]: 2025-12-03 02:45:08.367117854 +0000 UTC m=+0.162541978 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 03 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 03 02:45:08 compute-0 podman[494246]: 2025-12-03 02:45:08.400766314 +0000 UTC m=+0.203950706 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 03 02:45:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15890 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 03 02:45:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 03 02:45:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735022238' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 03 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 03 02:45:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 03 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 03 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918308904' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:45:09 compute-0 ceph-mon[192821]: from='client.15887 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:09 compute-0 ceph-mon[192821]: pgmap v2731: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/735022238' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 03 02:45:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3918308904' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 03 02:45:09 compute-0 happy_pike[494308]: {
Dec 03 02:45:09 compute-0 happy_pike[494308]:     "0": [
Dec 03 02:45:09 compute-0 happy_pike[494308]:         {
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "devices": [
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "/dev/loop3"
Dec 03 02:45:09 compute-0 happy_pike[494308]:             ],
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_name": "ceph_lv0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_size": "21470642176",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "name": "ceph_lv0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "tags": {
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cluster_name": "ceph",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.crush_device_class": "",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.encrypted": "0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osd_id": "0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.type": "block",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.vdo": "0"
Dec 03 02:45:09 compute-0 happy_pike[494308]:             },
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "type": "block",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "vg_name": "ceph_vg0"
Dec 03 02:45:09 compute-0 happy_pike[494308]:         }
Dec 03 02:45:09 compute-0 happy_pike[494308]:     ],
Dec 03 02:45:09 compute-0 happy_pike[494308]:     "1": [
Dec 03 02:45:09 compute-0 happy_pike[494308]:         {
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "devices": [
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "/dev/loop4"
Dec 03 02:45:09 compute-0 happy_pike[494308]:             ],
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_name": "ceph_lv1",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_size": "21470642176",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "name": "ceph_lv1",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "tags": {
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cluster_name": "ceph",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.crush_device_class": "",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.encrypted": "0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osd_id": "1",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.type": "block",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.vdo": "0"
Dec 03 02:45:09 compute-0 happy_pike[494308]:             },
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "type": "block",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "vg_name": "ceph_vg1"
Dec 03 02:45:09 compute-0 happy_pike[494308]:         }
Dec 03 02:45:09 compute-0 happy_pike[494308]:     ],
Dec 03 02:45:09 compute-0 happy_pike[494308]:     "2": [
Dec 03 02:45:09 compute-0 happy_pike[494308]:         {
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "devices": [
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "/dev/loop5"
Dec 03 02:45:09 compute-0 happy_pike[494308]:             ],
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_name": "ceph_lv2",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_size": "21470642176",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "name": "ceph_lv2",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "tags": {
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cephx_lockbox_secret": "",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.cluster_name": "ceph",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.crush_device_class": "",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.encrypted": "0",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osd_id": "2",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.type": "block",
Dec 03 02:45:09 compute-0 happy_pike[494308]:                 "ceph.vdo": "0"
Dec 03 02:45:09 compute-0 happy_pike[494308]:             },
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "type": "block",
Dec 03 02:45:09 compute-0 happy_pike[494308]:             "vg_name": "ceph_vg2"
Dec 03 02:45:09 compute-0 happy_pike[494308]:         }
Dec 03 02:45:09 compute-0 happy_pike[494308]:     ]
Dec 03 02:45:09 compute-0 happy_pike[494308]: }
Dec 03 02:45:09 compute-0 podman[494216]: 2025-12-03 02:45:09.247233021 +0000 UTC m=+1.217044646 container died 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 03 02:45:09 compute-0 systemd[1]: libpod-97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9.scope: Deactivated successfully.
Dec 03 02:45:09 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: ops {prefix=ops} (starting...)
Dec 03 02:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae-merged.mount: Deactivated successfully.
Dec 03 02:45:09 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15897 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:09 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:45:09.294+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 03 02:45:09 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 03 02:45:09 compute-0 podman[494216]: 2025-12-03 02:45:09.329270696 +0000 UTC m=+1.299082311 container remove 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 03 02:45:09 compute-0 systemd[1]: libpod-conmon-97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9.scope: Deactivated successfully.
Dec 03 02:45:09 compute-0 sudo[493979]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 03 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606138239' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 03 02:45:09 compute-0 sudo[494511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:09 compute-0 sudo[494511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:09 compute-0 sudo[494511]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:09 compute-0 sudo[494561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 03 02:45:09 compute-0 sudo[494561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:09 compute-0 sudo[494561]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:09 compute-0 sudo[494605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:09 compute-0 sudo[494605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:09 compute-0 sudo[494605]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:09 compute-0 sudo[494637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -- raw list --format json
Dec 03 02:45:09 compute-0 sudo[494637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 03 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626519256' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 03 02:45:09 compute-0 nova_compute[351485]: 2025-12-03 02:45:09.757 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 03 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586845474' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 03 02:45:09 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: session ls {prefix=session ls} (starting...)
Dec 03 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.079035845 +0000 UTC m=+0.063772001 container create 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:45:10 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: status {prefix=status} (starting...)
Dec 03 02:45:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:10 compute-0 systemd[1]: Started libpod-conmon-96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb.scope.
Dec 03 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.06079132 +0000 UTC m=+0.045527506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:45:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.188360419 +0000 UTC m=+0.173096605 container init 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 03 02:45:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 03 02:45:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529839193' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.198880496 +0000 UTC m=+0.183616672 container start 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 03 02:45:10 compute-0 angry_babbage[494767]: 167 167
Dec 03 02:45:10 compute-0 systemd[1]: libpod-96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb.scope: Deactivated successfully.
Dec 03 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.204751521 +0000 UTC m=+0.189487697 container attach 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 03 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.20505891 +0000 UTC m=+0.189795066 container died 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 03 02:45:10 compute-0 ceph-mon[192821]: from='client.15890 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:10 compute-0 ceph-mon[192821]: from='client.15897 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/606138239' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 03 02:45:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3626519256' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 03 02:45:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/586845474' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 03 02:45:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2529839193' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed10de8fa22999c235a6ed768ca793d3a2873e1d7bf3d097344507ad9dded5e7-merged.mount: Deactivated successfully.
Dec 03 02:45:10 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15907 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.255927566 +0000 UTC m=+0.240663722 container remove 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 03 02:45:10 compute-0 systemd[1]: libpod-conmon-96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb.scope: Deactivated successfully.
Dec 03 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.455269961 +0000 UTC m=+0.063315958 container create f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 03 02:45:10 compute-0 systemd[1]: Started libpod-conmon-f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191.scope.
Dec 03 02:45:10 compute-0 systemd[1]: Started libcrun container.
Dec 03 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.436727728 +0000 UTC m=+0.044773745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 03 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 03 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.545496377 +0000 UTC m=+0.153542394 container init f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 03 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.559293557 +0000 UTC m=+0.167339554 container start f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 03 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.563034672 +0000 UTC m=+0.171080669 container attach f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 03 02:45:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 03 02:45:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357595830' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:45:10 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15911 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 03 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121455930' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 03 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960436668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mon[192821]: pgmap v2732: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:11 compute-0 ceph-mon[192821]: from='client.15907 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1357595830' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2121455930' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2960436668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]: {
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:     "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "osd_id": 2,
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "type": "bluestore"
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:     },
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:     "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "osd_id": 1,
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "type": "bluestore"
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:     },
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:     "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "osd_id": 0,
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:         "type": "bluestore"
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]:     }
Dec 03 02:45:11 compute-0 xenodochial_euler[494860]: }
Dec 03 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 03 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236205888' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 03 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914263576' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:45:11 compute-0 systemd[1]: libpod-f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191.scope: Deactivated successfully.
Dec 03 02:45:11 compute-0 podman[494824]: 2025-12-03 02:45:11.556183649 +0000 UTC m=+1.164229646 container died f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 03 02:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8-merged.mount: Deactivated successfully.
Dec 03 02:45:11 compute-0 podman[494824]: 2025-12-03 02:45:11.626054611 +0000 UTC m=+1.234100608 container remove f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 03 02:45:11 compute-0 systemd[1]: libpod-conmon-f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191.scope: Deactivated successfully.
Dec 03 02:45:11 compute-0 sudo[494637]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 03 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 03 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 40376fc5-d14d-476c-9cee-0d3f406aec80 does not exist
Dec 03 02:45:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4e209fec-10ec-4178-a01b-d18d942e7b0f does not exist
Dec 03 02:45:11 compute-0 sudo[495035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 03 02:45:11 compute-0 sudo[495035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:11 compute-0 sudo[495035]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:11 compute-0 sudo[495088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 03 02:45:11 compute-0 sudo[495088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 03 02:45:11 compute-0 sudo[495088]: pam_unix(sudo:session): session closed for user root
Dec 03 02:45:11 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15923 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:11 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 03 02:45:11 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:45:11.971+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 03 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 03 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862237977' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 03 02:45:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:12 compute-0 ceph-mon[192821]: from='client.15911 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3236205888' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 03 02:45:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/914263576' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:45:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec 03 02:45:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1862237977' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 03 02:45:12 compute-0 nova_compute[351485]: 2025-12-03 02:45:12.368 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 03 02:45:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729441193' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 03 02:45:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 03 02:45:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2477747180' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:45:12 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15929 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 03 02:45:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470977912' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mon[192821]: from='client.15923 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mon[192821]: pgmap v2733: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1729441193' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2477747180' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1470977912' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15933 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 03 02:45:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621837691' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:45:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 03 02:45:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439200539' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:04.063869+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:05.064272+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:06.064678+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:07.065028+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:08.065301+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:09.065714+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:10.066136+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:11.066691+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:12.067090+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:13.067670+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:14.067965+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:15.068325+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:16.068778+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:17.069127+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:18.069516+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:19.069971+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:20.070414+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:21.070854+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:22.071238+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:23.071725+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:24.072180+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:25.072718+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:26.073037+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:27.073379+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:28.073672+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:29.074025+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec8c00 session 0x558b8624b0e0
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec9400 session 0x558b85fc3c20
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b84a92800 session 0x558b86376d20
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:30.075064+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.205551147s of 100.784317017s, submitted: 90
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:31.075468+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b83a94f00
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:32.075938+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:33.076166+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:34.076522+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:35.076943+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:36.077361+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:37.077772+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:38.078038+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:39.078409+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:40.078710+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:41.079179+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:42.079726+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:43.080126+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:44.080457+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:45.080821+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:46.081149+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:47.081671+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:48.082055+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:49.082439+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.514204025s of 18.552843094s, submitted: 8
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c47400 session 0x558b85fc23c0
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b843b6000 session 0x558b85bbdc20
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c46c00 session 0x558b862aa3c0
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 33923072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:50.082766+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b8634be00
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:51.082983+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:52.083356+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:53.083659+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:54.084075+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:55.084368+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:56.084742+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:57.084981+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:58.085341+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:59.085674+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:00.086031+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:01.086463+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:02.087082+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:03.087480+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:04.087835+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:05.088166+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:06.088725+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:07.089081+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:08.089446+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:09.089874+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:10.090286+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:11.090818+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:12.091444+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:13.091796+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:14.092291+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:15.092731+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:16.093052+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:17.093429+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:18.093800+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:19.094203+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:20.094690+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:21.095038+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:22.095484+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:23.095825+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:24.096223+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:25.096692+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:26.097105+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:27.097617+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:28.097977+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:29.098316+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:30.098801+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:31.099212+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:32.099604+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:33.099904+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:34.100217+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:35.100465+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:36.100709+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:37.101080+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:38.101454+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:39.102004+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:40.102991+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:41.103357+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:42.103636+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:43.104075+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:44.104420+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:45.104749+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:46.105092+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:47.105462+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:48.105816+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:49.106221+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:50.106447+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:51.106832+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:52.107253+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:53.107521+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:54.107962+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:55.108315+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:56.108769+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:57.109114+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:58.109643+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:59.109987+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:00.110345+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:01.110820+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:02.111264+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:03.111733+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:04.112092+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:05.112324+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:06.112795+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:07.113197+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:08.113433+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:09.113767+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:10.116948+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:11.117390+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:12.117791+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:13.118198+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:14.118640+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.285675049s of 85.578956604s, submitted: 49
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:15.119031+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49e8/0x98f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:16.119466+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 38486016 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 134 ms_handle_reset con 0x558b83c46c00 session 0x558b85fc3860
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:17.119826+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92233728 unmapped: 38453248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:18.120256+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 38936576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86376960
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:19.120739+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:20.121026+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:21.121426+0000)
Dec 03 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:13 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:13 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:22.121841+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:23.122190+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:24.122429+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:25.122797+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:26.123112+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:27.123634+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:28.123979+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:29.124309+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:30.124760+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:31.125160+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:32.125835+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:33.126196+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:34.126685+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:35.127028+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:36.127407+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:37.127860+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:38.128184+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:39.128576+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:40.129052+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:41.129457+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:42.129891+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:43.130267+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:44.130675+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:45.131021+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:46.131684+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:47.131920+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:48.132623+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:49.133083+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:50.133385+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:51.133765+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:52.134172+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:53.134367+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:54.134732+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:55.135111+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:56.136084+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:57.136681+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:58.136994+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:59.137423+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:00.137876+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:01.138401+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:02.139094+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:03.139502+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:04.140001+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:05.140508+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b843b6000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b84927860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8311e3c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8625a960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:06.140971+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:07.141336+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85278000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:08.141794+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862841e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b843b6000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 53.474491119s of 53.753597260s, submitted: 32
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b86284f00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b852443c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:09.142211+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85244960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862fa780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b849b8f00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166167 data_alloc: 218103808 data_used: 4714496
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 34283520 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:10.142711+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86376d20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86377e00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8625a960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b84927860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:11.143116+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:12.143714+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:13.144057+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a92800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b85fc3c20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:14.144588+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175957 data_alloc: 218103808 data_used: 4714496
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b85fc23c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:15.144842+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634be00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbdc20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b83a94f00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624b0e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b84a2cd20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b862841e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:16.145264+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86284f00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:17.145625+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c48000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9592000/0x0/0x4ffc00000, data 0x20081dd/0x20dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b86376000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96509952 unmapped: 34177024 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:18.146014+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec9c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec9c00 session 0x558b86376b40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 34193408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:19.146424+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b863763c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.407323837s of 10.792876244s, submitted: 47
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231863 data_alloc: 218103808 data_used: 4722688
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b862fbe00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b86261e00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b844090e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138000 session 0x558b844225a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b8625b4a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849b92c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:20.146852+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec8c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:21.147203+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x229020f/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:22.147710+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:23.147929+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634b860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 33857536 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:24.148153+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262169 data_alloc: 218103808 data_used: 8060928
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:25.148477+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:26.148651+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:27.149044+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b862faf00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b862fab40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:28.152940+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97951744 unmapped: 32735232 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:29.153175+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83a73000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83ca2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b83ca43c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291687 data_alloc: 234881024 data_used: 12001280
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:30.153379+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 32292864 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:31.153563+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:32.153877+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:33.154164+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9306000/0x0/0x4ffc00000, data 0x2290242/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:34.154601+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317927 data_alloc: 234881024 data_used: 15679488
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:35.154956+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:36.155473+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:37.155664+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624a780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b862fb0e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 29679616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:38.155913+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.547910690s of 18.724098206s, submitted: 24
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b83dbb0e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 28319744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:39.156090+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318572 data_alloc: 234881024 data_used: 17334272
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:40.156378+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:41.156697+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:42.157076+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:43.157502+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634a5a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:44.157818+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b867461e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:45.158010+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:46.158195+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:47.158390+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:48.158588+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:49.158796+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:50.159001+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:51.159204+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:52.159462+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:53.159676+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:54.159870+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:55.160085+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:56.160295+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:57.160501+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.113079071s of 19.223537445s, submitted: 17
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 21757952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:58.160664+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f85a5000/0x0/0x4ffc00000, data 0x2ff4200/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 21749760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:59.160823+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8546000/0x0/0x4ffc00000, data 0x3053200/0x3128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415883 data_alloc: 234881024 data_used: 16392192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:00.161245+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:01.161507+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:02.161973+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:03.162378+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:04.162903+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b867465a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b86746780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86746960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b86746b40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7c5f000/0x0/0x4ffc00000, data 0x393a200/0x3a0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522536 data_alloc: 234881024 data_used: 16547840
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b86746d20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b86747a40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b84410d20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:05.163240+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 20561920 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b84408b40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b83c53680
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:06.163428+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:07.163736+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:08.164108+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:09.164511+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530322 data_alloc: 234881024 data_used: 16613376
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:10.164779+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:11.165186+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:12.165522+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b83c52960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.731391907s of 15.542462349s, submitted: 209
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:13.165844+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:14.166173+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530454 data_alloc: 234881024 data_used: 16613376
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:15.166859+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:16.167052+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:17.167465+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:18.168178+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:19.168643+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528914 data_alloc: 234881024 data_used: 16613376
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:20.169059+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:21.169498+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:22.169991+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:23.170286+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 19881984 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:24.170496+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 18202624 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:25.170716+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:26.170945+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:27.171171+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:28.171387+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:29.171673+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:30.171914+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.591274261s of 17.624994278s, submitted: 3
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:31.172193+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:32.172432+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:33.172678+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:34.172914+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:35.173305+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:36.173652+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:37.173975+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:38.174408+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:39.174834+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 16539648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:40.175224+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.202155113s of 10.243181229s, submitted: 7
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:41.175671+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:42.176117+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:43.176718+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:44.177020+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:45.177403+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567254 data_alloc: 234881024 data_used: 21929984
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:46.177789+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 16515072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86285680
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b862852c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:47.177999+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849272c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b86377e00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624be00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:48.178227+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8521f680
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:49.178445+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8ebc000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:50.178638+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336665 data_alloc: 234881024 data_used: 13844480
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:51.178968+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:52.179406+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:53.179669+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:54.180169+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:55.180618+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336445 data_alloc: 234881024 data_used: 13844480
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:56.181007+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.562313080s of 15.728222847s, submitted: 35
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 19046400 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:57.181166+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113491968 unmapped: 17195008 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:58.181438+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2af716b/0x2bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 16687104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:59.181629+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:00.181951+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:01.182282+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:02.182743+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:03.183153+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:04.183459+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:05.183752+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:06.184015+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:07.184322+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:08.184516+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:09.184846+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:10.185065+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386715 data_alloc: 234881024 data_used: 14835712
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.230500221s of 14.482069969s, submitted: 64
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:11.185417+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:12.185820+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:13.186042+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:14.186206+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:15.186490+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:16.186894+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:17.187245+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:18.187630+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:19.187861+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:20.188281+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:21.188690+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85e1c800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85e1c800 session 0x558b8624af00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a5a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b85bbc3c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:22.188932+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbcf00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.310770035s of 11.319570541s, submitted: 1
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b85bbda40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b85de6000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634b4a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b867461e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b867465a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:23.189140+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:24.189336+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b86746960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fa780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fb0e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624b680
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:25.189510+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8624a3c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497051 data_alloc: 234881024 data_used: 14835712
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b8624bc20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624a000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18735104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b83ca3a40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:26.189796+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 18677760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:27.190009+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112025600 unmapped: 18661376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:28.190320+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 18612224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b83ca2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:29.190757+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:30.191079+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502491 data_alloc: 234881024 data_used: 15368192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:31.191669+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:32.192023+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:33.192483+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.251416206s of 11.426207542s, submitted: 28
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 18563072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2400 session 0x558b862abe00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:34.192725+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b861f5c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:35.193284+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504272 data_alloc: 234881024 data_used: 15368192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:36.193797+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:37.194290+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:38.194738+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:39.195079+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 18366464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:40.195305+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521392 data_alloc: 234881024 data_used: 17547264
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 15892480 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:41.195759+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 15409152 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:42.196487+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:43.197039+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:44.197258+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.764178276s of 10.793314934s, submitted: 4
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:45.197805+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1552128 data_alloc: 234881024 data_used: 21549056
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:46.198050+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:47.198427+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 13492224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:48.198687+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 12271616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:49.199050+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b861f5c00 session 0x558b8311e1e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b849774a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c46c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:50.199244+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86285c20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:51.199468+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:52.199793+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f833a000/0x0/0x4ffc00000, data 0x326017b/0x3333000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:53.200055+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:54.200430+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:55.200762+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:56.201000+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b849b9860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.645172119s of 11.731092453s, submitted: 27
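
Note: _kv_sync_thread utilization is a plain duty-cycle report: over the preceding ~11.73 s window the RocksDB sync thread was idle for ~11.65 s, i.e. busy only about 0.7% of the time while committing 27 transactions. The later samples in this excerpt (9.63 s idle of 10.31 s with 151 submitted; 31.47 of 31.81 with 56) show the same lightly loaded pattern. The arithmetic, using the numbers as printed:

    # Busy fraction and commit rate from the utilization line above.
    idle, total, submitted = 11.645172119, 11.731092453, 27
    busy_pct = 100 * (1 - idle / total)   # ~0.73% busy
    tx_per_s = submitted / total          # ~2.3 commits/s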
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de3a40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 18604032 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:57.201355+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85fc25a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:58.201707+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:59.201941+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:00.202301+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:01.202588+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:02.203032+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:03.203332+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:04.203685+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:05.204058+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:06.204361+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:07.204780+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:08.205078+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:09.207215+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:10.207663+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:11.207933+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:12.208398+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.237525940s of 16.362010956s, submitted: 31
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8eab000/0x0/0x4ffc00000, data 0x26f1158/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,2])
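
Note: the trailing op hist is, as I read Ceph's osd_stat_t printer, the power-of-two histogram of op queue ages: an empty [] means nothing was queued when the stat was sampled, while [0,0,0,2] on this line records two briefly queued ops (and [1] later records one). A hedged one-liner for totalling it:

    # Hypothetical: total queued ops from the "op hist [...]" suffix.
    import re

    def queued_ops(line):
        m = re.search(r"op hist \[([\d,]*)\]", line)
        buckets = [int(x) for x in m.group(1).split(",") if x] if m else []
        return sum(buckets)   # [] -> 0, [0,0,0,2] -> 2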
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 11460608 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:13.208874+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:14.209093+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 11444224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f850d000/0x0/0x4ffc00000, data 0x3081158/0x3153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:15.209434+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 11288576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484722 data_alloc: 234881024 data_used: 17747968
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:16.209627+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:17.210040+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:18.210299+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:19.210717+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:20.210937+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b84974000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea3400 session 0x558b863774a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478306 data_alloc: 234881024 data_used: 17756160
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:21.211256+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114073600 unmapped: 16613376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88138400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f845f000/0x0/0x4ffc00000, data 0x313d158/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b8521e960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:22.211742+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 33947648 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.629374504s of 10.309167862s, submitted: 151
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:23.211943+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
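
Note: this sequence is a routine osdmap catch-up. _renew_subs re-subscribes to map updates, the message goes to the monitor at v2:192.168.122.100:3300/0 (the msgr2 port), and handle_osd_map "epochs [135,136], i have 135, src has [1,136]" says the OSD received incremental maps up to epoch 136 while still holding 135; the next handle_osd_map line confirms it now has 136. The same exchange repeats below for epochs 137 and 138. A small sanity check that the epoch only moves forward, with a regex of my own:

    # Hypothetical checker that the OSD's map epoch is monotonic in a log stream.
    import re

    EPOCH_RE = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")

    def last_epoch(lines):
        have = None
        for line in lines:
            m = EPOCH_RE.search(line)
            if not m:
                continue
            first, last, cur = map(int, m.groups())
            assert have is None or cur >= have, "epoch went backwards"
            have = max(cur, last)   # after applying the batch the OSD holds `last`
        return have                 # 138 by the end of this excerpt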
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b83d11680
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88138400 session 0x558b84410d20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b84927e00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8634a780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b84926b40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b85244960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b852785a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:24.212131+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b83c52780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8624bc20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624a000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624b4a0
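
Note: the burst of handle_auth_request / ms_handle_reset pairs above is heartbeat peers re-establishing connections after the epoch change: each incoming connection gets a cephx challenge, and the superseded session on the same connection pointer is then torn down, so the pairs line up per pointer (0x558b85ea2000, 0x558b88139000, ...). A sketch that tallies the churn; the helper and counters are mine:

    # Hypothetical tally of auth challenges vs. session resets per connection.
    import re
    from collections import Counter

    def connection_churn(lines):
        challenges, resets = Counter(), Counter()
        for line in lines:
            if (m := re.search(r"added challenge on (0x[0-9a-f]+)", line)):
                challenges[m.group(1)] += 1
            if (m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)):
                resets[m.group(1)] += 1
        return challenges, resets   # roughly 1:1 per pointer in this excerpt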
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8c1c000/0x0/0x4ffc00000, data 0x297b13e/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:25.212335+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 33832960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377459 data_alloc: 234881024 data_used: 11022336
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:26.212709+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 33841152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139800 session 0x558b849274a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b862fa960
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85a8bc00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b8624af00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:27.213133+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85a8bc00 session 0x558b849774a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:28.213457+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:29.213785+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98b9/0x20ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:30.214105+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253805 data_alloc: 218103808 data_used: 4730880
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:31.214437+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b85de61e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:32.214791+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:33.215032+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:34.215384+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:35.215751+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269378 data_alloc: 218103808 data_used: 6901760
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b86284d20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:36.216144+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:37.216415+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.745358467s of 14.380681038s, submitted: 113
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b83d4a1e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea3400 session 0x558b84a2cf00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2400 session 0x558b84927e00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b844101e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b84408000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:38.216753+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:39.217185+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:40.217641+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294828 data_alloc: 218103808 data_used: 8527872
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:41.217931+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3a1/0x2135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:42.218197+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b8521ed20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec6c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:43.218664+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:44.219010+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:45.219265+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:46.219569+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:47.219813+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:48.220089+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:49.220369+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:50.220676+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:51.221007+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:52.221342+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:53.221774+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:54.222146+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:55.222357+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:56.222605+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:57.222939+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:58.223343+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:59.223791+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:00.224124+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:01.224506+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:02.224977+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:03.225369+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:04.225631+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:05.225968+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:06.226337+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:07.226735+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:08.227093+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.471420288s of 31.809257507s, submitted: 56
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:09.227460+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 32382976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fdd000/0x0/0x4ffc00000, data 0x25b83c4/0x2691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:10.227682+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 32808960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359190 data_alloc: 234881024 data_used: 9846784
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:11.227905+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 32759808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:12.228255+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:13.228984+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:14.229433+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:15.229807+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:16.230185+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369410 data_alloc: 234881024 data_used: 9990144
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:17.230670+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:18.231065+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 32505856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:19.231883+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:20.232266+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b19000/0x0/0x4ffc00000, data 0x2a7c3c4/0x2b55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.458605766s of 11.824682236s, submitted: 72
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:21.232797+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404216 data_alloc: 234881024 data_used: 10186752
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:22.233352+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:23.233708+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:24.233949+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:25.234312+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 8914 writes, 35K keys, 8914 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8914 writes, 2261 syncs, 3.94 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1912 writes, 7094 keys, 1912 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s
                                            Interval WAL: 1912 writes, 777 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:26.234728+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:27.235016+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:28.235308+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:29.235695+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:30.236176+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:31.236469+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:32.236936+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 32964608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:33.237336+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: mgrc ms_handle_reset ms_handle_reset con 0x558b85ea3000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec 03 02:45:14 compute-0 ceph-osd[208731]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: get_auth_request con 0x558b85ea2400 auth_method 0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: mgrc handle_mgr_configure stats_period=5
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:34.237508+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:35.237690+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:36.237917+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:37.238843+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:38.239208+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:39.240106+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.394577026s of 18.451017380s, submitted: 14
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:40.240941+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:41.241449+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411668 data_alloc: 234881024 data_used: 10194944
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:42.241903+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:43.242351+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:44.242812+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:45.243128+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:46.243463+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411304 data_alloc: 234881024 data_used: 10215424
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b85de74a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:47.244140+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b862852c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:48.244698+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:49.245007+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:50.245517+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:51.245990+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:52.246404+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:53.246853+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:54.247190+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:55.247638+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:56.248047+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:57.248410+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:58.248778+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:59.249061+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:00.249424+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:01.249848+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:02.250255+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:03.250701+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:04.251057+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:05.251443+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:06.251793+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:07.252074+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:08.252929+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:09.253282+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:10.253683+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:11.253982+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:12.254250+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:13.254639+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:14.254949+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:15.255276+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:16.255707+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:17.256009+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:18.256434+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:19.256748+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:20.257388+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:21.257740+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:22.257990+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:23.258502+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:24.258931+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:25.259364+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:26.259772+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:27.260163+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:28.260887+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:29.261280+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:30.261706+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:31.262124+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:32.262924+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:33.263348+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:34.263744+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:35.264028+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:36.264426+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:37.264857+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:38.265204+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:39.265609+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:40.266016+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:41.266319+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:42.266620+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b8634af00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b84a6f800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:43.266862+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:44.267148+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:45.267658+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:46.268049+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:47.268299+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:48.268928+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:49.269522+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 69.777755737s of 69.905036926s, submitted: 21
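This utilization line is the clearest load signal in the stretch: the BlueStore KV sync thread was idle for 69.78 of the last 69.91 seconds while committing 21 transactions, so the OSD is doing almost nothing. The arithmetic:

    # Busy fraction and mean commit spacing, straight from the line above.
    idle, elapsed, submitted = 69.777755737, 69.905036926, 21
    busy = elapsed - idle
    print(f"busy {busy:.3f}s ({100 * busy / elapsed:.2f}% of the interval), "
          f"~{elapsed / submitted:.1f}s between commits")
    # -> busy 0.127s (0.18% of the interval), ~3.3s between commits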
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:50.269787+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 33890304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:51.270194+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 33865728 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:52.270495+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113647616 unmapped: 33824768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:53.270828+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 33783808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:54.271186+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:55.271906+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:56.272309+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:57.272777+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:58.273098+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:59.273507+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:00.273887+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:01.274308+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:02.274630+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:03.275015+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:04.275294+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:05.275742+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:06.276128+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:07.276509+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:08.276855+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:09.277175+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:10.277453+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:11.277866+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:12.278172+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:13.278517+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:14.279018+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:15.279483+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:16.279827+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:17.280053+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:18.280481+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:19.280894+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:20.281204+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:21.281750+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:22.282170+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:23.282473+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:24.282903+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:25.283263+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:26.283642+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:27.283990+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:28.284308+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:29.284683+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:30.284987+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:31.285225+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:32.285651+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:33.286025+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:34.286244+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:35.286642+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:36.287011+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:37.287375+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:38.287729+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:39.288063+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:40.288413+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:41.288746+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:42.289131+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 33742848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:43.289466+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86746780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b86284b40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b849774a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 33734656 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86747a40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.146961212s of 54.832401276s, submitted: 108
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:44.289777+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b84a2cf00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b83df8b40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85fc2b40
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b86285860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b8521e960
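The handle_auth_request / ms_handle_reset pairs in this burst read as short-lived peer connections: the OSD answers a cephx challenge on a connection, the peer finishes and disconnects, and the messenger surfaces ms_handle_reset for that session. Connection pointers such as 0x558b83c47000 recur, so matching challenge to reset by address is only a log-order heuristic; a rough Python sketch (journal text on stdin):

    # Heuristic pairing of auth challenges with the resets that follow on the
    # same connection pointer; pointer reuse makes this approximate.
    import collections
    import re
    import sys

    pending = collections.defaultdict(list)  # con pointer -> line numbers
    for n, line in enumerate(sys.stdin, 1):
        if m := re.search(r"handle_auth_request added challenge on (0x[0-9a-f]+)", line):
            pending[m.group(1)].append(n)
        elif m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line):
            if pending[m.group(1)]:
                print(f"con {m.group(1)}: challenged line "
                      f"{pending[m.group(1)].pop(0)}, reset line {n}")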
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:45.290166+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:46.290662+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:47.291043+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
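Relative to every earlier heartbeat in this window, the statfs fields have finally moved (the change first appears just after the auth burst above, alongside the _kv_sync_thread line reporting 108 submitted transactions versus 21 in the previous interval). The deltas line up exactly:

    # Hex arithmetic on the statfs fields before and after the change.
    old_avail, new_avail = 0x4f931d000, 0x4f891d000
    old_data, new_data = 0x227a391, 0x2c7a391
    print(f"available -{(old_avail - new_avail) / 2**20:.0f} MiB, "
          f"data stored +{(new_data - old_data) / 2**20:.0f} MiB")
    # -> available -10 MiB, data stored +10 MiB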
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:48.291597+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:49.292035+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:50.292304+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139800
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b83d4a1e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:51.292743+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b85278780
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:52.293009+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85279e00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b84409860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:53.293510+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b879ce000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:54.293813+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:55.294059+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:56.294312+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 33095680 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:57.294591+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446880 data_alloc: 234881024 data_used: 15056896
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 29958144 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:58.294740+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:59.295096+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:00.295321+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:01.295641+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:02.295882+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:03.296136+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:04.296341+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:05.296574+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:06.296999+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:07.297252+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:08.297687+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:09.298061+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:10.298400+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:11.298827+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:12.299259+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:13.299713+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 29736960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:14.300097+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:15.300305+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:16.300787+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:17.301195+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:18.301514+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:19.301904+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:20.302307+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:21.302520+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:22.302946+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:23.303155+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:24.303500+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:25.303867+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:26.304290+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:27.304723+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:28.304998+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.778945923s of 44.865009308s, submitted: 6
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 26681344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:29.305445+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8588000/0x0/0x4ffc00000, data 0x300e3a1/0x30e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:30.305675+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:31.305922+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:32.306344+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:33.306725+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:34.307201+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:35.307666+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:36.308048+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 28499968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:37.308396+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:38.308751+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:39.309152+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:40.309521+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:41.309936+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:42.310441+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:43.310706+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:44.311058+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:45.311386+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:46.311728+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:47.312119+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:48.312462+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:49.312830+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:50.313069+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:51.313250+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:52.313752+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:53.314038+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:54.314402+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:55.314641+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:56.314894+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:57.315238+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:58.315521+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:59.315935+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:00.316136+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:01.316409+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:02.316790+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:03.317164+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:04.317607+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:05.317887+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:06.318146+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:07.318687+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:08.318946+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:09.319178+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:10.319621+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:11.319885+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:12.320264+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:13.320597+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:14.320905+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:15.321097+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:16.321405+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:17.321639+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:18.321947+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:19.322346+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:20.322729+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:21.323074+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:22.323457+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:23.323769+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:24.324015+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:25.324355+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:26.324844+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:27.325175+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:28.325504+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:29.325868+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:30.326250+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:31.326612+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:32.327040+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:33.327409+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
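
Each heartbeat line embeds a store_statfs tuple whose first hex triple is read here as available/internally_reserved/total; that field order is an assumption inferred from the magnitudes (0x4ffc00000 is just under 20 GiB, which fits a small test OSD), not something the log states. A throwaway decoder under that assumption:

    # Values copied from the store_statfs(...) triple in the heartbeat above.
    avail, internal, total = 0x4F857D000, 0x0, 0x4FFC00000

    used = total - avail - internal
    print(f"total:     {total / 2**30:6.2f} GiB")  # ~20.00 GiB
    print(f"available: {avail / 2**30:6.2f} GiB")  # ~19.88 GiB
    print(f"used:      {used / 2**20:6.1f} MiB ({used / total:.2%})")  # ~118.6 MiB, 0.58%

The data 0x30193a1/0x30f1000 pair (read here as stored over allocated bytes, again an assumption) and the unchanging omap/meta figures tell the same story: the OSD is nearly empty and essentially idle in this window.
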
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:34.327654+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:35.328092+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:36.328593+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:37.329258+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
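
The _resize_shards lines show how the roughly 2.65 GiB cache budget (cache_size 2845415832) is carved up while almost nothing is resident (kv_used is 2144 bytes). The per-bucket shares fall out directly from the *_alloc fields; the labels below just mirror the log text:

    cache_size = 2845415832
    alloc = {  # *_alloc fields from the _resize_shards line above
        "kv":       1207959552,
        "kv_onode":  234881024,
        "meta":     1140850688,
        "data":      234881024,
    }
    for name, nbytes in alloc.items():
        print(f"{name:9s} {nbytes / 2**20:5.0f} MiB  {nbytes / cache_size:5.1%}")
    print(f"assigned: {sum(alloc.values()) / cache_size:.1%} of cache_size")
    # kv 1152 MiB (42.5%), kv_onode 224 MiB (8.3%), meta 1088 MiB (40.1%),
    # data 224 MiB (8.3%); ~99.1% of the budget is assigned.
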
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:38.329644+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:39.329955+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:40.330361+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:41.330848+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:42.331364+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:43.331679+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:44.332058+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:45.332405+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:46.332842+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:47.333231+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:48.333515+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:49.333913+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:50.334256+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:51.334645+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:52.334836+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:53.335045+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:54.335183+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:55.335369+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:56.335596+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:57.335963+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:58.336152+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:59.336423+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:00.336776+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:01.337813+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:02.338276+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:03.339055+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:04.339617+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:05.340112+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:06.340715+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:07.341397+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:08.341874+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:09.342125+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:10.342622+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:11.343145+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:12.343675+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:13.344657+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:14.344838+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:15.346793+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:16.347430+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:17.348618+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:18.349010+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:19.349737+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:20.349910+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:21.350309+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 112.571006775s of 112.683082581s, submitted: 22
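
This _kv_sync_thread line is the only throughput signal in the window: the kv sync thread was idle for 112.571 of 112.683 seconds and submitted 22 transaction batches (a later line reports the same pattern at smaller scale: idle 35.94s of 35.96s, submitted: 3). The arithmetic:

    idle, total, submitted = 112.571006775, 112.683082581, 22

    busy = total - idle
    print(f"busy: {busy:.3f}s of {total:.3f}s ({busy / total:.3%})")  # ~0.099%
    print(f"mean gap between submits: {total / submitted:.2f}s")      # ~5.12 s

A duty cycle under 0.1% is consistent with the rest of the window: nothing but heartbeats, cache tuning, and auth ticks.
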
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:22.350574+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:23.350996+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:24.351306+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:25.351598+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:26.352018+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:27.352381+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:28.352842+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:29.353656+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:30.353982+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:31.354698+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:32.354997+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:33.355613+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:34.355887+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:35.357140+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:36.357456+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:37.358699+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:38.359036+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:39.360057+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:40.360423+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:41.361577+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:42.361834+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:43.363870+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:44.366451+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 28418048 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:45.366738+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:46.367122+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:47.367666+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:48.367962+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:49.368189+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:50.368459+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:51.369962+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
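
This is the one line in the stretch that names the monitor endpoint, in msgr2 form: v2:192.168.122.100:3300/0 (3300 is the conventional msgr2 port; the trailing /0 appears to be a nonce). A minimal parser for that address shape, written from the string format alone and not from any Ceph library:

    addr = "v2:192.168.122.100:3300/0"

    proto, rest = addr.split(":", 1)        # "v2"
    hostport, nonce = rest.rsplit("/", 1)   # nonce "0"
    host, port = hostport.rsplit(":", 1)
    print(proto, host, int(port), int(nonce))  # v2 192.168.122.100 3300 0
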
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:52.370280+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:53.370515+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:54.370761+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:55.371807+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:56.372147+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:57.373144+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.938446045s of 35.963088989s, submitted: 3
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:58.373430+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:59.374265+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:00.374712+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:01.375627+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:02.376112+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:03.376614+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:04.376857+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:05.377455+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:06.377775+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:07.378829+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:08.379215+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:09.380198+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:10.381118+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:11.382891+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:12.383293+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:13.384254+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:14.384717+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:15.385000+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:16.385308+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:17.387169+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:18.387641+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:19.387839+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:20.388060+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:21.388278+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:22.388685+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:23.388975+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:24.389347+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:25.389745+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:26.390044+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:27.390416+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:28.390614+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:29.391746+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:30.392009+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:31.393268+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:32.393558+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:33.393949+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:34.394115+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:35.397240+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:36.398120+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:37.398603+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:38.398956+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:39.399375+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:40.399585+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:41.400586+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:42.400931+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:43.401295+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:44.401716+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:45.402688+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:46.403102+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:47.403370+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:48.403670+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:49.404320+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:50.404741+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:51.405857+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:52.406251+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:53.406853+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:54.407224+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:55.409396+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:56.409750+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:57.410036+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:58.410325+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:59.410741+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:00.410958+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:01.411173+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:02.411591+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:03.411836+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:04.413245+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:05.413645+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:06.413976+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:07.414190+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:08.414684+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:09.415610+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:10.416508+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:11.416926+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:12.417393+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:13.417680+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:14.418005+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:15.419129+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:16.419622+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:17.420260+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:18.420680+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:19.420912+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:20.421277+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:21.422725+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:22.423127+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:23.423348+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:24.423585+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:25.424403+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:26.424727+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:27.425863+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:28.426111+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:29.427218+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:30.427581+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:31.427878+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:32.428294+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:33.428741+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:34.429126+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:35.430288+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:36.430709+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:37.431732+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:38.431964+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:39.432476+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:40.432716+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:41.433406+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:42.433657+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:43.433919+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:44.434297+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:45.434451+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:46.437358+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:47.437575+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:48.437850+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:49.438412+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:50.438630+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:51.438790+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:52.439201+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:53.439393+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:54.439614+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:55.439997+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:56.440263+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:57.441244+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:58.441441+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:59.441946+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:00.442298+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:01.442746+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:02.443224+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:03.443672+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:04.443968+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:05.444366+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:06.444673+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:07.445024+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:08.445371+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:09.445639+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:10.445890+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:11.446202+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:12.446664+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:13.446910+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:14.447380+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:15.447624+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:16.448173+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:17.448435+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:18.448698+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:19.449035+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:20.449522+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:21.449942+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:22.450278+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:23.450745+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:24.451063+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:25.451343+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:26.451820+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:27.452022+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:28.452248+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:29.452752+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:30.453081+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:31.453301+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:32.453655+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:33.453989+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:34.454347+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:35.454814+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:36.455220+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:37.455682+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:38.456045+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:39.456479+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:40.456841+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:41.457270+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:42.457788+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:43.458257+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:44.458663+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:45.458994+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:46.459352+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:47.459787+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:48.460089+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:49.460420+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:50.460798+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:51.461164+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:52.461655+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:53.461999+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:54.462219+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:55.462619+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:56.462941+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:57.463171+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:58.463504+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 28311552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:59.463706+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:00.464000+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:01.464205+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:02.464646+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:03.464983+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:04.465261+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:05.465756+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:06.466014+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 189.647811890s of 189.655746460s, submitted: 1
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:07.466413+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:08.466821+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:09.467180+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:10.467756+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:11.468163+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:12.468652+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:13.468921+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:14.469353+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:15.469800+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:16.470138+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:17.470479+0000)
Dec 03 02:45:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:18.470892+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:19.471266+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:20.471768+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:21.472186+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:22.472670+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:23.473100+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:24.473485+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:25.473909+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:26.474298+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:27.474809+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:28.475226+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:29.475661+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:30.476038+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:31.476460+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:32.477000+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:33.477367+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:34.477803+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:35.478161+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:36.478871+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:37.479234+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:38.479705+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:39.480118+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:40.480703+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:41.481069+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:42.481520+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:43.481891+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:44.482356+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:45.482734+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:46.482982+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:47.483387+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:48.483715+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:49.484093+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:50.484478+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:51.484885+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:52.485360+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:53.485814+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:54.486212+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:55.486618+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:56.486935+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:57.487320+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:58.487579+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:59.487993+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:00.488319+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:01.488622+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:02.488923+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:03.489201+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:04.489518+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:05.490005+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:06.490318+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:07.490789+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:08.491215+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:09.491460+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:10.491775+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:11.492020+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:12.492398+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:13.492766+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:14.493083+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:15.493418+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:16.493761+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:17.494071+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:18.494410+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:19.494853+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:20.495214+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:21.495729+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:22.496091+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:23.496996+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:24.497509+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:25.498083+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9225 writes, 35K keys, 9225 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9225 writes, 2410 syncs, 3.83 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 311 writes, 768 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                            Interval WAL: 311 writes, 149 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:26.498449+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:27.498871+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:28.499175+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:29.499518+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:30.499948+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
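These paired commit_cache_size lines fire whenever the tuner re-commits block-cache sizing for two separate caches, and the ratios are conspicuously round fractions: 0.285714 is 2/7 and 0.0555556 is 1/18, the share of each cache reserved for its high-priority pool. The message does not say which cache is which, so that mapping stays open; recovering the fractions themselves is easy:

    from fractions import Fraction

    for ratio in (0.285714, 0.0555556):
        # limit_denominator recovers the simple fraction behind the
        # rounded decimal the log prints.
        print(ratio, "~", Fraction(ratio).limit_denominator(100))
    # 0.285714 ~ 2/7
    # 0.0555556 ~ 1/18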
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
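_resize_shards shows where the 2845415832-byte budget from tune_memory actually lands: about 1.125 GiB for the kv (RocksDB) cache, 224 MiB for onodes, 1088 MiB for BlueStore metadata and 224 MiB for data, with the *_used counters confirming almost nothing is resident on this idle OSD. The four allocations sum to just under the budget:

    cache_size = 2845415832
    allocs = {
        "kv":       1207959552,  # RocksDB block cache, 1.125 GiB
        "kv_onode": 234881024,   # 224 MiB
        "meta":     1140850688,  # 1088 MiB
        "data":     234881024,   # 224 MiB
    }

    total = sum(allocs.values())
    assert total <= cache_size
    print(f"{total} of {cache_size} allocated ({total / cache_size:.1%})")
    # 2818572288 of 2845415832 allocated (99.1%)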
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:31.500313+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:32.500762+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:33.501070+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:34.501407+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:35.501756+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:36.502742+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:37.503032+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:38.503232+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:39.503676+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:40.504016+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:41.504387+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:42.504976+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:43.505405+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:44.505910+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:45.506401+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:46.506668+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:47.507035+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:48.507448+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:49.507764+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:50.508137+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:51.508480+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:52.509095+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:53.509457+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
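This lone _send_mon_message line breaks the idle pattern: the OSD pushes a message to mon.compute-0 over the msgr2 port, v2:192.168.122.100:3300/0. The address reads as type:ip:port/nonce; a trivial parser for that shape (the field labels are mine, not a Ceph API, and IPv4-only for brevity):

    def parse_entity_addr(addr: str):
        # "v2:192.168.122.100:3300/0" -> (type, ip, port, nonce)
        msgr_type, ip, rest = addr.split(":", 2)
        port, nonce = rest.split("/")
        return msgr_type, ip, int(port), int(nonce)

    print(parse_entity_addr("v2:192.168.122.100:3300/0"))
    # ('v2', '192.168.122.100', 3300, 0)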
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:54.509803+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:55.510168+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:56.510498+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:57.510891+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:58.511279+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:59.511772+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:00.512060+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:01.512360+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:02.512851+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:03.513149+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:04.513486+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:05.513871+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:06.514246+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:07.514727+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:08.515118+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:09.515428+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:10.515777+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:11.516087+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:12.516701+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:13.517096+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:14.517455+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:15.517745+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:16.518013+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:17.518347+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:18.518857+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:19.519177+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:20.519507+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:21.519825+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 28196864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:22.520161+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:23.520479+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:24.520885+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:25.521204+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:26.521660+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:27.522028+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:28.522411+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:29.522748+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:30.523029+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:31.523511+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:32.524128+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:33.524485+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:34.524766+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:35.525089+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:36.525516+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:37.525920+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:38.526247+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:39.526638+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:40.526921+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:41.527270+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:42.527622+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:43.528035+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:44.528388+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:45.528704+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:46.529046+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:47.529765+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:48.530153+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:49.531152+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:50.531465+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 163.277404785s of 163.285751343s, submitted: 1
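The _kv_sync_thread line is the clearest idleness measurement in this window: idle for 163.277404785 s of a 163.285751343 s period with a single submitted transaction, i.e. roughly 8 ms of work, about 0.005% utilization. Recomputing from the logged figures:

    idle, period = 163.277404785, 163.285751343

    busy = period - idle
    print(f"busy {busy * 1000:.1f} ms, utilization {busy / period:.4%}")
    # busy 8.3 ms, utilization 0.0051%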
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119308288 unmapped: 28164096 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:51.531810+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 28147712 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:52.532231+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
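From this heartbeat onward the first statfs field ticks from 0x4f8579000 up to 0x4f857b000, an 8 KiB shift in that counter (more free space, under the field-order assumption above) and the only movement in store_statfs across the whole window:

    before, after = 0x4F8579000, 0x4F857B000
    print(hex(after - before), after - before)  # 0x2000 8192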
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 28123136 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:53.532825+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 28057600 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:54.533140+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:55.533503+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:56.534005+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:57.534292+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:58.534773+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:59.535185+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:00.535492+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:01.535966+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:02.536268+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:03.536607+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:04.536929+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:05.537331+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:06.537782+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:07.538123+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:08.538454+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:09.538776+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:10.539164+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:11.539675+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:12.540174+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:13.540520+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:14.541223+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:15.541727+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:16.542118+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:17.542619+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:18.542910+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:19.543323+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:20.543804+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:21.544225+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:22.544682+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:23.544927+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:24.545246+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:25.545672+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:26.546043+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:27.546664+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:28.547075+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:29.547840+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:30.548318+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:31.548724+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:32.549130+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:33.549336+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:34.549810+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:35.550279+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:36.550718+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:37.551183+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:38.551833+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:39.552238+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:40.552446+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.532619476s of 50.152313232s, submitted: 90
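The _kv_sync_thread line reports, per sampling window, how long the BlueStore key/value sync thread sat idle and how many transaction batches it submitted; the useful derived numbers are the busy fraction and the commit rate. Using the figures from the line above:

    idle, window, submitted = 49.532619476, 50.152313232, 90
    busy = window - idle
    print(f"busy {busy:.2f}s of {window:.2f}s ({100 * busy / window:.2f}% busy), "
          f"{submitted / window:.2f} submits/s")
    # -> busy 0.62s of 50.15s (1.24% busy), 1.79 submits/s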
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139c00 session 0x558b83ca5c20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ec6c00 session 0x558b849b8000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
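The ms_handle_reset and handle_auth_request lines are connection churn: a peer authenticates (a cephx challenge is added on a connection pointer) and the connection is later torn down, with matching con pointers tying the two events together, as with 0x558b83c47000 below. A grep-style sketch that tallies these events per connection; the log path is hypothetical:

    import re
    from collections import Counter

    events = Counter()
    with open("osd.2.log") as fh:  # hypothetical path to a capture of this log
        for line in fh:
            m = re.search(r"(handle_auth_request|ms_handle_reset).*?(0x[0-9a-f]+)", line)
            if m:
                events[(m.group(2), m.group(1))] += 1
    for (con, kind), n in sorted(events.items()):
        print(con, kind, n)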
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:41.552785+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 28000256 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501242 data_alloc: 234881024 data_used: 18178048
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86260f00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:42.553180+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:43.553996+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:44.554417+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:45.554794+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:46.555150+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:47.555781+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:48.556153+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:49.556521+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:50.556981+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86284000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b879ce000 session 0x558b862aa5a0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:51.557217+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea2000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970065117s of 11.374399185s, submitted: 57
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:52.557670+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b844112c0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:53.558045+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:54.558725+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:55.559114+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:56.559677+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:57.560104+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:58.560656+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:59.561064+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:00.561410+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:01.561969+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:02.562433+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b83c47000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.220892906s of 11.268563271s, submitted: 8
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:03.562820+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
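The _renew_subs / _send_mon_message / handle_osd_map sequence above is the OSD refreshing its map subscription with mon.compute-0 and receiving osdmap epoch 139 while holding 138; the first bracketed range is the incrementals delivered, the second the span the source holds. A line-parsing sketch for the epoch bookkeeping:

    import re

    line = "osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]"
    m = re.match(r"osd\.(\d+) (\d+) handle_osd_map epochs \[(\d+),(\d+)\], "
                 r"i have (\d+), src has \[(\d+),(\d+)\]", line)
    osd, cur, first, last, have, src_lo, src_hi = map(int, m.groups())
    print(f"osd.{osd} at epoch {have}, applying incrementals {first}..{last} "
          f"(mon holds {src_lo}..{src_hi})")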
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 34070528 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 139 ms_handle_reset con 0x558b83c47000 session 0x558b863761e0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x153eeba/0x1614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:04.563270+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ec6c00
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 41213952 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:05.564153+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _renew_subs
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 140 ms_handle_reset con 0x558b85ec6c00 session 0x558b86285c20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 41205760 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b879ce000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:06.564659+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 41156608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b879ce000 session 0x558b85fc3c20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:07.565041+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:08.565470+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:09.565840+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:10.566200+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:11.566975+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:12.567300+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:13.567692+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:14.567899+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:15.568204+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b88139000
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.120633125s of 12.398234367s, submitted: 53
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b88139000 session 0x558b83ca3c20
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:16.568640+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:17.568957+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:18.569396+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:19.569770+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:20.570125+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:21.570515+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:22.571042+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:23.571410+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:24.571771+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:25.572251+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:26.572777+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:27.573220+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:28.573507+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:29.574028+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:30.574366+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:31.575031+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:32.575674+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:33.576143+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:34.576626+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:35.577058+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:36.577460+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:37.577850+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:38.578102+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:39.578480+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:40.578638+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:41.579018+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:42.579501+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:43.579839+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:44.580099+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:45.580426+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:46.580827+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:47.581222+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:48.581789+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:49.582205+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:50.582716+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:51.583173+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:52.583689+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:53.584223+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:54.584612+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:55.585066+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:56.585429+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:57.585792+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:58.586383+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:59.586725+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:00.587041+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:01.587378+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:02.587804+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:03.588191+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:04.588645+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:05.589026+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:06.589396+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:07.589704+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:08.590103+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:09.590621+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:10.591058+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:11.591454+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:12.591903+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:13.592103+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:14.592272+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:15.593010+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:16.593393+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:17.593767+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:18.594107+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:19.594448+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:20.594764+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:21.595151+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:22.595679+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:23.596006+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:24.596188+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:25.596507+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:26.596786+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:27.596971+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 40976384 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:28.597331+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:29.597569+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 41009152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:30.597814+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:31.598027+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf dump' '{prefix=perf dump}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf schema' '{prefix=perf schema}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:32.598264+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:33.598850+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:34.599090+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:35.599313+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:36.599515+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:37.599707+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:38.600089+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:39.600310+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:40.600513+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:41.601252+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:42.601752+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:43.602112+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:44.602295+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:45.602683+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:46.603097+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:47.603276+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:48.603454+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:49.603718+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:50.603897+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:51.604110+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:52.604695+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:53.604987+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:54.605240+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:55.605403+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:56.605611+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:57.605899+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:58.606072+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:59.606344+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:00.606735+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:01.606962+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:02.607226+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:03.607580+0000)
Dec 03 02:45:14 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
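Buried in the OSD chatter, the mgr audit channel logs client.admin dispatching the orchestrator command "orch ls" with export set, i.e. the CLI equivalent of "ceph orch ls --export", which dumps the declared service specifications. The JSON in the audit line is the wire form the CLI builds; a small illustration of reading it back (the reconstruction logic here is illustrative, not a Ceph API):

    import json
    audit_cmd = json.loads('{"prefix": "orch ls", "export": true,'
                           ' "target": ["mon-mgr", ""]}')
    cli = ["ceph"] + audit_cmd["prefix"].split()
    if audit_cmd.get("export"):
        cli.append("--export")
    print(" ".join(cli))   # ceph orch ls --export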
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:04.607782+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:05.608105+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:06.608266+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:07.608480+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:08.608683+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:09.608891+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:10.609241+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:11.609654+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:12.609973+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:13.610315+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:14.610757+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:15.611267+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:16.611828+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:17.612229+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:18.612705+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:19.613253+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:20.613760+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:21.614149+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:22.614417+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:23.614898+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:24.615435+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:25.615875+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:26.616318+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:27.616807+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:28.617325+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:29.617754+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:30.618093+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:31.618478+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:32.618943+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:33.619230+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:34.619663+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:35.620094+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:36.620405+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:37.620803+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:38.621179+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:39.621456+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:40.621816+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:41.622262+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:42.622908+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:43.623135+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:44.623697+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:45.624115+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:46.624499+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:47.624849+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:48.625167+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:49.625628+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:50.625975+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:51.626354+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:52.626764+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:53.627202+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:54.627595+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
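This is the one line in the burst that shows an outbound monitor message: the OSD's mon client sends to mon.compute-0 at a messenger-v2 address. In Ceph entity addresses the v2: prefix and port 3300 identify the msgr2 protocol (legacy msgr1 uses 6789) and the trailing /0 is the connection nonce. A tiny parser for the address format, hand-rolled for illustration (not a Ceph API):

    import re
    addr = "v2:192.168.122.100:3300/0"
    proto, ip, port, nonce = re.fullmatch(
        r"(v[12]):([\d.]+):(\d+)/(\d+)", addr).groups()
    print(proto, ip, port, nonce)   # v2 192.168.122.100 3300 0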
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:55.627939+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:56.628359+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:57.628712+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:58.629312+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:59.629756+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:00.630156+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:01.630629+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:02.631078+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:03.631626+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:04.631976+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:05.632225+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:06.632505+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:07.632994+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:08.633228+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:09.633687+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:10.634064+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:11.634512+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:12.635101+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:13.635516+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:14.636030+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:15.637990+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:16.638390+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:17.638821+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:18.639041+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:19.639594+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:20.640068+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:21.640635+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:22.641039+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:23.641406+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:24.641871+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:25.642271+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:26.642855+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:27.643314+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:28.643809+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:29.644323+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:30.644750+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:31.645042+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:32.645488+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:33.645899+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:34.646259+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:35.646786+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:36.647214+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:37.647784+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:38.648219+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:39.648670+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:40.649067+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:41.649689+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:42.650261+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 ms_handle_reset con 0x558b84a6f800 session 0x558b83e61860
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: handle_auth_request added challenge on 0x558b85ea3400
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:43.650881+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:44.651336+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:45.651769+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:46.652206+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:47.652625+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:48.653033+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:49.653296+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:50.653774+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:51.654124+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:52.654661+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:53.655081+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:54.655684+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:55.656062+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:56.656452+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:57.656837+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:58.657283+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:59.657862+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:00.658327+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:01.658866+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:02.659272+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:03.659758+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:04.660096+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:05.660483+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:06.660768+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:07.661193+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:08.661410+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:09.661780+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:10.662182+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:11.662590+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:12.663057+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:13.663618+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:14.664115+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:15.664455+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:16.664740+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:17.665018+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:18.665336+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:19.665774+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:20.666165+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:21.666655+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:22.667025+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:23.667411+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:24.667821+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:25.668233+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:26.668689+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:27.669036+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:28.669379+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:29.669786+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:30.670154+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:31.670655+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:32.671132+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:33.671503+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:34.671870+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:35.672201+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:36.672654+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:37.673245+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:38.673588+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:39.673960+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:40.674335+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:41.674762+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:42.675252+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:43.675718+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:44.675994+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:45.676377+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:46.676897+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:47.677312+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:48.677798+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:49.678138+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:50.678476+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:51.678950+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:52.679417+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:53.679843+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:54.680261+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:55.680700+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:56.681167+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:57.681686+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:58.682200+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:59.682730+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:00.683228+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:01.683461+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:02.683758+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:03.684087+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:04.684412+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:05.684759+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:06.685105+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:07.685441+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:08.685838+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:09.686149+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:10.686456+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:11.686857+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:12.687272+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:13.687859+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:14.688212+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:15.688690+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:16.689078+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:17.689690+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:18.690063+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:19.690477+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:20.690891+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:21.691248+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:22.691640+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:23.691896+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:24.692218+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:25.692401+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:26.692645+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:27.692981+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:28.693360+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:29.693697+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:30.693945+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:31.694238+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:32.694725+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:33.695152+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:34.695466+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:35.695875+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:36.696312+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:37.696700+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:38.697030+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:39.697401+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:40.698101+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:41.698676+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:42.699115+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:43.699502+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:44.699853+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:45.700199+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:46.700789+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:47.701169+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:48.701661+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:49.702034+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:50.702384+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:51.702722+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:52.703088+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:53.703485+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:54.703955+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:55.704389+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:56.704750+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:57.705055+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:58.705403+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:59.705898+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:00.706326+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:01.706836+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:02.707236+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:03.707790+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:04.708144+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:05.708351+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:06.709394+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:07.709740+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:08.710106+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:09.710466+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:10.710782+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:11.711173+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:12.711655+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:13.712037+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:14.712509+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:15.713056+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:16.713759+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:17.714141+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:18.714641+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:19.714999+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:20.715432+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:21.715782+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:22.716183+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:23.716698+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:24.717133+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:25.717510+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:26.717971+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:27.718376+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:28.718810+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:29.719224+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 40886272 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:30.719669+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:31.719974+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:32.720369+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:33.720779+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:34.721192+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:35.721635+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:36.721973+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:37.722397+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:38.722799+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:39.723177+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:40.723696+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:41.724002+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:42.724393+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:43.724929+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:44.726592+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:45.727077+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:46.727405+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:47.727726+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:48.728099+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:49.728477+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:50.728909+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:51.729287+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:52.729739+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:53.730117+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:54.730599+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:55.730994+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:56.731478+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:57.731877+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:58.732280+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:59.732904+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:00.733301+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:01.733641+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:02.734161+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:03.734678+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:04.735136+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:05.735455+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:06.736430+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:07.736825+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:08.737184+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:09.737792+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:10.738130+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:11.738450+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:12.739070+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:13.739439+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:14.739764+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:15.740039+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:16.740459+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:17.740831+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:18.741209+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:19.741703+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:20.742137+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:21.742647+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:22.743055+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:23.743461+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:24.743838+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:25.744141+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 9641 writes, 36K keys, 9641 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9641 writes, 2604 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 416 writes, 951 keys, 416 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                            Interval WAL: 416 writes, 194 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:26.744627+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:27.744992+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:28.745441+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:29.745913+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:30.746205+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:31.746446+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:32.746927+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:33.747276+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:34.747695+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:35.748079+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:36.748498+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:37.748968+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:38.749378+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:39.749620+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:40.750008+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:41.750408+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:42.750759+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:43.751078+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:44.751360+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:45.751728+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:46.752290+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:47.752745+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:48.753050+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:49.753496+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:50.753805+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:51.754132+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:52.755213+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:53.755463+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:54.757021+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:55.758216+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:56.759728+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:57.760505+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:58.760883+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:59.764227+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:00.764713+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:01.765193+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:02.765762+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:03.766168+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:04.766512+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:05.766920+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:06.767262+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:07.767711+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:08.768081+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:09.768671+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:10.768996+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:11.769337+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:12.769706+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:13.770111+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:14.770646+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:15.771041+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:16.771344+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:17.771905+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:18.772354+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:19.772788+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:20.773211+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:21.773682+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:22.774130+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:23.774370+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:24.774690+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:25.774954+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:26.775505+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 40779776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:27.776059+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 40779776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:28.776463+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 40779776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:29.776806+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:30.777022+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:31.777489+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:32.778186+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:33.778845+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:34.779245+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:35.779725+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:36.780175+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:37.780752+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:38.781133+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:39.781471+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:40.781839+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:41.782176+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:42.782627+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:43.783032+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:44.783404+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:45.783758+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:46.784134+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:47.784497+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:48.784836+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:49.785213+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:50.785465+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 515.138610840s of 515.162719727s, submitted: 14
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:51.785880+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106725376 unmapped: 40747008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:52.786267+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 40706048 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:53.786683+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 40656896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:54.787044+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:55.787431+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:56.787811+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:57.788167+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:58.788522+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:59.788857+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:00.789199+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:01.789696+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:02.790231+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:03.790660+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:04.791078+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:05.791433+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:06.791814+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:07.792036+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:08.792367+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:09.792761+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:10.793158+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:11.793406+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:12.793956+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:13.794391+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:14.794894+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:15.795352+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:16.795765+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:17.796155+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:18.796517+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:19.796965+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:20.797361+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:21.797769+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:22.798188+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:23.798459+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:24.799387+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:25.799858+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:26.800328+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:27.800735+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:28.801123+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:29.801579+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:30.802025+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:31.802464+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:32.802996+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:33.803322+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:34.803799+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:35.804084+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:36.804464+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:37.804864+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:38.805310+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106840064 unmapped: 40632320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:39.805770+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
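The `mapped` counter is not quite flat: across this window it steps 106823680, 106831872, 106840064, 106848256, i.e. +8192 bytes each time, consistent with tcmalloc growing the mapped heap one 8 KiB page at a time (tcmalloc's default page size; my reading, not something the log states). A sketch of the delta check:

```python
# Consecutive "mapped:" values from the tune_memory lines above.
mapped = [106823680, 106831872, 106840064, 106848256]
deltas = [b - a for a, b in zip(mapped, mapped[1:])]
print(deltas)                          # [8192, 8192, 8192]
print({d % 8192 for d in deltas})      # {0}: growth in 8 KiB tcmalloc pages
```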
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:40.806177+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:41.806636+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:42.807018+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:43.807462+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:44.807711+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:45.808085+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:46.808461+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:47.808828+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:48.809211+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:49.809731+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:50.810182+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:51.810828+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:52.811292+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:53.811773+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:54.812212+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:55.812651+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:56.813095+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:57.813439+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:58.813646+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:59.814065+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:00.814456+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:01.814863+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:02.815276+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:03.815573+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:04.816051+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:05.816404+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:06.816794+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 40607744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
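At this volume the stream is dominated by the same handful of message types, so a histogram is the fastest way to see the cadence when triaging a window like this. A sketch; the classification keys are substrings of the lines above, and the input path is hypothetical:

```python
from collections import Counter

# Substring keys taken from the ceph-osd messages in this journal window.
KEYS = ("tune_memory", "heartbeat osd_stat", "monclient: tick",
        "_check_auth_tickets", "_check_auth_rotating",
        "commit_cache_size", "_resize_shards")

def classify(line: str) -> str:
    for key in KEYS:
        if key in line:
            return key
    return "other"

with open("ceph-osd-window.log") as fh:   # hypothetical export of this window
    counts = Counter(classify(line) for line in fh)
print(counts.most_common())
```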
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:07.817228+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 40607744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:08.817720+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 40607744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:09.818111+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:10.818351+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:11.818704+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:12.818973+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:13.819315+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:14.819597+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:15.819952+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:16.820290+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:17.820726+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:18.821316+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:19.821899+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:20.822258+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:21.822664+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:22.823066+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 40591360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:23.823273+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 40591360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:24.823719+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:25.824093+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:26.824469+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:27.824881+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:28.825228+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:29.825791+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:30.826318+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:31.826849+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:32.827353+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:33.827795+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:34.828006+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:35.828448+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:36.828872+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:37.829249+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:38.829711+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:39.830152+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:40.830633+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:41.831031+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:42.831522+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:43.832128+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:44.832507+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 40566784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:45.832971+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 40566784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:46.833353+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:47.833709+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:48.834086+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:49.834501+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:50.834871+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:51.835243+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:52.835767+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:53.836115+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:54.836409+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:55.836826+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:56.837300+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:57.837833+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:58.838233+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:59.838700+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:00.839111+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:01.839507+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:02.840081+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:03.840517+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:04.841787+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:05.842885+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:06.843473+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:07.844028+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:08.844387+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 40542208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:09.844752+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 40542208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:10.845119+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:11.845504+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:12.846044+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:13.846390+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:14.846750+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:15.847167+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:16.847647+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:17.848013+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:18.848319+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:19.848821+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:20.849138+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:21.849595+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:22.849921+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:23.850246+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:24.850808+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:25.851196+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:26.851605+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:27.851925+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:28.852301+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:29.852509+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:30.852923+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:31.853269+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:32.853807+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:33.854242+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106954752 unmapped: 40517632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:34.854772+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:35.855219+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:36.855458+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:37.855834+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:38.856045+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:39.856307+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:40.856765+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:41.857069+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:42.857421+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:43.857771+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:44.858153+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:45.858733+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:46.859142+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:47.859741+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:48.860131+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:49.860686+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:50.861100+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:51.861406+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:52.861803+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:53.862176+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 40493056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:54.862643+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 40493056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:55.863014+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 40493056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:56.863468+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:57.863761+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:58.864122+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:59.864627+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:00.865072+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:01.865730+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:02.866279+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:03.866827+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:04.867198+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:05.867655+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:06.868047+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:07.868322+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:08.868808+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:09.869189+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:10.869676+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:11.870055+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:12.870686+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:13.871058+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:14.871435+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:15.871786+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:16.872133+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:17.872442+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:18.872846+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:19.873186+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:20.873692+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:21.874042+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:22.874321+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:23.874806+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:24.875204+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:25.875486+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:26.875897+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:27.876245+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:28.876627+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:29.876904+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:30.877330+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:31.877769+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:32.878224+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:33.878593+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:34.878986+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:35.879245+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:36.879414+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:37.879634+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:38.879826+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:39.880044+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 40460288 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:40.880244+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 40460288 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:41.880434+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 40222720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:42.880711+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec 03 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 40656896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: tick
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_tickets
Dec 03 02:45:14 compute-0 ceph-osd[208731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:43.880894+0000)
Dec 03 02:45:14 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:45:14 compute-0 ceph-mon[192821]: from='client.15929 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:14 compute-0 ceph-mon[192821]: from='client.15933 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/621837691' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 03 02:45:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2439200539' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 03 02:45:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 03 02:45:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672553893' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:45:14 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:45:14 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15945 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:14 compute-0 nova_compute[351485]: 2025-12-03 02:45:14.758 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 03 02:45:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468121917' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:45:14 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15949 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mon[192821]: from='client.15937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mon[192821]: pgmap v2734: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:15 compute-0 ceph-mon[192821]: from='client.15941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3672553893' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1468121917' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15953 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 03 02:45:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471470328' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15955 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 03 02:45:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/900289867' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 03 02:45:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:16 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15959 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:16 compute-0 ceph-mon[192821]: from='client.15945 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:16 compute-0 ceph-mon[192821]: from='client.15949 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:16 compute-0 ceph-mon[192821]: from='client.15953 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3471470328' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 03 02:45:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/900289867' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 03 02:45:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 03 02:45:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529789203' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 03 02:45:17 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15967 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:17 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:45:17.081+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 03 02:45:17 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 03 02:45:17 compute-0 ceph-mon[192821]: from='client.15955 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:17 compute-0 ceph-mon[192821]: pgmap v2735: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:17 compute-0 ceph-mon[192821]: from='client.15959 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2529789203' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 03 02:45:17 compute-0 nova_compute[351485]: 2025-12-03 02:45:17.383 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 03 02:45:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671506822' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 03 02:45:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 03 02:45:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877054004' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 03 02:45:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 03 02:45:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744788624' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 03 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850072463' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 03 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1084016343' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: from='client.15967 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2671506822' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1877054004' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3744788624' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2850072463' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1084016343' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 03 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236038443' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 03 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507683952' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 03 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 03 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26662517' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:07.072950+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:08.073333+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:09.073824+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:10.074232+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:11.074717+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:12.075157+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:13.075702+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:14.076173+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:15.076766+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:16.077012+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:17.077843+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:18.078274+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:19.078725+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:20.078943+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:21.079385+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:22.079737+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:23.080185+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:24.080694+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:25.081042+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:26.081243+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:27.081699+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:28.082119+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:29.082417+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75bd0e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.323188782s of 99.935211182s, submitted: 90
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a529c960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3a800 session 0x55f0a57e3860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:30.082854+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112738304 unmapped: 32669696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:31.083190+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9d79000/0x0/0x4ffc00000, data 0x182af05/0x18f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75c1680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:32.083517+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:33.083832+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:34.084242+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:35.084720+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:36.085036+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:37.085356+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:38.085752+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:39.086080+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:40.086380+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:41.086742+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:42.087160+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:43.087772+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:44.088249+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:45.088775+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:46.089135+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:47.089657+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:48.090005+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:49.090372+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.926574707s of 19.366115570s, submitted: 65
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3dc00 session 0x55f0a7f5bc20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a57ab000 session 0x55f0a56af2c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:50.090655+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 38379520 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a553f680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:51.090885+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:52.091223+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:53.091464+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:54.091941+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:55.092215+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:56.092632+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:57.092866+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:58.093246+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:59.093684+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:00.094027+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:01.094487+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:02.094907+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:03.095303+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:04.095765+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:05.096104+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:06.096492+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:07.096861+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:08.097195+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:09.097916+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:10.098265+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:11.099088+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:12.099631+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:13.100025+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:14.100494+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:15.100867+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:16.101316+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:17.102087+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:18.102476+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:19.102840+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:20.103238+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:21.103671+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:22.104045+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:23.104467+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:24.105037+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:25.105684+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:26.106058+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:27.106445+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:28.106894+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:29.107225+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:30.107696+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:31.108058+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:32.108294+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:33.108664+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:34.109060+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:35.109271+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:36.109666+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:37.110042+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:38.110407+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:39.110781+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:40.111145+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:41.111519+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:42.111985+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:43.112668+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:44.113128+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:45.113585+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:46.113960+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:47.114354+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:48.114773+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:49.115218+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:50.115595+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:51.116086+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:52.116508+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:53.116883+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:54.117347+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:55.117725+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:56.118064+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:57.118459+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:58.118784+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:59.119189+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:00.119728+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:01.120030+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:02.120374+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:03.120810+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:04.121342+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:05.121758+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:06.122145+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:07.122661+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:08.123058+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:09.123501+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:10.123997+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:11.124413+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:12.124973+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:13.125411+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:14.125839+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6b400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:15.126166+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 85.550582886s of 85.862503052s, submitted: 51
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:16.126616+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 134 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbbc20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a57aa000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:17.126992+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107044864 unmapped: 38363136 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:18.127431+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142798 data_alloc: 218103808 data_used: 7061504
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a4d42b40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:19.127923+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:20.129492+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:21.129946+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:22.130375+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:23.130750+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:24.131038+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:25.131375+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:26.131790+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:27.132268+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:28.132773+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:29.133160+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:30.133642+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:31.134141+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:32.134485+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:33.134876+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:34.135274+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:35.135797+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:36.136168+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:37.136503+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:38.136815+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:39.137179+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:40.137678+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:41.137958+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:42.138141+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:43.138507+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:44.139125+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:45.139383+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:46.139818+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:47.140035+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:48.140351+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:49.140768+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:50.141196+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:51.142746+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:52.143230+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:53.143504+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:54.143971+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:55.144417+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:56.144742+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:57.145132+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:58.145507+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:59.145972+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:00.146393+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:01.146805+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:02.147212+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:03.147669+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:04.148108+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:05.148285+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:06.148762+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbdc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a7ecc1e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3f800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:07.149103+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a7ecc3c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 31768576 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:08.149376+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167396 data_alloc: 218103808 data_used: 13885440
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.427654266s of 53.600463867s, submitted: 15
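
The _kv_sync_thread utilization line is BlueStore's own summary of how busy its RocksDB commit thread has been: over the last 53.6 s it was idle for 53.43 s while submitting 15 transactions. A quick check of what that amounts to:

    idle, window, submitted = 53.427654266, 53.600463867, 15
    print(f"busy {1 - idle/window:.2%}, ~{submitted/window:.2f} commits/s")
    # -> busy 0.32%, ~0.28 commits/s
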
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 27410432 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
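
This is the first heartbeat in this stretch whose op histogram is non-empty: the trailing 1 in "op hist [0,0,0,0,0,0,1]" records a single recent op, consistent with the write activity that starts showing up in the store_statfs values of the following heartbeats. The bucket boundaries are not recoverable from the log alone, so the sketch below only totals ops per heartbeat from a saved journal dump:

    import re

    HIST = re.compile(r"op hist \[([0-9,]*)\]")

    def ops_per_heartbeat(lines):
        # Sum each heartbeat's op histogram; an empty [] counts as zero.
        return [sum(map(int, m.group(1).split(","))) if m.group(1) else 0
                for line in lines if (m := HIST.search(line))]
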
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a726d860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:09.149751+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3f800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a726d680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a57aa000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a553e000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbdc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a75b83c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 31604736 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6b400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbba40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:10.149983+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a64152c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3f800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a80205a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a57aa000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a756c780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbdc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a54dcf00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6a400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6a400 session 0x55f0a56aed20
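
Bursts of handle_auth_request / ms_handle_reset pairs like the run above are incoming connections re-authenticating: monclient adds a cephx challenge for each new connection, and ms_handle_reset records the messenger dropping the old session. Because con and session are heap pointers, the same con address (0x55f0a4b3f800, 0x55f0a57aa000, ...) recurs across different sessions as allocations are reused. A minimal sketch, again assuming a saved journal dump, that tallies challenges and resets per connection pointer to gauge the churn:

    import re
    from collections import Counter

    EVT = re.compile(r"(added challenge on|ms_handle_reset con) (0x[0-9a-f]+)")

    def auth_churn(lines):
        # Count cephx challenges and session resets per connection pointer.
        counts = Counter()
        for line in lines:
            if (m := EVT.search(line)):
                kind = "challenge" if "challenge" in m.group(1) else "reset"
                counts[m.group(2), kind] += 1
        return counts
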
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:11.150439+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:12.150859+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:13.151302+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303610 data_alloc: 218103808 data_used: 13885440
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:14.151754+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a54dd4a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a8b46780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 31432704 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858e800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a8df6b40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a4d7d680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:15.152189+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8592800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8592800 session 0x55f0a5578d20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a4b37c20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a84fa780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858e800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a7eb74a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 31105024 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a52a65a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:16.152673+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a885b2c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114327552 unmapped: 31080448 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:17.152935+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114343936 unmapped: 31064064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:18.153315+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400757 data_alloc: 218103808 data_used: 13889536
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 31055872 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:19.153674+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290942192s of 10.699803352s, submitted: 40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e71400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a58825a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e71400 session 0x55f0a810bc20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:20.153968+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858e800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:21.154263+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:22.154689+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:23.154979+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3c000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a81732c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467170 data_alloc: 218103808 data_used: 13893632
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 31907840 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6cc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b41800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:24.155226+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113508352 unmapped: 31899648 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:25.155656+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 31236096 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:26.155912+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:27.156378+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:28.156910+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516450 data_alloc: 234881024 data_used: 20791296
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:29.157443+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7450400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7450400 session 0x55f0a7eb72c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 28491776 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7ec3c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.427840233s of 10.600404739s, submitted: 22
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:30.157876+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 23896064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:31.158402+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 125722624 unmapped: 19685376 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:32.158747+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 14385152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:33.158945+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680281 data_alloc: 251658240 data_used: 40378368
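
By this resize pass the picture has changed: data_used has grown to ~40 MiB, the autotuner has enlarged data_alloc to 251658240 B (240 MiB) at the expense of the kv and meta shards (kv_alloc dropped from 1207959552 to 1191182336, meta_alloc from 1140850688 to 1124073472), and the high-priority ratio recomputes to 0.056338. The heartbeats tell the same story, with data stored rising from 0x153354e at the start of this window to 0x3b8d56e just above:

    MiB = 2**20
    stored_first, stored_now = 0x153354e, 0x3b8d56e
    data_alloc, data_used = 251658240, 40378368
    print(f"data stored {stored_first/MiB:.1f} -> {stored_now/MiB:.1f} MiB")
    # -> data stored 21.2 -> 59.6 MiB
    print(f"data cache {data_used/MiB:.1f} MiB used of {data_alloc/MiB:.0f} MiB")
    # -> data cache 38.5 MiB used of 240 MiB
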
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:34.159397+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:35.159747+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:36.160072+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:37.160303+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a96b8f00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a7eced20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3c000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651e000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 12574720 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:38.160650+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a6afd680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1544478 data_alloc: 234881024 data_used: 33796096
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 15441920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:39.160896+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133218304 unmapped: 12189696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:40.161316+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83f5000/0x0/0x4ffc00000, data 0x31ad55e/0x3279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:41.161510+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:42.161882+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:43.162235+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a57e6780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.212381363s of 13.371566772s, submitted: 31
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7ec3c00 session 0x55f0a8571e00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8594c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596346 data_alloc: 251658240 data_used: 41136128
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 14958592 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:44.162580+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594c00 session 0x55f0a57e23c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:45.185992+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:46.186664+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:47.187075+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:48.187500+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:49.187895+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:50.188110+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:51.188308+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:52.188674+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:53.189037+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:54.189447+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:55.189792+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:56.190105+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:57.190279+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.805484772s of 13.987822533s, submitted: 30
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:58.190469+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533936 data_alloc: 234881024 data_used: 33460224
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85de000/0x0/0x4ffc00000, data 0x2fc455e/0x3090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:59.190637+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:00.190943+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:01.191191+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:02.192180+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:03.192603+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585670 data_alloc: 234881024 data_used: 34004992
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:04.193013+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134864896 unmapped: 10543104 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8073000/0x0/0x4ffc00000, data 0x352955e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858d800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858d800 session 0x55f0a60fc780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:05.193284+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135520256 unmapped: 20389888 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:06.193701+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 18022400 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f780e000/0x0/0x4ffc00000, data 0x3d8555e/0x3e51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:07.194125+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:08.194501+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667382 data_alloc: 251658240 data_used: 34631680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:09.194871+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a4b37e00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:10.195235+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8597000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8597000 session 0x55f0a810bc20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.885555267s of 13.601085663s, submitted: 132
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:11.195680+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 18956288 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f781b000/0x0/0x4ffc00000, data 0x3d8755e/0x3e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8594000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594000 session 0x55f0a810b0e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:12.196102+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a810a000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858d800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:13.196499+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665953 data_alloc: 251658240 data_used: 34635776
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:14.197099+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:15.197626+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f6000/0x0/0x4ffc00000, data 0x3dab56e/0x3e78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:16.197966+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:17.198334+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:18.198856+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665621 data_alloc: 251658240 data_used: 34635776
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:19.199254+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:20.199743+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:21.200132+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:22.200468+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 18833408 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:23.200756+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 17981440 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695221 data_alloc: 251658240 data_used: 38678528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:24.200992+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 16302080 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:25.201208+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:26.201475+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:27.201677+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:28.201894+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:29.202170+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:30.202486+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:31.202781+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:32.203098+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:33.203353+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:34.203713+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:35.204058+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:36.204446+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:37.204856+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:38.205083+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717461 data_alloc: 251658240 data_used: 41824256
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:39.205481+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:40.205738+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:41.206161+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.360565186s of 30.452342987s, submitted: 11
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:42.206413+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:43.206808+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:44.207218+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717637 data_alloc: 251658240 data_used: 41824256
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:45.207730+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 15360000 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:46.207961+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a8df70e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a81721e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 15335424 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8591400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:47.208172+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591400 session 0x55f0a75ba780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6cc00 session 0x55f0a81734a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b41800 session 0x55f0a6414780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:48.211243+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:49.211737+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1549753 data_alloc: 234881024 data_used: 34029568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:50.212072+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c9000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:51.212606+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:52.212997+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 19603456 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:53.213498+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136134656 unmapped: 19775488 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:54.214354+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550873 data_alloc: 234881024 data_used: 34156544
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:55.214690+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:56.218426+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.911537170s of 15.160791397s, submitted: 53
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 20701184 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:57.218643+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 18833408 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:58.219047+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:59.219486+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1623403 data_alloc: 234881024 data_used: 34353152
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:00.219796+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e77000/0x0/0x4ffc00000, data 0x372956e/0x37f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:01.220165+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:02.220629+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:03.220958+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e77000/0x0/0x4ffc00000, data 0x372956e/0x37f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:04.221301+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621595 data_alloc: 234881024 data_used: 34357248
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:05.221655+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:06.222026+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e55000/0x0/0x4ffc00000, data 0x374c56e/0x3819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:07.222279+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:08.222609+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:09.223082+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621595 data_alloc: 234881024 data_used: 34357248
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:10.223275+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.919089317s of 14.367115021s, submitted: 74
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:11.223435+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:12.223666+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:13.223842+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:14.224089+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621715 data_alloc: 234881024 data_used: 34357248
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:15.224318+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:16.224871+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:17.225299+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:18.225633+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:19.226050+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621715 data_alloc: 234881024 data_used: 34357248
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:20.226702+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:21.227038+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:22.227363+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.317792892s of 11.335372925s, submitted: 3
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a529b2c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a56acd20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a7eb61e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 20971520 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a632b0e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858d000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858d000 session 0x55f0a7dee960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:23.227768+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20987904 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:24.228001+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b7000/0x0/0x4ffc00000, data 0x3ce95d0/0x3db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1668235 data_alloc: 234881024 data_used: 34357248
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b7c20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a75b74a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a75b6960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b7000/0x0/0x4ffc00000, data 0x3ce95d0/0x3db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20987904 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a75b7860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab1800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:25.228377+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a553fa40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab1800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a7ecd4a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7ecc1e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a7ecc3c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a52a7e00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b6000/0x0/0x4ffc00000, data 0x3ce95e0/0x3db8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:26.228823+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135340032 unmapped: 28442624 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:27.229436+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab0800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a57e32c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab0800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a810ba40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 28196864 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:28.229944+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651f000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651f000 session 0x55f0a75bb680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 28164096 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a810a3c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:29.230291+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1772936 data_alloc: 251658240 data_used: 35344384
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135921664 unmapped: 27860992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4d40c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a56d1c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:30.230659+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:31.231008+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x485b665/0x492c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38400 session 0x55f0a58901e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:32.231422+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a60110e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:33.231641+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38400 session 0x55f0a4d42780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765680313s of 11.383768082s, submitted: 98
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7eccd20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 27533312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:34.231981+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777362 data_alloc: 251658240 data_used: 35352576
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a651f000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6ab0800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:35.232197+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:36.232701+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:37.233080+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:38.233443+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:39.233799+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777682 data_alloc: 251658240 data_used: 35360768
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 26886144 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:40.234179+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137576448 unmapped: 26206208 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:41.234626+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:42.234849+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:43.235157+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:44.235602+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1818642 data_alloc: 251658240 data_used: 41193472
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:45.235898+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:46.236166+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:47.236626+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 23789568 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:48.236868+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 18759680 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:49.237083+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651f000 session 0x55f0a6acf4a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a75c21e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.797070503s of 15.820782661s, submitted: 2
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1866786 data_alloc: 251658240 data_used: 48082944
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593c00 session 0x55f0a7ece5a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:50.237340+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:51.237832+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f748f000/0x0/0x4ffc00000, data 0x3d0e5f3/0x3ddd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f748f000/0x0/0x4ffc00000, data 0x3d0e5f3/0x3ddd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:52.238177+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:53.238638+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:54.239142+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1735401 data_alloc: 251658240 data_used: 41189376
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:55.239690+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:56.240189+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a7eb7a40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a6b8fa40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e73000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a6b8f0e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:57.240515+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8aa1000/0x0/0x4ffc00000, data 0x2aff5e3/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:58.241037+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:59.241345+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:00.241577+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:01.241979+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:02.242428+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:03.242844+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:04.243325+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:05.243787+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:06.244329+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:07.244767+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:08.245179+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:09.245406+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:10.245813+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:11.246082+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:12.246396+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.834514618s of 23.068130493s, submitted: 47
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:13.246987+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134021120 unmapped: 29761536 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:14.247304+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 29679616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8139000/0x0/0x4ffc00000, data 0x34675e3/0x3535000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1571961 data_alloc: 234881024 data_used: 27537408
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:15.247576+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 29130752 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:16.247797+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:17.248152+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:18.248612+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:19.248876+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1578329 data_alloc: 234881024 data_used: 27856896
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f812f000/0x0/0x4ffc00000, data 0x34715e3/0x353f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:20.249103+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a80210e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a885ad20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f812f000/0x0/0x4ffc00000, data 0x34715e3/0x353f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:21.249394+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 34136064 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7c74800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a529d4a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:22.249629+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c71000/0x0/0x4ffc00000, data 0x251e56e/0x25eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129687552 unmapped: 34095104 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a5f18400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.617131233s of 10.342635155s, submitted: 143
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:23.249837+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a75bb680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a7c74800 session 0x55f0a56ad860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 34111488 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858fc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a858fc00 session 0x55f0a553fa40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b7c20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:24.250448+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 34603008 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b74a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b3cc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a7ece780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a5f18400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a6b8e960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7c74800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a7c74800 session 0x55f0a75c3860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858fc00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484659 data_alloc: 234881024 data_used: 21315584
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a858fc00 session 0x55f0a562e5a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2bf40fb/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:25.250800+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 34578432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:26.251133+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2bf40fb/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 34643968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a810a5a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a858d800 session 0x55f0a756c780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a529c000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:27.251402+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a75b8b40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:28.251825+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:29.252029+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f9584000/0x0/0x4ffc00000, data 0x1c0acac/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307635 data_alloc: 218103808 data_used: 13893632
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8595c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a8595c00 session 0x55f0a7eb7e00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:30.252393+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7cbb860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f9584000/0x0/0x4ffc00000, data 0x1c0acac/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:31.252708+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858ec00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a858ec00 session 0x55f0a7eb61e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a4b37e00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:32.253125+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c7000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:33.253370+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6955400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6955400 session 0x55f0a80d1e00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a785b000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a785b000 session 0x55f0a6b8e780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a7eb74a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:34.253631+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314319 data_alloc: 218103808 data_used: 14106624
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:35.254659+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 40902656 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.097565651s of 12.628091812s, submitted: 91
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b92c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6955400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6955400 session 0x55f0a5897860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f955a000/0x0/0x4ffc00000, data 0x1c34cbc/0x1d04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:36.255068+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123027456 unmapped: 40755200 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6f000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a7e6f000 session 0x55f0a726c960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:37.255353+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbf000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 39845888 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6bbf000 session 0x55f0a8b463c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:38.255670+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:39.256099+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b38800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a6aced20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436171 data_alloc: 234881024 data_used: 20680704
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:40.256479+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b39c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a6afcb40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:41.256808+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a858c000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a858c000 session 0x55f0a75ba960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:42.257366+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a6acf680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7ec3400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:43.257656+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:44.258048+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437451 data_alloc: 234881024 data_used: 20795392
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:45.258298+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 39149568 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:46.258607+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 38887424 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:47.258844+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:48.259090+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:49.259365+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:50.259718+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:51.260014+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:52.260226+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:53.260579+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:54.260963+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:55.261240+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:56.261827+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:57.262135+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:58.262629+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:59.264316+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:00.264658+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 35725312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:01.265026+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 35725312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:02.265321+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:03.265617+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:04.265898+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:05.266237+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:06.266765+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:07.267500+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:08.267893+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.258148193s of 33.476356506s, submitted: 37
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:09.268377+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 138297344 unmapped: 25485312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608251 data_alloc: 234881024 data_used: 29712384
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:10.268765+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7e73000/0x0/0x4ffc00000, data 0x330c71f/0x33dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 138543104 unmapped: 25239552 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:11.268995+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 26656768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:12.269303+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:13.269790+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:14.270577+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621193 data_alloc: 234881024 data_used: 29835264
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:15.270935+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:16.271620+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7dc7000/0x0/0x4ffc00000, data 0x33be71f/0x348f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:17.272145+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:18.272673+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2973 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2088 writes, 7681 keys, 2088 commit groups, 1.0 writes per commit group, ingest: 7.32 MB, 0.01 MB/s
                                            Interval WAL: 2088 writes, 866 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 26607616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.264138222s of 10.018401146s, submitted: 173
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:19.273075+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 25812992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7bc4000/0x0/0x4ffc00000, data 0x35c371f/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:20.273642+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633635 data_alloc: 234881024 data_used: 30130176
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 25812992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:21.274034+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:22.274506+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:23.274925+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b3a000/0x0/0x4ffc00000, data 0x364a71f/0x371b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:24.275178+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:25.275433+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1641113 data_alloc: 234881024 data_used: 29941760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7c73c00 session 0x55f0a80d0780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8590800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: mgrc ms_handle_reset ms_handle_reset con 0x55f0a651e800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec 03 02:45:19 compute-0 ceph-osd[207705]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: get_auth_request con 0x55f0a858fc00 auth_method 0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: mgrc handle_mgr_configure stats_period=5
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137158656 unmapped: 26624000 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:26.275692+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4d41c00 session 0x55f0a56ac000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8f73400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137158656 unmapped: 26624000 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a6ab1400 session 0x55f0a57e7860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8f70400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:27.276054+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:28.276273+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:29.276664+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b33000/0x0/0x4ffc00000, data 0x365a71f/0x372b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.565891266s of 10.747445107s, submitted: 33
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:30.276885+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636453 data_alloc: 234881024 data_used: 29941760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:31.277103+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:32.277348+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x365d71f/0x372e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:33.277572+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:34.277952+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:35.278256+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636453 data_alloc: 234881024 data_used: 29941760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:36.278580+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x365d71f/0x372e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:37.278763+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:38.278968+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:39.279868+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.241274834s of 10.265392303s, submitted: 5
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:40.280722+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636785 data_alloc: 234881024 data_used: 29941760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b2d000/0x0/0x4ffc00000, data 0x366071f/0x3731000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:41.282356+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:42.282933+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b2d000/0x0/0x4ffc00000, data 0x366071f/0x3731000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:43.283440+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:44.284180+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:45.284514+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638097 data_alloc: 234881024 data_used: 29954048
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 26394624 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:46.284899+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a7eb6d20
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8593400 session 0x55f0a6afda40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 26427392 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a9f58800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:47.287216+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a9f58800 session 0x55f0a6afd860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:48.287471+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:49.287790+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:50.288114+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409806 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:51.288368+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852160454s of 12.121880531s, submitted: 48
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:52.288793+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:53.289192+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:54.289832+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:55.290283+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:56.290805+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:57.291130+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:58.291631+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:59.291992+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:00.292367+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:01.292823+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:02.293228+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:03.293641+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:04.294111+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:05.294462+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:06.294639+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:07.294794+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:08.294959+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:09.295349+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:10.295755+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:11.295913+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:12.296277+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:13.296652+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:14.297047+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:15.297394+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:16.297798+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:17.298295+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:18.298799+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:19.299142+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:20.299617+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 29884416 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:21.299995+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 29884416 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:22.300449+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.712953568s of 30.721988678s, submitted: 1
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:23.300768+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:24.301111+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:25.301392+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:26.301735+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:27.302102+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:28.302296+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:29.302692+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:30.303030+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:31.303288+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:32.303692+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:33.303919+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:34.304287+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:35.304506+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:36.304890+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:37.305239+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:38.305670+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:39.305980+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:40.306345+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:41.306671+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:42.307076+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b40800 session 0x55f0a8544b40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e71000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:43.307396+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:44.307698+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:45.307954+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:46.308403+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:47.309422+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 29818880 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:48.309677+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 29818880 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.298162460s of 26.320926666s, submitted: 8
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:49.310038+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 29794304 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:50.310645+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 29777920 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:51.311236+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 29753344 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:52.311586+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 29679616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:53.311830+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 29622272 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:54.312243+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:55.312734+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:56.313097+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:57.313304+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:58.313503+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:59.313669+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:00.313864+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:01.314078+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:02.314632+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:03.315166+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:04.315596+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:05.316185+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:06.316629+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:07.317045+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:08.318048+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:09.318385+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:10.318832+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:11.319211+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:12.319732+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:13.320147+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:14.320591+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:15.321010+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:16.321427+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:17.321739+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:18.322171+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:19.322617+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:20.322923+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:21.323199+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:22.323704+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:23.324086+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:24.324612+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:25.324952+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:26.325339+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:27.325825+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:28.326187+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:29.326431+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:30.326820+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:31.327156+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:32.327616+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:33.328253+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:34.329725+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:35.330163+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:36.330521+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:37.331273+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:38.331649+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:39.331985+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:40.332303+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:41.332710+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:42.333087+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:43.333867+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8596000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8596000 session 0x55f0a7ecd680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a80ed400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a4b37680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c7000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a6415680
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a80ed400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135249920 unmapped: 28532736 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a529c5a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8593400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.039604187s of 54.708065033s, submitted: 110
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8593400 session 0x55f0a96b8b40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:44.334223+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a8596000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8596000 session 0x55f0a7def0e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a9f58800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a9f58800 session 0x55f0a5529860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a60c7000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a810b4a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a80ed400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a8172780
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:45.334446+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502243 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:46.334753+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:47.335077+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:48.335372+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 31744000 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:49.335752+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8740000/0x0/0x4ffc00000, data 0x2a4c781/0x2b1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 31744000 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:50.336335+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 31735808 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:51.337326+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502243 data_alloc: 234881024 data_used: 21749760
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 31735808 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:52.338026+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a5768c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a5768c00 session 0x55f0a4b37860
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:53.338327+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a694a000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e72800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:54.338629+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:55.338831+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 32743424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:56.339115+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1527149 data_alloc: 234881024 data_used: 25067520
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 32743424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:57.339362+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134979584 unmapped: 32481280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:58.342191+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:59.342380+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:00.342794+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:01.343038+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:02.343226+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:03.343495+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:04.344012+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:05.344387+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:06.344762+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:07.345077+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:08.345479+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:09.345804+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:10.347223+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:11.347494+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:12.348190+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:13.348638+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:14.348899+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:15.349269+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:16.349703+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:17.350054+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:18.350402+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:19.350734+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:20.350980+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:21.351216+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:22.351421+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:23.351666+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:24.352074+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:25.352258+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:26.352601+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:27.352918+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:28.353234+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.706817627s of 44.924095154s, submitted: 44
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142917632 unmapped: 24543232 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:29.354948+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143196160 unmapped: 24264704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:30.355387+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 141459456 unmapped: 26001408 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:31.355831+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678761 data_alloc: 234881024 data_used: 31834112
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7d000/0x0/0x4ffc00000, data 0x3606781/0x36d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:32.356214+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:33.356631+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:34.356985+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7d000/0x0/0x4ffc00000, data 0x3606781/0x36d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:35.357413+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:36.357746+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678761 data_alloc: 234881024 data_used: 31834112
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:37.358113+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:38.358521+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:39.359913+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:40.360313+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:41.360755+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673013 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:42.361165+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:43.361446+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:44.361881+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:45.362232+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:46.362640+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673013 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:47.363024+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:48.363413+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.688020706s of 20.183700562s, submitted: 135
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:49.363823+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:50.364217+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:51.364737+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:52.364949+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:53.365300+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:54.365567+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:55.365902+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:56.366267+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:57.366590+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:58.366972+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:59.367341+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:00.368231+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:01.369706+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:02.371283+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:03.372883+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:04.373695+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:05.373898+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:06.374101+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:07.374453+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:08.374814+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:09.375056+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:10.375279+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:11.375691+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:12.375932+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:13.376720+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:14.379966+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:15.381120+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:16.382517+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:17.383917+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:18.385061+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:19.386725+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:20.387856+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:21.389225+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:22.390865+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:23.392140+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:24.393119+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:25.394115+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:26.395516+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:27.397339+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:28.398657+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:29.400230+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:30.401905+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:31.403573+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143024128 unmapped: 24436736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:32.405231+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.763500214s of 43.778255463s, submitted: 2
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:33.406518+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:34.408437+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:35.410195+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:36.411817+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:37.413452+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:38.415195+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:39.416521+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:40.418158+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:41.419755+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:42.420964+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:43.422411+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:44.424139+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:45.425823+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:46.427521+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:47.429365+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:48.431089+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:49.432729+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:50.434469+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:51.436212+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:52.437781+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:53.439472+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:54.441314+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:55.442946+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:56.444111+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:57.445687+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:58.446978+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:59.448423+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:00.449449+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:01.450215+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:02.450740+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:03.451095+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:04.451788+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:05.452309+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:06.452754+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:07.453207+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:08.454487+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:09.456284+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:10.458018+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:11.459118+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:12.460092+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:13.461753+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:14.463319+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:15.465226+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:16.466927+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:17.467946+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:18.469066+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:19.470384+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:20.471294+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:21.472996+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:22.474797+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:23.475447+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:24.475783+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:25.476157+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:26.476685+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.550392151s of 54.573677063s, submitted: 5
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:27.477455+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:28.478694+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:29.479059+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:30.479398+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:31.479784+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673149 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:32.480024+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:33.480336+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:34.480787+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:35.481067+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:36.481471+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673149 data_alloc: 234881024 data_used: 31838208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:37.481821+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:38.482085+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:39.482486+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:40.482877+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:41.483294+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:42.483745+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673309 data_alloc: 234881024 data_used: 31842304
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.851950645s of 15.870507240s, submitted: 2
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:43.484069+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143343616 unmapped: 24117248 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:44.484455+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:45.484885+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:46.485263+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:47.485644+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:48.486024+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:49.486408+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:50.486828+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:51.487130+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:52.487759+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:53.487999+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:54.488216+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:55.488656+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:56.488858+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:57.489142+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.315594673s of 14.335906982s, submitted: 2
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:58.489378+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:59.489613+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:00.489945+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:01.490374+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:02.490737+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:03.491041+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:04.491452+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:05.491797+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:06.492153+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:07.492470+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:08.492882+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:09.493324+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:10.493769+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:11.494161+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:12.494690+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:13.495104+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:14.495500+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:15.495840+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:16.496110+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:17.496421+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:18.496758+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:19.497045+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:20.497379+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:21.497692+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:22.498005+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:23.498333+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:24.498770+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:25.499046+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:26.499411+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:27.499771+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:28.500045+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:29.500614+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:30.500810+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:31.501033+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:32.501328+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:33.501751+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:34.502120+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:35.502480+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:36.502853+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:37.503056+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:38.503817+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:39.504417+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:40.504860+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:41.505212+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:42.508796+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:43.509253+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:44.509706+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:45.510171+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:46.510606+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:47.511008+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:48.511391+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:49.511785+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:50.512130+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:51.512515+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:52.513030+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:53.513259+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:54.513808+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:55.514067+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:56.514260+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:57.514795+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:58.515062+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:59.515323+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:00.515608+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:01.515996+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:02.516488+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:03.517005+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:04.517593+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:05.518158+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:06.519395+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:07.520441+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:08.521628+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:09.521987+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:10.522837+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:11.523478+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:12.524068+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:13.524422+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:14.524843+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:15.525778+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:16.526322+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:17.527021+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:18.527629+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:19.528019+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:20.528873+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:21.529459+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:22.530009+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:23.530384+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:24.530880+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:25.531330+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:26.531739+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:27.532075+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:28.532429+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:29.532825+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:30.533145+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:31.533517+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:32.534021+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:33.534467+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:34.534996+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:35.535665+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:36.536032+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:37.536379+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:38.536609+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:39.536963+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:40.537243+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:41.537482+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:42.537744+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:43.538019+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:44.538371+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:45.538708+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:46.538893+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:47.539093+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:48.539435+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:49.539929+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:50.550041+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:51.551619+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:52.553424+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:53.555707+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:54.557676+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:55.558954+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:56.560596+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:57.562610+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:58.563264+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:59.563641+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:00.564045+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:01.564452+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:02.564788+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:03.565135+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:04.565501+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:05.565749+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:06.566132+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:07.566430+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:08.566719+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:09.567048+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:10.567865+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:11.568454+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:12.569950+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:13.570641+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:14.571355+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:15.571965+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:16.573304+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:17.573984+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:18.574373+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:19.574869+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:20.575678+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:21.576140+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:22.576739+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:23.577275+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:24.577921+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:25.578341+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:26.578715+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:27.579208+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:28.579792+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:29.580201+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:30.580608+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 24002560 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:31.580948+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143499264 unmapped: 23961600 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:32.581329+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143499264 unmapped: 23961600 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:33.581723+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678609 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143499264 unmapped: 23961600 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:34.581970+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143507456 unmapped: 23953408 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:35.582394+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:36.582987+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143507456 unmapped: 23953408 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 159.426895142s of 159.453201294s, submitted: 15
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:37.583437+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:38.583918+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676813 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:39.584295+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:40.584501+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b35000/0x0/0x4ffc00000, data 0x3657781/0x3729000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:41.584935+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:42.585308+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:43.585642+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676813 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:44.586238+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b35000/0x0/0x4ffc00000, data 0x3657781/0x3729000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:45.586643+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:46.587179+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:47.587670+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:48.588054+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677293 data_alloc: 234881024 data_used: 32034816
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:49.588354+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:50.588781+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b35000/0x0/0x4ffc00000, data 0x3657781/0x3729000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:51.589123+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:52.589480+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:53.589886+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677293 data_alloc: 234881024 data_used: 32034816
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.874889374s of 16.894330978s, submitted: 2
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:54.590223+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:55.590710+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:56.591048+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:57.591311+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:58.591725+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677713 data_alloc: 234881024 data_used: 32034816
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:59.591925+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:00.592251+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:01.592625+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 23855104 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:02.593032+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:03.593409+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677713 data_alloc: 234881024 data_used: 32034816
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:04.593981+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:05.594307+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:06.594742+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 23846912 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.290188789s of 13.311237335s, submitted: 2
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:07.595116+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:08.595400+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:09.595766+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:10.596260+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:11.596956+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:12.597355+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:13.598048+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:14.598447+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:15.598852+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:16.599348+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:17.599742+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:18.600110+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:19.600485+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:20.600886+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:21.601365+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:22.601911+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:23.602310+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:24.602769+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:25.603087+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:26.603488+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:27.603779+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 23838720 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:28.604126+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:29.604515+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:30.605115+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:31.605359+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:32.605721+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:33.606030+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:34.606387+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:35.606719+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:36.607081+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:37.607455+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:38.607937+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:39.608339+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:40.608817+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:41.609196+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:42.609749+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:43.610168+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143630336 unmapped: 23830528 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:44.610709+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:45.611414+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:46.611786+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:47.612162+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:48.612628+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:49.612878+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:50.613304+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:51.613748+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:52.614186+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:53.614636+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 23822336 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:54.615141+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:55.615483+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:56.615837+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:57.616217+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:58.616642+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:59.616886+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:00.617262+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:01.617690+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:02.618065+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:03.618356+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:04.618805+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:05.619104+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:06.619352+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:07.619725+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:08.620104+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:09.620492+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143646720 unmapped: 23814144 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:10.620866+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:11.621230+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:12.621467+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:13.621805+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:14.622338+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:15.622816+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:16.623222+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:17.623671+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:18.623988+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3272 syncs, 3.60 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 751 writes, 2590 keys, 751 commit groups, 1.0 writes per commit group, ingest: 3.26 MB, 0.01 MB/s
                                            Interval WAL: 751 writes, 299 syncs, 2.51 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:19.624519+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:20.624909+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:21.625338+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:22.625744+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:23.626143+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:24.626700+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:25.627187+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 23805952 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:26.627757+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:27.628346+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:28.628747+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:29.629334+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:30.629923+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:31.630336+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:32.630813+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:33.631115+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:34.631489+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:35.631876+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:36.632201+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:37.632829+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:38.633032+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:39.633639+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:40.634055+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 23797760 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:41.634391+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:42.634880+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:43.635342+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:44.635867+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:45.636204+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:46.636755+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:47.637154+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:48.637669+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:49.638155+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:50.638617+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:51.638965+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:52.639447+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 23789568 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:53.639845+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 23781376 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:54.640269+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 23781376 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:55.640669+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:56.640997+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:57.641338+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:58.641812+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:59.642218+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:00.642688+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:01.643049+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:02.643361+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:03.644014+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:04.644429+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:05.644808+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:06.645386+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:07.645799+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:08.646160+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:09.646645+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:10.646997+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:11.647429+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 23773184 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:12.647840+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:13.648273+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:14.648718+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:15.648955+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:16.649221+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:17.666835+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:18.667086+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:19.667379+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:20.667723+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:21.668013+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:22.668432+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:23.668758+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:24.669126+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:25.669472+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 23764992 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:26.669761+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:27.670118+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:28.670487+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:29.670832+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:30.671194+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:31.671695+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:32.672153+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:33.672652+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:34.672937+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:35.673230+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:36.673950+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:37.674310+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:38.674920+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:39.675177+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:40.675359+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:41.675833+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 23756800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:42.676226+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:43.676715+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:44.677037+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:45.677417+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:46.677742+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:47.678156+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:48.678669+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:49.678989+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 23748608 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680513 data_alloc: 234881024 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 163.313308716s of 163.336517334s, submitted: 15
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:50.679355+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143720448 unmapped: 23740416 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:51.679817+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143720448 unmapped: 23740416 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:52.680184+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 23699456 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:53.680735+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:54.681131+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:55.681647+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:56.981496+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:57.981934+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:58.982394+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:59.982779+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:00.983187+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516145822' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:01.983709+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:02.984153+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:03.984495+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:04.984968+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:05.985388+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:06.985782+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:07.986190+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:08.986679+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:09.987071+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:10.987470+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:11.987886+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:12.988256+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:13.988781+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:14.989202+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:15.989690+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:16.990085+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:17.990447+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:18.990931+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:19.991312+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:20.991805+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:21.992153+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:22.992673+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:23.993051+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:24.993312+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:25.993705+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:26.994078+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:27.994613+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:28.994905+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:29.995208+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:30.995610+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:31.995968+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:32.996165+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:33.996508+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:34.997073+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:35.997666+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:36.998067+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:37.998492+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:38.999020+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:39.999490+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b20000/0x0/0x4ffc00000, data 0x366c781/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677697 data_alloc: 218103808 data_used: 32022528
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7ec3400 session 0x55f0a7f5ab40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:41.000008+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a87bb000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 23658496 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.244667053s of 50.852611542s, submitted: 90
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:42.000406+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a87bb000 session 0x55f0a562ef00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:43.000832+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:44.001083+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:45.001649+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535347 data_alloc: 234881024 data_used: 24195072
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:46.002133+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:47.002615+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:48.002957+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:49.003332+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:50.003787+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535347 data_alloc: 234881024 data_used: 24195072
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:51.004244+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a694a000 session 0x55f0a8172960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7e72800 session 0x55f0a75bda40
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139329536 unmapped: 28131328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6c400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.297913551s of 10.423287392s, submitted: 18
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:52.004867+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8677000/0x0/0x4ffc00000, data 0x2b15781/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7e6c400 session 0x55f0a7ece1e0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:53.005162+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:54.005343+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:55.005808+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283831 data_alloc: 218103808 data_used: 13914112
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:56.006222+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x153870f/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:57.022880+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:58.023704+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:59.024083+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:00.024478+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283831 data_alloc: 218103808 data_used: 13914112
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:01.024903+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x153870f/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:02.025363+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x153870f/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:03.025743+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a6bbd800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.247928619s of 11.414648056s, submitted: 33
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:04.025998+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 139 ms_handle_reset con 0x55f0a6bbd800 session 0x55f0a64145a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 36642816 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a4b41800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:05.026429+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320370 data_alloc: 218103808 data_used: 13918208
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 140 ms_handle_reset con 0x55f0a4b41800 session 0x55f0a58874a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:06.026818+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a9f59400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _renew_subs
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:07.027155+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 141 ms_handle_reset con 0x55f0a9f59400 session 0x55f0a6b8e960
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x19ada6d/0x1a81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:08.027642+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:09.028041+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153da4a/0x1610000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:10.028461+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296774 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:11.028823+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:12.029223+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:13.029696+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:14.030106+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153da4a/0x1610000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.709212303s of 11.402141571s, submitted: 109
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:15.030607+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: get_auth_request con 0x55f0a5768000 auth_method 0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:16.031024+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:17.031285+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:18.031763+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:19.032200+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:20.032887+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:21.033371+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:22.034007+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:23.034420+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:24.034669+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:25.035105+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:26.035621+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:27.036119+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:28.036688+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:29.037159+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:30.037628+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:31.038059+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:32.038475+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:33.038943+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:34.039371+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:35.039895+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:36.040151+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:37.040624+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:38.040903+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:39.041165+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:40.041621+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:41.042017+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:42.042378+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:43.042670+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:44.043036+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:45.043306+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:46.043856+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:47.044236+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:48.044444+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:49.044875+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:50.045987+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:51.046441+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:52.046882+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:53.047328+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:54.047766+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:55.048486+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:56.048898+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:57.049282+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:58.049753+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:59.050189+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:00.050719+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:01.051122+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:02.051511+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:03.052015+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:04.052387+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:05.052863+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:06.053238+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:07.053735+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:08.054144+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:09.054637+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131031040 unmapped: 36429824 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:10.055090+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:11.055807+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:12.056207+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:13.056632+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:14.056864+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:15.057219+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:16.057658+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:17.058124+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:18.058639+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:19.058815+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:20.059159+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:21.059439+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:22.059790+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:23.060184+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:24.060363+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:25.060674+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:26.060980+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:27.061337+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:28.061812+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:29.062008+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:30.062200+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:31.062409+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:32.062610+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131039232 unmapped: 36421632 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:33.062784+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 36364288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config show' '{prefix=config show}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
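Note: these do_command entries show the OSD servicing requests on its admin socket; they correspond to the same commands the ceph CLI issues via `ceph daemon osd.1 <command>` (config diff, config show, counter dump, counter schema, and the perf/log dumps further below). A minimal wrapper for pulling the same data, where the daemon name and local socket access are assumptions about this host:

    # Issue an admin-socket command through the ceph CLI and parse the JSON
    # reply; requires the ceph client tools and a local osd.1 socket.
    import json, subprocess

    def osd_admin(command, daemon="osd.1"):
        out = subprocess.check_output(["ceph", "daemon", daemon, *command.split()])
        return json.loads(out)

    cfg = osd_admin("config show")  # same request as the 'config show' entry above
    print(len(cfg), "config values")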
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:34.062978+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 36208640 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:35.063355+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 36839424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:36.063685+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'perf dump' '{prefix=perf dump}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:37.063883+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 36487168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'perf schema' '{prefix=perf schema}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:38.064300+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:39.064582+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:40.064776+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:41.065070+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:42.065271+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:43.065470+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:44.065648+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:45.066064+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:46.066270+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:47.067039+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:48.067458+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:49.067651+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:50.067837+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:51.068032+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:52.068218+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:53.068394+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:54.068639+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:55.068841+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:56.069116+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:57.069333+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:58.069649+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:59.069945+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:00.070180+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:01.070698+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:02.070950+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:03.071354+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:04.071773+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:05.072085+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:06.072296+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:07.072718+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:08.072985+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:09.073445+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:10.073758+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:11.074118+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:12.074376+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:13.074814+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:14.075212+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:15.075701+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:16.076058+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:17.076375+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:18.076814+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:19.077254+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:20.077793+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:21.078173+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:22.078648+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:23.079059+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:24.079676+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:25.080276+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 ms_handle_reset con 0x55f0a8590800 session 0x55f0a80d0f00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7c73c00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:26.080746+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 ms_handle_reset con 0x55f0a8f73400 session 0x55f0a756c3c0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a7e6e800
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 ms_handle_reset con 0x55f0a8f70400 session 0x55f0a75c34a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a72d2000
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:27.081211+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:28.081717+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:29.082072+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:30.082473+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:31.082912+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:32.083344+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:33.083773+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:34.084178+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:35.084458+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:36.084651+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:37.084865+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:38.085054+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:39.085275+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:40.085861+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:41.086476+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:42.086725+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:43.086950+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:44.087159+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:45.087734+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:46.087919+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:47.088312+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:48.088711+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:49.088978+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:50.089366+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:51.089742+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:52.090106+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:53.090511+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:54.091002+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:55.091442+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:56.091815+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:57.092222+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:58.092914+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:59.093117+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:00.093610+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:01.093949+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:02.094329+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:03.094646+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:04.095000+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:05.095422+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:06.095640+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:07.096052+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:08.096505+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:09.096717+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:10.097147+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:11.097519+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:12.097991+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:13.098357+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:14.098766+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:15.099219+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:16.099677+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:17.100039+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:18.100402+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:19.100750+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:20.101158+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:21.101621+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:22.101985+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:23.102406+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:24.102834+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:25.103319+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:26.103856+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:27.104332+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:28.104807+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:29.105185+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:30.105693+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:31.106109+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:32.106683+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:33.107091+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:34.107486+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:35.107943+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:36.108328+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:37.108798+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:38.109104+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:39.109379+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:40.109847+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:41.110132+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:42.110633+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 ms_handle_reset con 0x55f0a7e71000 session 0x55f0a58974a0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: handle_auth_request added challenge on 0x55f0a979ec00
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:43.111083+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:44.111465+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:45.111872+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:46.112275+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:47.112739+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:48.113174+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:49.113628+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:50.114084+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:51.114459+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:52.114835+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:53.115290+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:54.115749+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:55.116213+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:56.116732+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:57.117072+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:58.117448+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:59.117937+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:00.118334+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:01.118701+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:02.119089+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:03.119454+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:04.119839+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:05.120323+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:06.120795+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:07.121358+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:08.121773+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:09.122127+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:10.122330+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:11.122678+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:12.122929+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:13.123306+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:14.123691+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:15.124230+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:16.124647+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:17.124853+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:18.125113+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:19.125493+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:20.125684+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:21.126025+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:22.126299+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:23.126746+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:24.127115+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:25.127518+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:26.128067+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:27.128514+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:28.129091+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:29.129524+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:30.130010+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:31.130359+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:32.130747+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:33.131150+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:34.131590+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:35.132151+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:36.132519+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:37.132990+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:38.133273+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:39.133684+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:40.134257+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:41.134710+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:42.135193+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:43.135901+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:44.136164+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:45.136853+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:46.137289+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:47.137777+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:48.138300+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:49.138740+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:50.139015+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:51.139435+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:52.139971+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:53.140393+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:54.140811+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:55.141255+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:56.141755+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:57.142417+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:58.142801+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:59.143158+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:00.143499+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:01.143935+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 36741120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:02.144668+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:03.145093+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:04.145414+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:05.145921+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:06.146416+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:07.146972+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:08.147383+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:09.147846+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:10.148291+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:11.148783+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:12.149229+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:13.149673+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:14.150009+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:15.150459+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 36732928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:16.150831+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:17.151177+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:18.151640+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:19.152040+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:20.152430+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:21.152843+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:22.153139+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:23.153986+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:24.154290+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:25.154662+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:26.154923+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:27.155248+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:28.155622+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:29.155872+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:30.156321+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:31.156627+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:32.156949+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:33.157376+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:34.157888+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:35.158362+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:36.158973+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:37.159324+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:38.159754+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:39.159957+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:40.160349+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:41.160748+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:42.160997+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:43.161397+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:44.161890+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:45.162369+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:46.162844+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:47.163235+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:48.163648+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:49.164046+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:50.164440+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:51.165092+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:52.165507+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:53.166001+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:54.166413+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:55.166875+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:56.167192+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:57.167820+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:58.168207+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:59.168678+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:00.169097+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:01.169516+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:02.170012+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:03.170420+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:04.170823+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:05.171318+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:06.171807+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:07.172202+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:08.172670+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:09.172981+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:10.173296+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:11.173695+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:12.174094+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:13.174520+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:14.174918+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:15.175383+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:16.175781+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:17.176143+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:18.176521+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:19.177114+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:20.177739+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:21.178041+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:22.178439+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:23.178840+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:24.179252+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:25.179798+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:26.180200+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:27.180636+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:28.181012+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:29.181245+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:30.182032+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:31.182445+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:32.182872+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:33.183356+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:34.183850+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:35.184348+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:36.185745+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:37.185934+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:38.186367+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 36691968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:39.186788+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:40.187086+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:41.187497+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:42.187889+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:43.188306+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:44.188763+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:45.189131+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:46.189723+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:47.190204+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:48.190841+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:49.191266+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:50.191806+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:51.192182+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:52.192698+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:53.193281+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:54.193787+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 36683776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:55.194253+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:56.194682+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:57.195083+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:58.195418+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:59.195810+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:00.196197+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:01.196669+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:02.197041+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:03.197505+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:04.198373+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:05.198882+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:06.199203+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:07.199701+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:08.200095+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:09.200621+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 36675584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:10.200961+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:11.201226+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:12.201682+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:13.202040+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:14.202494+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:15.203057+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:16.203624+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:17.204051+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:18.204481+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 12K writes, 3504 syncs, 3.50 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 495 writes, 1261 keys, 495 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s
                                            Interval WAL: 495 writes, 232 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:19.204933+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:20.205313+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:21.205947+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:22.206363+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:23.206811+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:24.207274+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:25.207755+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:26.208136+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:27.208625+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:28.209034+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:29.210090+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:30.210861+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:31.211278+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:32.211757+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:33.212178+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:34.212895+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:35.213379+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:36.218727+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:37.219235+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:38.219813+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:39.220220+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:40.220821+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:41.221204+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 36659200 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:42.221690+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:43.222156+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:44.222670+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:45.223128+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:46.223495+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:47.223895+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:48.224484+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:49.224870+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:50.225158+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:51.225686+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:52.226072+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:53.226420+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:54.226855+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:55.227302+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:56.227797+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:57.228333+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:58.228735+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:59.228974+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:00.229346+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 36651008 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:01.229721+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 36773888 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:02.230098+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 36773888 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:03.230456+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 36773888 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:04.230787+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 36773888 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:05.231360+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:06.231762+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:07.232128+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:08.232489+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:09.232840+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:10.233266+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:11.233788+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:12.234225+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:13.234505+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:14.234936+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:15.235809+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:16.236125+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:17.236709+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:18.237051+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:19.237459+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:20.237911+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:21.238383+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:22.238798+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 36765696 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:23.239202+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:24.239623+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:25.240030+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:26.240447+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:27.240855+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:28.241292+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:29.241879+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:30.242306+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:31.242888+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:32.243322+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:33.243865+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:34.244331+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:35.244915+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:36.245356+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:37.245807+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:38.246302+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36757504 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:39.246791+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:40.247201+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:41.247789+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:42.248173+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:43.248662+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:44.249018+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:45.249440+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:46.249804+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:47.250212+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:48.250761+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4a000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:49.251122+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297799 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:50.251496+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130711552 unmapped: 36749312 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 516.238464355s of 516.264099121s, submitted: 13
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:51.251941+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 36724736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:52.252353+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 36716544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:53.252767+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 36700160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:54.253142+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296991 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 36667392 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:55.253693+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:56.254069+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:57.254341+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:58.254803+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:59.255182+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:00.255657+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:01.256057+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:02.256468+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:03.256758+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:04.257114+0000)
Dec 03 02:45:19 compute-0 crontab[496219]: (root) LIST (root)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:05.257482+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:06.257756+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:07.258018+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:08.258403+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:09.258763+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:10.259154+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:11.259470+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:12.259865+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:13.260283+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:14.260709+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:15.261192+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:16.261699+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:17.262076+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:18.262500+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:19.262839+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:20.263415+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:21.263813+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:22.265022+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:23.265407+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:24.266047+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:25.266341+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130826240 unmapped: 36634624 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:26.266845+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:27.267356+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:28.267763+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:29.268173+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:30.268433+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:31.268904+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:32.269315+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:33.269800+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:34.270233+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:35.270861+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:36.271303+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:37.271837+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:38.272299+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:39.272905+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:40.273275+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:41.273927+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:42.274150+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 36626432 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:43.274627+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:44.275040+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:45.275488+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:46.275943+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:47.276335+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:48.276721+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:49.277106+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:50.277453+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:51.277809+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:52.278196+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:53.278646+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:54.279030+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:55.279364+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:56.785691+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:57.785951+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 36618240 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:58.786209+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:59.786647+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:00.787029+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:01.787265+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:02.787776+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:03.788469+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:04.788892+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:05.789370+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:06.789780+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:07.790143+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:08.790475+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:09.790855+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:10.791130+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:11.791357+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:12.791807+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:13.792201+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:14.792627+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:15.793003+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:16.793901+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:17.795878+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:18.796205+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:19.796643+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:20.797102+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 36610048 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:21.797990+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:22.798403+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:23.798709+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:24.799145+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:25.799777+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:26.800328+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:27.800759+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:28.801151+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:29.801426+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:30.801915+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:31.802398+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:32.802858+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:33.803308+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:34.803719+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 36601856 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:35.804354+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:36.804735+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:37.805130+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:38.805675+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:39.806031+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:40.806395+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:41.806774+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:42.807145+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:43.807513+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:44.808038+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:45.808778+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:46.809137+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:47.809504+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:48.809918+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:49.810316+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:50.810696+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:51.811081+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:52.811492+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 36593664 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:53.811818+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:54.812087+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:55.812485+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:56.812750+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:57.813043+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:58.813396+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:59.813811+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:00.814345+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:01.814750+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:02.815110+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:03.815501+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:04.816075+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:05.816511+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:06.817002+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:07.817477+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130875392 unmapped: 36585472 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:08.817913+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:09.818230+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:10.818777+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:11.819218+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:12.819657+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:13.820075+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:14.821228+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:15.823119+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:16.824904+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:17.825795+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:18.826135+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:19.826452+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:20.826828+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:21.827257+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:22.827806+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:23.828098+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 36577280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:24.828463+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 36569088 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:25.829007+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 36569088 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:26.829424+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 36569088 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:27.829850+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 36569088 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:28.830148+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 36569088 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:29.840729+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 36560896 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:30.840950+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 36560896 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:31.841340+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 36560896 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:32.841719+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 36560896 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:33.842057+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 36560896 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:34.842838+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:35.843276+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:36.843672+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:37.844111+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:38.844636+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:39.845012+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:40.845415+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 36552704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:41.845808+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:42.846236+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:43.846771+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:44.847086+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:45.847484+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:46.847837+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:47.848235+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:48.848795+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:49.849141+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:50.849698+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:51.850090+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:52.850505+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:53.851024+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:54.851398+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:55.851859+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:56.852186+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:57.852679+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:58.853063+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:59.853475+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:00.853742+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:01.854107+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:02.854783+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:03.855065+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:04.855486+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:05.856123+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:06.856767+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:07.856968+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:08.857369+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:09.857681+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:10.858102+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:11.858858+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:12.859337+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 36536320 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:13.859730+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:14.860152+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:15.860816+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:16.861141+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:17.861465+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:18.861840+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:19.862237+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:20.862733+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:21.863104+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:22.863359+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:23.863815+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:24.864082+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:25.864509+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:26.865354+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 36528128 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:27.865787+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:28.866163+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:29.866491+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:30.866910+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:31.867096+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:32.867284+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:33.867470+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:34.867780+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:35.868139+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:36.868387+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:37.868615+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:38.868833+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:39.869035+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:40.869273+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:41.869463+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:42.869872+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 36519936 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:43.870121+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 36511744 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:44.870498+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 36511744 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:45.870885+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296919 data_alloc: 218103808 data_used: 13926400
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config show' '{prefix=config show}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131178496 unmapped: 36282368 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:46.871132+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9c4b000/0x0/0x4ffc00000, data 0x153f4cd/0x1613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 36708352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:47.871309+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 36544512 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: tick
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_tickets
Dec 03 02:45:19 compute-0 ceph-osd[207705]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:48.871491+0000)
Dec 03 02:45:19 compute-0 ceph-osd[207705]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:45:19 compute-0 ceph-mon[192821]: pgmap v2736: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3236038443' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 03 02:45:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/507683952' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 03 02:45:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/26662517' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 03 02:45:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2516145822' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 03 02:45:19 compute-0 rsyslogd[188612]: imjournal from <compute-0:ceph-osd>: begin to drop messages due to rate-limiting
Dec 03 02:45:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 03 02:45:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125635330' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.520 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.520 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:45:19.536 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 03 02:45:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 03 02:45:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1033090250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 03 02:45:19 compute-0 nova_compute[351485]: 2025-12-03 02:45:19.759 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 03 02:45:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005012853' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 03 02:45:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323295108' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3125635330' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1033090250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4005012853' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/323295108' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 03 02:45:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455880363' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 03 02:45:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622320020' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 03 02:45:20 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15999 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:21 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16001 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:21 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16003 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:21 compute-0 ceph-mon[192821]: pgmap v2737: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1455880363' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 03 02:45:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3622320020' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 03 02:45:21 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16005 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:21 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16007 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:22 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16011 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:22 compute-0 nova_compute[351485]: 2025-12-03 02:45:22.385 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:22 compute-0 ceph-mon[192821]: from='client.15999 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:22 compute-0 ceph-mon[192821]: from='client.16001 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:22 compute-0 ceph-mon[192821]: from='client.16003 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 03 02:45:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/67348879' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 03 02:45:22 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16015 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 03 02:45:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2382505171' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mon[192821]: from='client.16005 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mon[192821]: from='client.16007 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mon[192821]: pgmap v2738: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:23 compute-0 ceph-mon[192821]: from='client.16011 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/67348879' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2382505171' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 03 02:45:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668477928' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:45:23 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16023 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:23 compute-0 podman[496769]: 2025-12-03 02:45:23.842972708 +0000 UTC m=+0.095513306 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Dec 03 02:45:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:45:23 compute-0 podman[496767]: 2025-12-03 02:45:23.879257852 +0000 UTC m=+0.133378875 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 03 02:45:23 compute-0 podman[496770]: 2025-12-03 02:45:23.88023257 +0000 UTC m=+0.122001894 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 03 02:45:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 03 02:45:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/60768788' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:08.646135+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:09.646509+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:10.647047+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:11.647437+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:12.647773+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:13.648178+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:14.648499+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:15.648784+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:16.649057+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:17.649462+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:18.649768+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 12353536 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:19.650051+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:20.650403+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:21.650807+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:22.651169+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:23.651621+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:24.651914+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:25.652168+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 12345344 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:26.652446+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:27.652816+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:28.653289+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189607 data_alloc: 218103808 data_used: 5914624
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:29.653830+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 12337152 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.197372437s of 99.864547729s, submitted: 106
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd99ed1400 session 0x55cd964a25a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd99ed1800 session 0x55cd98b03860
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:30.654150+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 99565568 unmapped: 12320768 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9e19000/0x0/0x4ffc00000, data 0x1788188/0x1855000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:31.654506+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd9732a400 session 0x55cd981bb860
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:32.655072+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:33.655453+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042487 data_alloc: 218103808 data_used: 1507328
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:34.655919+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:35.656348+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:36.656762+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:37.657416+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:38.657772+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042487 data_alloc: 218103808 data_used: 1507328
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:39.658289+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:40.659026+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:41.659444+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:42.659830+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:43.660232+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:44.660682+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042487 data_alloc: 218103808 data_used: 1507328
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:45.661081+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:46.661477+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:47.662261+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:48.662806+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 15450112 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd98eac800 session 0x55cd95cfd860
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.159221649s of 19.462923050s, submitted: 48
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd98ead800 session 0x55cd98e99c20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fad32000/0x0/0x4ffc00000, data 0x8700f3/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:49.663159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042415 data_alloc: 218103808 data_used: 1507328
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 15548416 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:50.663506+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 ms_handle_reset con 0x55cd984f5800 session 0x55cd98e82960
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:51.664129+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:52.664578+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:53.664958+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:54.665245+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:55.665704+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:56.666095+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:57.666622+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:58.666966+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:12:59.667322+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:00.667796+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:01.668102+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:02.668422+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:03.668721+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:04.669091+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:05.669455+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:06.669753+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:07.670190+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:08.670490+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:09.670806+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:10.671146+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:11.671382+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:12.671772+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:13.672115+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:14.672888+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:15.673140+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:16.673520+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:17.674013+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:18.675072+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:19.675326+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:20.675729+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:21.676041+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:22.676350+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:23.676784+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:24.677096+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:25.677359+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:26.677710+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:27.678087+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:28.678486+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 16465920 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:29.678756+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:30.679202+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:31.679500+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:32.679836+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:33.681386+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 16457728 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:34.681833+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:35.682213+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:36.682586+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:37.683036+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:38.683364+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 16506880 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:39.683740+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:40.684152+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:41.684469+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:42.684837+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:43.685236+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:44.686411+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:45.686752+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:46.687109+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:47.687620+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:48.687928+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:49.688313+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:50.688837+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:51.689159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:52.689756+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:53.690154+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:54.690626+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:55.690982+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:56.691287+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:57.692009+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:58.692369+0000)
Dec 03 02:45:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:13:59.692703+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:00.693175+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:01.693634+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:02.694068+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:03.694644+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:04.694896+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:05.695256+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:06.695763+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:07.696209+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:08.696721+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:09.697119+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971097 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:10.697479+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:11.697865+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb4e2000/0x0/0x4ffc00000, data 0xc4072/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:12.698232+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:13.698463+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95387648 unmapped: 16498688 heap: 111886336 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:14.698836+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 85.413833618s of 85.549705505s, submitted: 21
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974456 data_alloc: 218103808 data_used: 217088
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 33267712 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:15.699249+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 134 ms_handle_reset con 0x55cd98502400 session 0x55cd99c66d20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:16.699853+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 134 heartbeat osd_stat(store_statfs(0x4face2000/0x0/0x4ffc00000, data 0x8c4082/0x98c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:17.700409+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9729e780
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:18.700954+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:19.701456+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:20.701878+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:21.702227+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:22.702657+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:23.703000+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:24.703358+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:25.703772+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:26.704187+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:27.704475+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:28.704820+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:29.705221+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:30.705777+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:31.706093+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:32.706466+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:33.706843+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:34.707089+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:35.707735+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:36.708152+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:37.708632+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:38.708860+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:39.709249+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:40.709736+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:41.720607+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:42.720963+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:43.721293+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:44.721609+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:45.721992+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:46.723180+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:47.725002+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:48.726607+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:49.728324+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:50.730063+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:51.731681+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:52.732421+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:53.732767+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:54.733114+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:55.733436+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:56.733776+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:57.734204+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:58.734678+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:14:59.735085+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:00.735452+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:01.735822+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:02.736057+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:03.736410+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:04.736810+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091572 data_alloc: 218103808 data_used: 225280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:05.737248+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fa4da000/0x0/0x4ffc00000, data 0x10c777c/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 33259520 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:06.737761+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.784358978s of 51.954193115s, submitted: 16
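The utilization line above says the kv-sync thread sat idle for nearly the whole ~52 s window while committing 16 batches. The implied arithmetic, as a one-off sketch using the line's own numbers:

```python
# Hedged sketch: busy fraction implied by the _kv_sync_thread line above.
idle_s, window_s, submitted = 51.784358978, 51.954193115, 16
busy_s = window_s - idle_s
print(f"busy {busy_s:.3f}s ({100 * busy_s / window_s:.2f}%), "
      f"~{1000 * busy_s / submitted:.1f} ms per submitted batch")
# -> busy 0.170s (0.33%), ~10.6 ms per submitted batch
```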
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd96be94a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 33275904 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a7baf00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:07.738183+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 26484736 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:08.738476+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 22454272 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96603a40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98ead800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98ead800 session 0x55cd981cc1e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9a7bbe00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a7bb680
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd981bbe00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:09.739269+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd981bab40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed1400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99ed1400 session 0x55cd9a64fc20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9a64ed20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 102481920 unmapped: 26189824 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9dee000/0x0/0x4ffc00000, data 0x17b577c/0x1880000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a64f2c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a64fe00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96be90e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac000 session 0x55cd98ab34a0
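Interleaved with the tick cycles above is a burst of handle_auth_request / ms_handle_reset pairs: short-lived client connections being challenged and then torn down. A quick tally per connection pointer, as a sketch (the regex is an assumption about this log's phrasing, and pointer reuse means one address can cover several distinct connections):

```python
import re
import sys
from collections import Counter

# Hedged sketch: count ms_handle_reset events per connection pointer to see
# which connections churn most in a captured log window.
RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+) session 0x[0-9a-f]+")

resets = Counter()
for line in sys.stdin:
    m = RESET.search(line)
    if m:
        resets[m.group(1)] += 1

for con, n in resets.most_common():
    print(f"{con}: {n} resets")
```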
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260267 data_alloc: 218103808 data_used: 7045120
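The _resize_shards line above splits the autotuned cache between kv, onode, metadata, and data shards. A sanity-check sketch using that line's numbers; the shard names mirror the log fields, and treating cache_size as an upper bound on the summed allocations is an assumption about the autotuner's invariant:

```python
# Hedged sketch: check the per-shard allocations stay within cache_size
# and show each shard's share.
shards = {
    "kv": 1207959552,
    "kv_onode": 234881024,
    "meta": 1140850688,
    "data": 218103808,
}
cache_size = 2845415832
allocated = sum(shards.values())
assert allocated <= cache_size
for name, nbytes in shards.items():
    print(f"{name:9s} {nbytes / 2**20:6.0f} MiB ({100 * nbytes / cache_size:4.1f}%)")
print(f"headroom  {(cache_size - allocated) / 2**20:6.0f} MiB")
# kv 1152 MiB (42.5%), meta 1088 MiB (40.1%), onode 224 MiB, data 208 MiB,
# leaving ~42 MiB of headroom.
```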
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:10.739687+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd98ab2f00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd96707680
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:11.739998+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:12.740302+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:13.740797+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd967074a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9330000/0x0/0x4ffc00000, data 0x22727de/0x233e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:14.741158+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96707a40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99dffc00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99dffc00 session 0x55cd981cc1e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd981cd0e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 25419776 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a7bbc20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a7ba3c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd9a7baf00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99dff000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99dff000 session 0x55cd96b90b40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd971a10e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:15.741629+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313159 data_alloc: 218103808 data_used: 7045120
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9284000/0x0/0x4ffc00000, data 0x231d807/0x23ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd98b03860
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd966754a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105086976 unmapped: 23584768 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd966745a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de000 session 0x55cd98ab3a40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:16.741889+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 23224320 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.374081612s of 10.282898903s, submitted: 125
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:17.747832+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 23224320 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:18.748232+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd97f26960
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96e56000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105472000 unmapped: 23199744 heap: 128671744 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98b030e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd98e82000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991dec00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991dec00 session 0x55cd96674d20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df000 session 0x55cd95cfdc20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a7bab40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:19.748584+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd99c67e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98e99e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd964d92c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 26951680 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:20.748896+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409345 data_alloc: 218103808 data_used: 7049216
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105480192 unmapped: 26869760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82ea000/0x0/0x4ffc00000, data 0x32b58b2/0x3384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98ab21e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:21.749340+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 26902528 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd95affa40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:22.749907+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd98b032c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 26902528 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df000 session 0x55cd98b03e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:23.750095+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105562112 unmapped: 26787840 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:24.750607+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 26779648 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:25.750909+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e9000/0x0/0x4ffc00000, data 0x32b58c2/0x3385000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439075 data_alloc: 234881024 data_used: 11091968
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 105865216 unmapped: 26484736 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:26.751306+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 25141248 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:27.751792+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 25141248 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e9000/0x0/0x4ffc00000, data 0x32b58c2/0x3385000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:28.752009+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 107724800 unmapped: 24625152 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991dfc00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.870912552s of 12.140001297s, submitted: 55
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991dfc00 session 0x55cd9668fc20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:29.755323+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 21250048 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:30.755508+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524220 data_alloc: 234881024 data_used: 22650880
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 18481152 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:31.755765+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 14761984 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:32.756003+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 12640256 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:33.756388+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e8000/0x0/0x4ffc00000, data 0x32b58e5/0x3386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:34.756758+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:35.757111+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1570620 data_alloc: 251658240 data_used: 29188096
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:36.757323+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 12574720 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd964d9c20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:37.757755+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 15228928 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98b02b40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8da5000/0x0/0x4ffc00000, data 0x27f8883/0x28c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:38.757982+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 16285696 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:39.758246+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 15892480 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:40.758513+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1488350 data_alloc: 251658240 data_used: 29421568
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 12738560 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:41.758790+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 12738560 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:42.758981+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 12738560 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.953304291s of 14.133977890s, submitted: 40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98aec780
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:43.759137+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8da5000/0x0/0x4ffc00000, data 0x27f8883/0x28c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,1])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 16973824 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd958bbe00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:44.759320+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:45.759773+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332442 data_alloc: 234881024 data_used: 19333120
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:46.760211+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:47.760773+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:48.768312+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:49.768581+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:50.768825+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332442 data_alloc: 234881024 data_used: 19333120
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:51.769167+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:52.769884+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:53.770193+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:54.770437+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:55.770802+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332442 data_alloc: 234881024 data_used: 19333120
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:56.771001+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 114696192 unmapped: 17653760 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.753718376s of 13.965150833s, submitted: 43
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:57.771241+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1d1d7ee/0x1dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 15818752 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:58.771470+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 15818752 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:15:59.771808+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:00.772004+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362744 data_alloc: 234881024 data_used: 19480576
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:01.772284+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:02.772463+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 16089088 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:03.772670+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f96e8000/0x0/0x4ffc00000, data 0x1eb17ee/0x1f7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df000 session 0x55cd98b030e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd981ae1e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd97f32d20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 11501568 heap: 132349952 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd981cdc20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:04.772956+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd973a63c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6e000 session 0x55cd9729f680
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 21823488 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:05.773480+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525693 data_alloc: 234881024 data_used: 20959232
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8494000/0x0/0x4ffc00000, data 0x310c850/0x31da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:06.773851+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:07.774202+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f840d000/0x0/0x4ffc00000, data 0x3193850/0x3261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:08.774860+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f840d000/0x0/0x4ffc00000, data 0x3193850/0x3261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 21102592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:09.775248+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd97f330e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f840d000/0x0/0x4ffc00000, data 0x3193850/0x3261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119652352 unmapped: 21094400 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd9668fa40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:10.775678+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.796039581s of 13.633616447s, submitted: 196
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522041 data_alloc: 234881024 data_used: 20971520
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119808000 unmapped: 20938752 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:11.776026+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd96431c20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96db4d20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 20930560 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:12.776192+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 20930560 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:13.776493+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83ed000/0x0/0x4ffc00000, data 0x31b2873/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 20930560 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83ed000/0x0/0x4ffc00000, data 0x31b2873/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:14.776939+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119824384 unmapped: 20922368 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:15.777269+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523870 data_alloc: 234881024 data_used: 20971520
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119824384 unmapped: 20922368 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:16.777747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119824384 unmapped: 20922368 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:17.778698+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:18.779100+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e3000/0x0/0x4ffc00000, data 0x31bc873/0x328b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:19.779318+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e3000/0x0/0x4ffc00000, data 0x31bc873/0x328b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:20.779824+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523894 data_alloc: 234881024 data_used: 20975616
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 20865024 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:21.780233+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 20840448 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:22.780695+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 18710528 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:23.781100+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.950662613s of 13.055603981s, submitted: 12
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:24.781491+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:25.781890+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e0000/0x0/0x4ffc00000, data 0x31bf873/0x328e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1582690 data_alloc: 251658240 data_used: 28319744
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:26.782282+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83e0000/0x0/0x4ffc00000, data 0x31bf873/0x328e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:27.782671+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:28.782886+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:29.783218+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123740160 unmapped: 17006592 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:30.783393+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1582998 data_alloc: 251658240 data_used: 28319744
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:31.783764+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:32.784215+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83de000/0x0/0x4ffc00000, data 0x31c0873/0x328f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:33.784826+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.107700348s of 10.130168915s, submitted: 3
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:34.785190+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 16965632 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:35.785495+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1582646 data_alloc: 251658240 data_used: 28319744
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 16941056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:36.785866+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83dd000/0x0/0x4ffc00000, data 0x31c1873/0x3290000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 16941056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:37.786159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83dd000/0x0/0x4ffc00000, data 0x31c1873/0x3290000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 16932864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:38.786490+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 16932864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:39.786739+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 16932864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:40.787222+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585242 data_alloc: 251658240 data_used: 28307456
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 16760832 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:41.787685+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:42.787946+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83de000/0x0/0x4ffc00000, data 0x31c1873/0x3290000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:43.788405+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:44.788836+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.781484604s of 10.835879326s, submitted: 20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 16752640 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:45.789238+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1583282 data_alloc: 251658240 data_used: 28307456
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd98ab2f00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd9a7bbc20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 16744448 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:46.789672+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d8000/0x0/0x4ffc00000, data 0x31c7873/0x3296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,3])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 19513344 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:47.789981+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9729f0e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd96602000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd96be90e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 19513344 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:48.790268+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd9a64f2c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 19513344 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:49.791331+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 18448384 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:50.791598+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457656 data_alloc: 234881024 data_used: 21876736
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 18448384 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:51.792114+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8c85000/0x0/0x4ffc00000, data 0x291a873/0x29e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 18432000 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:52.792630+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 18423808 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:53.793010+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 18423808 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:54.793385+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:55.793632+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 18423808 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8c85000/0x0/0x4ffc00000, data 0x291a873/0x29e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459736 data_alloc: 234881024 data_used: 22016000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.302616119s of 11.499516487s, submitted: 34
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:56.793847+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125272064 unmapped: 15474688 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:57.794088+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 14237696 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:58.794383+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 14016512 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:16:59.794721+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 14016512 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:00.795111+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571568 data_alloc: 234881024 data_used: 23171072
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:01.795467+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:02.795683+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:03.796007+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:04.796313+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:05.796701+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:06.797062+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:07.797946+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:08.798279+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:09.798621+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:10.798894+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:11.799167+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 13869056 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:12.799364+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:13.799584+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:14.799989+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:15.800352+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:16.800764+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:17.801174+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 13860864 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:18.801783+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:19.802266+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:20.802565+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571584 data_alloc: 234881024 data_used: 23171072
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:21.802950+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd98b03e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x362a873/0x36f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd98b02b40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 13852672 heap: 140746752 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd98b030e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98e91a40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.372167587s of 25.784019470s, submitted: 101
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de400 session 0x55cd98e905a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:22.803143+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 19324928 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f7721000/0x0/0x4ffc00000, data 0x3e7e873/0x3f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:23.803518+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 19324928 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:24.803846+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 19324928 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd98e914a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96db50e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd9a64e780
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd96707c20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:25.804131+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96b90f00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71ea000/0x0/0x4ffc00000, data 0x43b5873/0x4484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126304256 unmapped: 18120704 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674037 data_alloc: 234881024 data_used: 23396352
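The _resize_shards lines break the cache budget into per-shard allocations (kv, kv_onode, meta, data). As a sanity check, the four allocations on the line above should roughly add up to the reported cache_size (they track it, minus slack):

    # Hedged sketch: sum the shard allocations against cache_size.
    cache_size = 2845415832
    allocs = dict(kv=1207959552, kv_onode=234881024,
                  meta=1140850688, data=234881024)
    total = sum(allocs.values())
    print(f"shards {total} B vs cache_size {cache_size} B "
          f"({total / cache_size:.1%})")

Here the shards account for about 99.1% of the budget, while the *_used figures show almost none of it is actually consumed.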
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:26.804382+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125665280 unmapped: 18759680 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96be94a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e8000/0x0/0x4ffc00000, data 0x43b5873/0x4484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:27.804592+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd966f2000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125673472 unmapped: 18751488 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:28.804857+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd981ae1e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 18620416 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd981afa40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:29.805065+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 18612224 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e9000/0x0/0x4ffc00000, data 0x43b5883/0x4485000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:30.805669+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125820928 unmapped: 18604032 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1683651 data_alloc: 234881024 data_used: 24293376
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:31.806067+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6e400 session 0x55cd98e99e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 125820928 unmapped: 18604032 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd96707680
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:32.806644+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 18407424 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd96be9e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.024081230s of 11.396586418s, submitted: 59
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:33.806992+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991de800 session 0x55cd96602780
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 18407424 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:34.807448+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126017536 unmapped: 18407424 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e4000/0x0/0x4ffc00000, data 0x43ba883/0x448a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:35.807831+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1686890 data_alloc: 234881024 data_used: 24293376
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:36.808347+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:37.808689+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:38.809127+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:39.809309+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 18399232 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e4000/0x0/0x4ffc00000, data 0x43ba883/0x448a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:40.809618+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127647744 unmapped: 16777216 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731510 data_alloc: 251658240 data_used: 30433280
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:41.809844+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 12591104 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:42.810189+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 12582912 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:43.810402+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 12582912 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e3000/0x0/0x4ffc00000, data 0x43bb883/0x448b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:44.810787+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 12582912 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:45.811335+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 12574720 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1749430 data_alloc: 251658240 data_used: 33026048
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:46.812113+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 12574720 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.432714462s of 13.512619019s, submitted: 11
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:47.812600+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f71e3000/0x0/0x4ffc00000, data 0x43bb883/0x448b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132661248 unmapped: 11763712 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:48.813016+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 10911744 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df400 session 0x55cd98e834a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6e800 session 0x55cd96602960
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:49.813220+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98502400 session 0x55cd98b023c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f771a000/0x0/0x4ffc00000, data 0x3e84883/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:50.813650+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1705750 data_alloc: 251658240 data_used: 33021952
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:51.813824+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:52.814038+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:53.814445+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f771a000/0x0/0x4ffc00000, data 0x3e84883/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:54.814776+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:55.815192+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 11845632 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd9732a400 session 0x55cd9668fa40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd984f5800 session 0x55cd96db43c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1705582 data_alloc: 251658240 data_used: 33034240
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:56.815371+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd98eac800 session 0x55cd98ab34a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:57.815743+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:58.815966+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:17:59.816808+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:00.817184+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517145 data_alloc: 234881024 data_used: 25182208
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:01.817786+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:02.818214+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:03.818735+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:04.819130+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:05.819709+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517145 data_alloc: 234881024 data_used: 25182208
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:06.820115+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:07.820464+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:08.820806+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:09.821500+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:10.822043+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:11.822374+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f83d0000/0x0/0x4ffc00000, data 0x2e31811/0x2eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517145 data_alloc: 234881024 data_used: 25182208
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 16531456 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.079788208s of 25.470129013s, submitted: 80
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:12.822751+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f864f000/0x0/0x4ffc00000, data 0x2f51811/0x301f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 129949696 unmapped: 14475264 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:13.823961+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 14368768 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:14.824803+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:15.825036+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:16.825439+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558459 data_alloc: 234881024 data_used: 25649152
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f82e0000/0x0/0x4ffc00000, data 0x32c0811/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:17.825824+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 14163968 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:18.826173+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 14155776 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:19.826392+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 14155776 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd981af2c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd991df800 session 0x55cd981ae960
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:20.826633+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 20692992 heap: 144424960 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd97f67680
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:21.826940+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454260 data_alloc: 234881024 data_used: 16445440
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 37478400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:22.827301+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f8354000/0x0/0x4ffc00000, data 0x324d801/0x331a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
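This handle_osd_map line is the OSD consuming a new osdmap epoch: it received the range [136,136], currently has 135, and the sender holds [1,136]; the subsequent "osd.0 136" lines confirm it advanced (and later entries show the same pattern for epochs 137 and 138). A decoding sketch under that reading:

    import re

    # Hedged sketch: decode a handle_osd_map line. "epochs [a,b]" is the
    # range just received, "i have" the OSD's current epoch, "src has"
    # the sender's full range; the OSD then advances to b.
    line = "osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]"
    m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]", line)
    lo, hi, have, src_lo, src_hi = map(int, m.groups())
    print(f"advancing from epoch {have} to {hi} ({hi - have} new map(s))")

To see the cluster's current epoch directly one could run ceph osd stat or ceph osd dump on a node with admin credentials.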
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.226628304s of 10.548670769s, submitted: 55
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 136 ms_handle_reset con 0x55cd9732a400 session 0x55cd964d94a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123731968 unmapped: 37478400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:23.827722+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 136 ms_handle_reset con 0x55cd984f5800 session 0x55cd972034a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 37257216 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98502400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:24.827997+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 136 ms_handle_reset con 0x55cd98502400 session 0x55cd981bab40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 37257216 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:25.828219+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f7783000/0x0/0x4ffc00000, data 0x3e1b403/0x3eeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6e000 session 0x55cd9668e5a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 37224448 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:26.828649+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd9732a400 session 0x55cd98ab3c20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1583793 data_alloc: 234881024 data_used: 16461824
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd984f5800 session 0x55cd966745a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:27.829106+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c34000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:28.829476+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:29.829848+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:30.830112+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 45359104 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:31.830603+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365450 data_alloc: 218103808 data_used: 7061504
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991df800 session 0x55cd981af2c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c34000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:32.830982+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:33.831188+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98eac800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd98eac800 session 0x55cd964d9e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd9732a400 session 0x55cd964d92c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd984f5800 session 0x55cd964d90e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:34.831603+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c98000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 115875840 unmapped: 45334528 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6e000 session 0x55cd95cfd2c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:35.831792+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991de800 session 0x55cd9a7bab40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 33398784 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:36.832010+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8c98000/0x0/0x4ffc00000, data 0x2906f4f/0x29d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436238 data_alloc: 234881024 data_used: 23932928
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991df400 session 0x55cd97203e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.541071892s of 14.065881729s, submitted: 86
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 30957568 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd9732a400 session 0x55cd964314a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd984f5800 session 0x55cd98e985a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd991de800 session 0x55cd98e98960
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:37.832294+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6e000 session 0x55cd98e994a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 ms_handle_reset con 0x55cd99d6f000 session 0x55cd98e99a40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f808a000/0x0/0x4ffc00000, data 0x3513f5f/0x35e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131375104 unmapped: 29835264 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:38.832781+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131399680 unmapped: 29810688 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:39.833070+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd9732a400 session 0x55cd98e98d20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 29777920 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd984f5800 session 0x55cd96be9e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:40.833432+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 29777920 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:41.833750+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd96be8b40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585930 data_alloc: 251658240 data_used: 30912512
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6e000 session 0x55cd96be92c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8086000/0x0/0x4ffc00000, data 0x35159c2/0x35e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 29425664 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:42.833980+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 29425664 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:43.834238+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 29425664 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:44.834446+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 26845184 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:45.834696+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 22487040 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:46.834977+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665266 data_alloc: 251658240 data_used: 41414656
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 20824064 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:47.835242+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 20021248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:48.835589+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 20021248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:49.835907+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 20021248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:50.836218+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 19988480 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:51.836473+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679186 data_alloc: 251658240 data_used: 43380736
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 19988480 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:52.837654+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 19988480 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:53.838112+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141230080 unmapped: 19980288 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:54.838510+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:55.839019+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:56.839450+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679506 data_alloc: 251658240 data_used: 43388928
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:57.839891+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 19947520 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:58.840303+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141279232 unmapped: 19931136 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:18:59.840625+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 19898368 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:00.840977+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 19898368 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:01.841419+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679506 data_alloc: 251658240 data_used: 43388928
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 19898368 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:02.841782+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:03.842089+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:04.842298+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:05.842715+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:06.843050+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1679506 data_alloc: 251658240 data_used: 43388928
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f805c000/0x0/0x4ffc00000, data 0x353f9d2/0x3612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:07.843491+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:08.843930+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 19865600 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.587177277s of 31.760829926s, submitted: 25
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:09.844305+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145563648 unmapped: 15646720 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7b92000/0x0/0x4ffc00000, data 0x3a099d2/0x3adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:10.844660+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 15589376 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:11.844915+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9581 writes, 36K keys, 9581 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9581 writes, 2507 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2123 writes, 7531 keys, 2123 commit groups, 1.0 writes per commit group, ingest: 7.42 MB, 0.01 MB/s
                                            Interval WAL: 2123 writes, 874 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1718392 data_alloc: 251658240 data_used: 43724800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:12.845286+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7b8e000/0x0/0x4ffc00000, data 0x3a0d9d2/0x3ae0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:13.845787+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:14.846429+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7b8e000/0x0/0x4ffc00000, data 0x3a0d9d2/0x3ae0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:15.846809+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:16.847652+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1718392 data_alloc: 251658240 data_used: 43724800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:17.848140+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 14344192 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:18.848358+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 155246592 unmapped: 5963776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.473926544s of 10.002105713s, submitted: 160
Dec 03 02:45:24 compute-0 ceph-osd[206633]: mgrc ms_handle_reset ms_handle_reset con 0x55cd963ae000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec 03 02:45:24 compute-0 ceph-osd[206633]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: get_auth_request con 0x55cd99d6f000 auth_method 0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: mgrc handle_mgr_configure stats_period=5
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:19.849174+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6a34000/0x0/0x4ffc00000, data 0x4b5f9d2/0x4c32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 152649728 unmapped: 8560640 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:20.849637+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 8454144 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:21.849836+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 152969216 unmapped: 8241152 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1876214 data_alloc: 268435456 data_used: 45903872
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:22.850120+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a7000/0x0/0x4ffc00000, data 0x4bf49d2/0x4cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153001984 unmapped: 8208384 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a7000/0x0/0x4ffc00000, data 0x4bf49d2/0x4cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:23.850372+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 8175616 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:24.850797+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 8175616 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:25.851122+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 8175616 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a7000/0x0/0x4ffc00000, data 0x4bf49d2/0x4cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:26.851371+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd9ae405a0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99ed0000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153075712 unmapped: 8134656 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1876214 data_alloc: 268435456 data_used: 45903872
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:27.852117+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:28.852334+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:29.852739+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:30.852929+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:31.853115+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871934 data_alloc: 268435456 data_used: 45903872
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:32.853332+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:33.853572+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 8093696 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:34.853801+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:35.854231+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:36.854666+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871934 data_alloc: 268435456 data_used: 45903872
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:37.855105+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 8085504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:38.855581+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153133056 unmapped: 8077312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:39.856041+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 8069120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:40.856605+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 8069120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:41.857067+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 8069120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871934 data_alloc: 268435456 data_used: 45903872
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:42.857449+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:43.857662+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.369407654s of 25.532297134s, submitted: 23
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:44.858153+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:45.858444+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x4bf69d2/0x4cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991df800 session 0x55cd9a7bbe00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd972032c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:46.858680+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 8060928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663584 data_alloc: 251658240 data_used: 33193984
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:47.859050+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd97f330e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:48.859237+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:49.859792+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:50.860020+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a3f000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:51.860500+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:52.860983+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a3f000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:53.861405+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:54.861626+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a3f000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:55.862063+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:56.862484+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144982016 unmapped: 16228352 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.470028877s of 12.727300644s, submitted: 35
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:57.863023+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:58.863420+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:19:59.863820+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:00.864368+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:01.864772+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:02.865180+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:03.865692+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:04.865872+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:05.866192+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:06.866631+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:07.866962+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:08.867208+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:09.867649+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:10.867987+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:11.868350+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:12.868706+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:13.869046+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:14.869286+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:15.869719+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:16.870078+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:17.870635+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:18.871000+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:19.871362+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:20.871660+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:21.871897+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:22.872214+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:23.872629+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 16203776 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:24.873025+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:25.873202+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:26.873634+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:27.874063+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:28.874354+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:29.874879+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 16195584 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:30.875303+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:31.875714+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:32.876089+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:33.876515+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:34.877020+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:35.877376+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:36.877749+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:37.878051+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 16187392 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:38.878383+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145031168 unmapped: 16179200 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:39.878803+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145031168 unmapped: 16179200 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:40.879158+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145031168 unmapped: 16179200 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:41.879620+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:42.879909+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99ed0800 session 0x55cd981bb0e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd984f5800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:43.880191+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:44.880497+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 16171008 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:45.880908+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145047552 unmapped: 16162816 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:46.881271+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145047552 unmapped: 16162816 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:47.881671+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663008 data_alloc: 251658240 data_used: 33189888
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145047552 unmapped: 16162816 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:48.882682+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.935058594s of 51.943740845s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 16130048 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:49.882954+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 16097280 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:50.883228+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 16097280 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:51.883622+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 16048128 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:52.884445+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:53.884742+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:54.884933+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:55.885249+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:56.885449+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:57.885957+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:58.886395+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:20:59.886838+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:00.887066+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 15982592 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:01.887492+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:02.887864+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:03.888308+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:04.888705+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:05.888905+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:06.889290+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:07.889775+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:08.890159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:09.890612+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 15974400 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:10.890996+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:11.891345+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:12.891699+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:13.892180+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:14.892507+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:15.892883+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:16.893146+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 15966208 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:17.893586+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:18.893906+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:19.894225+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:20.894615+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:21.894861+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:22.895227+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:23.896063+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:24.896319+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:25.896490+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 15958016 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:26.896875+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:27.897156+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:28.897359+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:29.897725+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:30.898069+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:31.898407+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:32.898860+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:33.899225+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 15949824 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:34.899616+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 15941632 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:35.899968+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:36.900312+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:37.900745+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:38.901089+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:39.901282+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:40.901673+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:41.901995+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:42.902284+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 15933440 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664736 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f7a40000/0x0/0x4ffc00000, data 0x3b5c970/0x3c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:43.902514+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 53.815029144s of 54.677520752s, submitted: 132
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 145285120 unmapped: 15925248 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd9a7bbc20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6e000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6e000 session 0x55cd9a7bb2c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd95afcd20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd98e98000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991df800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991df800 session 0x55cd98e99a40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:44.902906+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78eb000/0x0/0x4ffc00000, data 0x3cb1970/0x3d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:45.903425+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:46.903815+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:47.904149+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1681475 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:48.904896+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78eb000/0x0/0x4ffc00000, data 0x3cb1970/0x3d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:49.905317+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144875520 unmapped: 16334848 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:50.905916+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 16326656 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:51.906267+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 16326656 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78eb000/0x0/0x4ffc00000, data 0x3cb1970/0x3d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6ec00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6ec00 session 0x55cd96be81e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:52.906590+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144908288 unmapped: 16302080 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1684929 data_alloc: 251658240 data_used: 33243136
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6fc00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd96384800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:53.907419+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:54.907980+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:55.908192+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:56.908385+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:57.908695+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:58.909004+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:21:59.909347+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:00.909573+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:01.909746+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:02.909977+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:03.910155+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:04.910446+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:05.911135+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 16277504 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:06.911593+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:07.912042+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:08.912368+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:09.912746+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:10.913091+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:11.913417+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:12.913771+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 16269312 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:13.914093+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:14.914417+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:15.914747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:16.915101+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:17.915648+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 16261120 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:18.915865+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 16252928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:19.916108+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 16252928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:20.916517+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 16252928 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:21.916980+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:22.917287+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:23.917694+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:24.918060+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:25.918393+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:26.918774+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3cb1993/0x3d84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:27.919160+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 144965632 unmapped: 16244736 heap: 161210368 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695113 data_alloc: 251658240 data_used: 34553856
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:28.920636+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.713607788s of 44.925628662s, submitted: 29
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150003712 unmapped: 12328960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:29.921298+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150134784 unmapped: 12197888 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:30.921610+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:31.921935+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:32.922234+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afb000/0x0/0x4ffc00000, data 0x4690993/0x4763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778491 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:33.922484+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afb000/0x0/0x4ffc00000, data 0x4690993/0x4763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:34.922794+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:35.923004+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:36.923297+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:37.923635+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778703 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:38.923956+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:39.924202+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:40.924612+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:41.924984+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:42.925277+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778703 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:43.925596+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:44.925956+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:45.926363+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:46.926714+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:47.927013+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778703 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:48.927347+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.849327087s of 20.192100525s, submitted: 87
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:49.927760+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:50.928047+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:51.928749+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:52.930395+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 13000704 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775711 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:53.930915+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:54.931298+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:55.931873+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:56.932290+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:57.933182+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775711 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:58.933632+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:22:59.933822+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:00.934188+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 12992512 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:01.934726+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.050611496s of 13.057282448s, submitted: 1
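BlueStore's kv sync thread reports it spent almost the entire 13-second window idle, with a single transaction submitted. The arithmetic:

    # Idle fraction of the kv sync thread over this reporting window.
    idle, total, submitted = 13.050611496, 13.057282448, 1
    print(f"{idle / total:.4%} idle, {submitted} txn submitted")  # ~99.95% idle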
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:02.934970+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:03.935349+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:04.935749+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:05.935951+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:06.936179+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:07.936622+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:08.936907+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:09.937251+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 12976128 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:10.937474+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:11.937704+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:12.938065+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:13.938304+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:14.938486+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:15.938648+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:16.938849+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:17.939275+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:18.939692+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:19.940066+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:20.940351+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:21.940747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:22.941121+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:23.941515+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 12967936 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:24.941822+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149372928 unmapped: 12959744 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:25.942104+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:26.942494+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:27.942895+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:28.943229+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:29.943673+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:30.943884+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:31.944216+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:32.944401+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:33.944637+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:34.945078+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:35.945450+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:36.945744+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:37.946064+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:38.946441+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 12951552 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:39.946736+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 12943360 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:40.947090+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 12943360 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:41.947314+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 12943360 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:42.947602+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:43.947998+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:44.948214+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:45.948729+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:46.949074+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:47.949690+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:48.949907+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:49.950215+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:50.950633+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 12935168 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:51.951043+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:52.951433+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:53.951885+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:54.952326+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:55.952718+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:56.952939+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:57.953209+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:58.954730+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:23:59.956616+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:00.958474+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:01.959575+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:02.960283+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:03.960678+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:04.960889+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:05.961594+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:06.962977+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:07.963864+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:08.964276+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:09.964774+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:10.965198+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:11.965658+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:12.966036+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 12926976 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:13.966400+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:14.966629+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:15.966848+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:16.967155+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:17.967426+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:18.967675+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1775887 data_alloc: 251658240 data_used: 35532800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:19.968103+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:20.968585+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:21.968898+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 12918784 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:22.969361+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:23.969630+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779087 data_alloc: 251658240 data_used: 35819520
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6afa000/0x0/0x4ffc00000, data 0x4691993/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:24.969956+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:25.970283+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:26.970745+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 85.276458740s of 85.293502808s, submitted: 2
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:27.971282+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
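This is the first heartbeat whose statfs figures move: available drops by one 8 KiB step while data stored and allocated each grow by the same step, consistent with the small amount of kv work reported just above (submitted: 2). A sketch of the deltas against the earlier heartbeats:

    # Compare this heartbeat's statfs fields with the earlier ones.
    old = dict(avail=0x4f6afa000, stored=0x4691993, alloc=0x4764000)
    new = dict(avail=0x4f6af8000, stored=0x4693993, alloc=0x4766000)
    for k in old:
        print(f"{k:7s} {new[k] - old[k]:+#x} bytes")
    # stored/alloc: +0x2000, avail: -0x2000 -- a single 8 KiB write.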
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:28.971579+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779271 data_alloc: 251658240 data_used: 35819520
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:29.972100+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:30.972660+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:31.972942+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:32.973329+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:33.973587+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779271 data_alloc: 251658240 data_used: 35819520
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:34.973981+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:35.974385+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:36.974772+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:37.978899+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 12910592 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:38.979163+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779751 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:39.979663+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:40.979967+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af8000/0x0/0x4ffc00000, data 0x4693993/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:41.980299+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:42.980670+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.860312462s of 15.871120453s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:43.981063+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:44.981468+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:45.981909+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:46.982298+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:47.982751+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:48.983166+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:49.983609+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:50.984244+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:51.984699+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:52.984954+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 12902400 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:53.985362+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:54.985747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:55.986163+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:56.986616+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149438464 unmapped: 12894208 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.327156067s of 14.337164879s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:57.987116+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:58.987797+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:24:59.988221+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:00.988641+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:01.989034+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:02.989469+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:03.989765+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:04.990158+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:05.990744+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:06.991092+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:07.991621+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:08.991917+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:09.992321+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 11845632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:10.992512+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150495232 unmapped: 11837440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:11.992847+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150495232 unmapped: 11837440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:12.993090+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.353301048s of 15.358656883s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:13.993332+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:14.993807+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:15.994254+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:16.994670+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:17.995107+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:18.995520+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:19.995956+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:20.996287+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:21.996614+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:22.996827+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:23.997087+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:24.997324+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:25.997760+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:26.998156+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:27.998667+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:28.998896+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:29.999159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:30.999455+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:31.999694+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:32.999948+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:34.003155+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:35.003472+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:36.003973+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:37.004812+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:38.005160+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:39.005802+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 12886016 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:40.006220+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:41.006594+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:42.007079+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:43.007513+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:44.007983+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:45.008678+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:46.009105+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:47.009490+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:48.010027+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:49.010440+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:50.010826+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:51.011213+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:52.011720+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:53.012138+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:54.012375+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:55.012755+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:56.206939+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149454848 unmapped: 12877824 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:57.207679+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:58.207966+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:25:59.208433+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:00.208789+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:01.209101+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:02.209603+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:03.209990+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:04.210267+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:05.210688+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 12869632 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:06.214741+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:07.216482+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:08.217823+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:09.218262+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149471232 unmapped: 12861440 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:10.219460+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:11.219960+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:12.220357+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:13.220898+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:14.221353+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:15.221747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:16.222267+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:17.222768+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:18.223711+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:19.224295+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:20.224503+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:21.224803+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:22.225299+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:23.225926+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:24.226629+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:25.226948+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:26.227377+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:27.227874+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:28.228218+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:29.228734+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 12853248 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:30.229181+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:31.229804+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:32.230243+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:33.230661+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:34.231040+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:35.231514+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:36.232020+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:37.232428+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:38.232744+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:39.233170+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:40.233511+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:41.234002+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:42.234400+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:43.234733+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:44.234949+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:45.235322+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 12845056 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:46.235689+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:47.236056+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:48.236388+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:49.236714+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:50.236987+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:51.240392+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:52.240707+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:53.241068+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:54.241466+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 12836864 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:55.241797+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:56.242160+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:57.242370+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:58.242808+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:26:59.243100+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:00.243458+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 12828672 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:01.243853+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:02.244251+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:03.244745+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:04.244978+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:05.245411+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:06.245770+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:07.245991+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:08.246316+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:09.246729+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:10.246992+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:11.247267+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:12.247676+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:13.248060+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:14.248419+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 12820480 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:15.248766+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149520384 unmapped: 12812288 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:16.249221+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149520384 unmapped: 12812288 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:17.249707+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149520384 unmapped: 12812288 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:18.250028+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:19.250429+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:20.250783+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:21.251113+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:22.251465+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:23.251747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:24.252103+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:25.252359+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:26.252721+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:27.253071+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:28.253478+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:29.253958+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:30.254466+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1779643 data_alloc: 251658240 data_used: 35831808
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:31.255046+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 138.569915771s of 138.576492310s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:32.255402+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:33.255786+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:34.256201+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 12804096 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:35.256993+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780251 data_alloc: 251658240 data_used: 35921920
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:36.257328+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:37.257751+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af7000/0x0/0x4ffc00000, data 0x4694993/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:38.258203+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:39.258782+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:40.259212+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780435 data_alloc: 251658240 data_used: 35921920
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:41.259606+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:42.260321+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:43.260833+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:44.261321+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:45.261814+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780435 data_alloc: 251658240 data_used: 35921920
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:46.262223+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:47.262481+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:48.262901+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:49.263209+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 12795904 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:50.263750+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149544960 unmapped: 12787712 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780595 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:51.264165+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:52.264568+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af5000/0x0/0x4ffc00000, data 0x4696993/0x4769000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:53.264926+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.376976013s of 22.411329269s, submitted: 4
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:54.265252+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:55.265552+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:56.265732+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:57.266058+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:58.266768+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:27:59.267121+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:00.267504+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:01.267932+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:02.268375+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:03.268791+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:04.269266+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:05.269789+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 12779520 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:06.270147+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:07.270655+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.303033829s of 13.311895370s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:08.271154+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:09.271805+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:10.272194+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:11.272618+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:12.273025+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:13.273466+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 12771328 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:14.273867+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:15.274389+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:16.274782+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:17.275182+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:18.275712+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:19.276020+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:20.276435+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:21.276919+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 12763136 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:22.277382+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.354217529s of 15.360720634s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:23.277906+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:24.278300+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:25.278732+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:26.279225+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:27.279651+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:28.280259+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:29.280776+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:30.281179+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:31.281751+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:32.282225+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:33.282823+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:34.283179+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:35.283792+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:36.284081+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:37.284698+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:38.285118+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 12746752 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:39.285630+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:40.286024+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:41.286380+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:42.286803+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:43.287179+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:44.287644+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:45.288160+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:46.288653+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:47.289101+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:48.289438+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:49.289800+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:50.290200+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:51.290512+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:52.290833+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:53.291193+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149594112 unmapped: 12738560 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:54.291611+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 12730368 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:55.292021+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:56.292387+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:57.292740+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:58.293237+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:28:59.293664+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:00.294035+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:01.294355+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:02.294782+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 12722176 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:03.295058+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149618688 unmapped: 12713984 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:04.295382+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149618688 unmapped: 12713984 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:05.295609+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:06.295811+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:07.296217+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:08.296768+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:09.297152+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:10.297623+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:11.298004+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 12697600 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2810 syncs, 3.67 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 738 writes, 2541 keys, 738 commit groups, 1.0 writes per commit group, ingest: 3.34 MB, 0.01 MB/s
                                            Interval WAL: 738 writes, 303 syncs, 2.44 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:12.298372+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:13.298723+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:14.299054+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:15.299463+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:16.299834+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:17.300193+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:18.300711+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 12689408 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:19.300933+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:20.301649+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:21.302083+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:22.302832+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:23.303439+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:24.304001+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 12673024 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:25.304315+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:26.305638+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:27.306135+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:28.306697+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:29.307036+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:30.307763+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:31.308177+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:32.308379+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 12656640 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:33.308856+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 12648448 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:34.309257+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:35.309781+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:36.310061+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:37.310432+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:38.310897+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:39.311264+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:40.311752+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:41.312142+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:42.312674+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 12640256 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:43.313132+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 12632064 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:44.313839+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 12632064 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:45.314153+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:46.314473+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:47.314823+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:48.315163+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:49.315503+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 12615680 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:50.315993+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:51.316346+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:52.316613+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:53.316902+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:54.317321+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:55.317750+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:56.318100+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:57.318425+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:58.318858+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:29:59.319144+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:00.319451+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:01.319850+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149725184 unmapped: 12607488 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:02.320230+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 12599296 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:03.320679+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 12599296 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:04.321024+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:05.321422+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:06.321817+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:07.322186+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:08.322674+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:09.323063+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:10.323435+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:11.324021+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:12.324444+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:13.324997+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 12582912 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:14.325414+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:15.325621+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:16.325973+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:17.326351+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:18.326864+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:19.327173+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:20.327626+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:21.327854+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:22.328255+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:23.328646+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 12566528 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:24.328979+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:25.329237+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:26.329501+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:27.329927+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:28.330291+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:29.330657+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 12550144 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:30.331077+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:31.331479+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:32.332075+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:33.332508+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:34.332951+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:35.333363+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:36.333797+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 251658240 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:37.334195+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:38.334760+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:39.335209+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:40.335436+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:41.335781+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 234881024 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 12541952 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:42.336145+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 12533760 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:43.336523+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 12533760 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:44.337025+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 12517376 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:45.337333+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 12509184 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:46.337591+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780839 data_alloc: 234881024 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:47.337940+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:48.338369+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:49.338727+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:50.339040+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 147.989303589s of 147.998840332s, submitted: 1
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:51.339364+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 12500992 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:52.339738+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 12476416 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:53.340128+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149913600 unmapped: 12419072 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:54.340664+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:55.340990+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:56.341442+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:57.341743+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:58.342174+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:30:59.342512+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:00.342996+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:01.343325+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:02.343738+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:03.344169+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:04.344490+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:05.344903+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:06.345283+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:07.345708+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:08.346080+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:09.346480+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:10.346787+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:11.347132+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:12.347463+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:13.347769+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:14.348126+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:15.348331+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:16.348751+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:17.349087+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:18.349451+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:19.349837+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:20.350199+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:21.350760+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:22.351090+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:23.351439+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:24.351903+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:25.352226+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:26.352644+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:27.352980+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:28.353332+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:29.353732+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:30.354107+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:31.354352+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:32.354784+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:33.355144+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:34.355455+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 12345344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:35.355767+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:36.356065+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:37.356522+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1780487 data_alloc: 218103808 data_used: 35926016
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:38.357142+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 12337152 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f6af4000/0x0/0x4ffc00000, data 0x4697993/0x476a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:39.357669+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150003712 unmapped: 12328960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:40.357952+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 150003712 unmapped: 12328960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6f400 session 0x55cd97f67680
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6f800 session 0x55cd9668ef00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:41.358325+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.075469971s of 50.840911865s, submitted: 106
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd98165400 session 0x55cd98b1f0e0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:42.358738+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447482 data_alloc: 218103808 data_used: 20987904
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:43.359126+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8919000/0x0/0x4ffc00000, data 0x2874973/0x2945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:44.359709+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:45.360134+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:46.360620+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:47.360945+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447482 data_alloc: 218103808 data_used: 20987904
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8919000/0x0/0x4ffc00000, data 0x2874973/0x2945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:48.361329+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:49.361721+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8919000/0x0/0x4ffc00000, data 0x2874973/0x2945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:50.362126+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:51.362491+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd99d6fc00 session 0x55cd97203e00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd96384800 session 0x55cd964d92c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 24813568 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd991de800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.292172432s of 10.395132065s, submitted: 21
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:52.362921+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346382 data_alloc: 218103808 data_used: 18604032
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 ms_handle_reset con 0x55cd991de800 session 0x55cd96e57c20
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:53.363390+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:54.363726+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9451000/0x0/0x4ffc00000, data 0x1d3c950/0x1e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:55.364184+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:56.364688+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:57.365135+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346382 data_alloc: 218103808 data_used: 18604032
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:58.365751+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9451000/0x0/0x4ffc00000, data 0x1d3c950/0x1e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:31:59.366053+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:00.366484+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:01.366903+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9451000/0x0/0x4ffc00000, data 0x1d3c950/0x1e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:02.367211+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346382 data_alloc: 218103808 data_used: 18604032
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 26681344 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:03.367660+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd96384800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.247656822s of 11.555605888s, submitted: 47
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
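[editor's note: handle_osd_map lines record OSDMap catch-up: here the OSD holds epoch 138, the sender has maps 1..139 and delivered [138,139], so the OSD applies 139 and the following lines log as osd.0 139. A generic illustration of that bookkeeping (not ceph's actual implementation):

    def epochs_to_apply(have: int, first: int, last: int) -> range:
        """Epochs still needed from a message carrying maps [first, last]."""
        return range(max(have + 1, first), last + 1)

    print(list(epochs_to_apply(have=138, first=138, last=139)))  # [139]
]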
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 135667712 unmapped: 26664960 heap: 162332672 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 139 ms_handle_reset con 0x55cd96384800 session 0x55cd96db4b40
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:04.368037+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd98165400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 141271040 unmapped: 37847040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:05.368451+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 139 ms_handle_reset con 0x55cd98165400 session 0x55cd971a03c0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 54657024 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd99d6f800
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _renew_subs
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:06.368758+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f90bc000/0x0/0x4ffc00000, data 0x20d0097/0x21a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 54657024 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 141 ms_handle_reset con 0x55cd99d6f800 session 0x55cd97f51680
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:07.369180+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243710 data_alloc: 218103808 data_used: 7081984
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:08.369752+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:09.370100+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x10d1c78/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:10.370494+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x10d1c78/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:11.370942+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:12.371405+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243710 data_alloc: 218103808 data_used: 7081984
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:13.371828+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.877271652s of 10.184530258s, submitted: 38
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:14.372270+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:15.372773+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:16.373192+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:17.373464+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246012 data_alloc: 218103808 data_used: 7081984
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:18.373897+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:19.374159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:20.374691+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:21.375066+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:22.375634+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246012 data_alloc: 218103808 data_used: 7081984
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:23.376050+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:24.376446+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:25.376804+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:26.377198+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:27.377710+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246012 data_alloc: 218103808 data_used: 7081984
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:28.378289+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:29.378734+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:30.379648+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:31.380266+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:32.380832+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:33.381295+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:34.381776+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:35.382239+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:36.382475+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:37.382860+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:38.383310+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:39.383735+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:40.383943+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:41.384235+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:42.384857+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:43.385149+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 54640640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:44.385679+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:45.386061+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:46.386515+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:47.387024+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:48.387461+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:49.387877+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:50.388322+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:51.388751+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:52.389140+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:53.389881+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:54.390286+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:55.390758+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:56.391139+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:57.391891+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:58.392375+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:32:59.392971+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:00.393381+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:01.393817+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:02.394203+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:03.394743+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:04.395009+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:05.395438+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:06.395842+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:07.396364+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:08.396760+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:09.397274+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:10.397767+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:11.398203+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:12.398977+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:13.399335+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:14.399641+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:15.399826+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:16.400121+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:17.400445+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:18.400871+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 54632448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:19.401238+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:20.401672+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:21.402033+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:22.402435+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:23.402774+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:24.402966+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:25.403233+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:26.403446+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:27.403671+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:28.403885+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:29.404243+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:30.404463+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:31.405469+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:32.405671+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:33.405838+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:34.406001+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:35.406212+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:36.406634+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:37.406934+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 54624256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:38.407260+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 54607872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config show' '{prefix=config show}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:39.407452+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 54992896 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:40.407678+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 54984704 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:41.407928+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 54984704 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'perf dump' '{prefix=perf dump}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:42.408260+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'perf schema' '{prefix=perf schema}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:43.408454+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:44.408747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:45.408932+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:46.409229+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:47.409440+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:48.409632+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:49.409871+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:50.410071+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:51.410266+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:52.410829+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:53.411249+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:54.412210+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:55.412742+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:56.413202+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:57.413378+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:58.413757+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:33:59.414078+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:00.414427+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:01.414777+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:02.415777+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:03.417072+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:04.418650+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:05.420460+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:06.421805+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:07.422250+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:08.422489+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:09.422676+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:10.422857+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:11.424147+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:12.425793+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:13.427621+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:14.428916+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:15.429428+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:16.429900+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:17.430322+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:18.430767+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:19.431205+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:20.431660+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:21.432080+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:22.432295+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:23.432806+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:24.433220+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:25.433702+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:26.433995+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 ms_handle_reset con 0x55cd99ed0000 session 0x55cd9668e780
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd9732a400
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:27.434489+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:28.435032+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:29.435477+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:30.435911+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:31.436414+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:32.436849+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:33.437260+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:34.437699+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:35.438059+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:36.438321+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:37.438767+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:38.439249+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:39.439454+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:40.439913+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:41.440333+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:42.440710+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:43.440902+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:44.441338+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:45.441754+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:46.442086+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:47.442458+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:48.443023+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:49.443348+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:50.443717+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:51.444133+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:52.444511+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:53.444993+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:54.445389+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:55.445764+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:56.446162+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:57.446825+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:58.447253+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:34:59.447506+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:00.447913+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:01.448225+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:02.448795+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:03.449184+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:04.449494+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:05.449750+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:06.449932+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:07.451110+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:08.451627+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:09.452409+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:10.452826+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:11.453162+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:12.453501+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:13.453920+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:14.454741+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:15.455105+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:16.455661+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:17.456023+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:18.456454+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 55255040 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:19.456965+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:20.457434+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:21.457951+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:22.458299+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:23.458796+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:24.459169+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:25.459807+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:26.460298+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:27.461070+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:28.461714+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:29.462048+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:30.462494+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:31.462893+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:32.463357+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:33.463783+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:34.464265+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:35.464799+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:36.465250+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:37.465946+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:38.466391+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:39.466767+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:40.467190+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:41.467733+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:42.468086+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 ms_handle_reset con 0x55cd984f5800 session 0x55cd981ba000
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: handle_auth_request added challenge on 0x55cd992bdc00
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:43.468815+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:44.469184+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:45.469711+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:46.470096+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:47.470464+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:48.470867+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:49.471311+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:50.471784+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:51.472161+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:52.472627+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:53.473039+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:54.473432+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:55.473880+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:56.474386+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:57.474679+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:58.475317+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:35:59.475832+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:00.476211+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:01.476694+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:02.477132+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:03.477688+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:04.478055+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:05.478426+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:06.478798+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:07.479175+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:08.479571+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:09.479857+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:10.480263+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:11.480699+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:12.481081+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:13.481486+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:14.481881+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:15.482273+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:16.482684+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:17.483146+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:18.483727+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:19.484038+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:20.484447+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:21.484835+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:22.485236+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:23.485683+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:24.486071+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:25.486430+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:26.486819+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:27.487622+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:28.488131+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:29.488702+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:30.489125+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:31.489745+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:32.490183+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:33.490670+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:34.491066+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:35.491475+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:36.491868+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:37.492284+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:38.492685+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:39.493050+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:40.493298+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:41.493786+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:42.494201+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:43.494731+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:44.495122+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:45.495633+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:46.495953+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:47.496328+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:48.496773+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:49.497171+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:50.497657+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:51.498092+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:52.498633+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:53.499006+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:54.499363+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:55.499797+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:56.500174+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:57.500686+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:58.501032+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:36:59.502139+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:00.502638+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:01.503018+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:02.503408+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:03.503835+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:04.504243+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:05.504699+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:06.505144+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:07.505523+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:08.505975+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:09.506346+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:10.506977+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:11.507227+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:12.507677+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:13.508069+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:14.508472+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:15.508854+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:16.509243+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:17.509715+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:18.510046+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:19.510488+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:20.510811+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:21.511189+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:22.511599+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:23.511865+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:24.512300+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:25.512755+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:26.513089+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:27.513278+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:28.513488+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:29.513865+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:30.514240+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:31.514849+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:32.515301+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:33.515710+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:34.516055+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:35.516417+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:36.516813+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:37.517171+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:38.517438+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:39.517825+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:40.518085+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:41.518447+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:42.518793+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:43.519149+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:44.519502+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:45.519900+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:46.520295+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:47.520802+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:48.521238+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:49.521807+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:50.522182+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:51.522652+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:52.523036+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:53.523395+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:54.523753+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:55.524153+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:56.524701+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:57.525087+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:58.525520+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:37:59.525918+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:00.526211+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:01.526699+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:02.527331+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:03.527799+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:04.528182+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:05.528512+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:06.529073+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:07.529396+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:08.529698+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:09.530067+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:10.530401+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:11.530707+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:12.531082+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:13.531501+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:14.532697+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:15.533039+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:16.533472+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:17.533857+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:18.534272+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:19.534779+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:20.535214+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:21.535694+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:22.536045+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:23.536370+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:24.536785+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:25.537171+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:26.537521+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:27.538013+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:28.538390+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:29.538776+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:30.539173+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:31.539864+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:32.540226+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:33.540757+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:34.541070+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:35.541462+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:36.541715+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:37.542017+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:38.542262+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:39.542593+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:40.542939+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:41.543159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:42.543614+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:43.544107+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:44.544404+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:45.544790+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:46.545264+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:47.545710+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:48.546033+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:49.546364+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:50.546712+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:51.547006+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:52.547348+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:53.547702+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:54.548018+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:55.548370+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:56.548753+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:57.549109+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:58.549476+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:38:59.549805+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:00.550178+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:01.550693+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 55246848 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:02.550900+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:03.551313+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:04.551777+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:05.552096+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:06.552447+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:07.552823+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:08.553283+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:09.553805+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:10.554140+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:11.554359+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 3012 syncs, 3.57 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 423 writes, 1287 keys, 423 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
                                            Interval WAL: 423 writes, 202 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:12.554694+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:13.555147+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:14.555504+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:15.555936+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:16.556359+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:17.556846+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:18.557188+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:19.557692+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:20.558143+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:21.558714+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:22.558954+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:23.559387+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:24.559782+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:25.560213+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:26.560999+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:27.561423+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:28.561928+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:29.562342+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:30.562773+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:31.563364+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:32.563717+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:33.564105+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 55238656 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:34.564451+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 55230464 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:35.564871+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 55230464 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:36.565291+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 55230464 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:37.565790+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 55230464 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:38.566108+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 55230464 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:39.566493+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 55230464 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:40.566804+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 55230464 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:41.567187+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:42.567788+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
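[annotation] The lone _send_mon_message line identifies which monitor this OSD is talking to: mon.compute-0 over messenger v2 on 192.168.122.100:3300, the default msgr2 port. If the usual entity_addr_t print format applies (an assumption here), the trailing /0 is the address nonce. A small parser sketch under that assumption:

    def parse_entity_addr(addr: str):
        # "v2:192.168.122.100:3300/0" -> (proto, ip, port, nonce),
        # assuming Ceph's entity_addr_t text form proto:ip:port/nonce.
        proto, rest = addr.split(":", 1)
        hostport, nonce = rest.rsplit("/", 1)
        ip, port = hostport.rsplit(":", 1)
        return proto, ip, int(port), int(nonce)

    print(parse_entity_addr("v2:192.168.122.100:3300/0"))
    # ('v2', '192.168.122.100', 3300, 0) -- msgr2 on its default port 3300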
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:43.568221+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:44.568737+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:45.569029+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:46.569436+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:47.569834+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:48.570104+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:49.570471+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:50.571153+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
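[annotation] Each _resize_shards line accounts for how the 2845415832-byte budget is split across the bluestore cache shards, and the paired rocksdb lines just before it restate the high-priority pool ratios (0.285714 and 0.0555556, which look like 2/7 and 1/18) every time the shards are resized. Actual usage is tiny compared with the allocations; quick arithmetic on the figures copied from the line above:

    alloc = {
        "kv": 1207959552,        # bytes, from the _resize_shards line
        "kv_onode": 234881024,
        "meta": 1140850688,
        "data": 218103808,
    }
    used = {"kv": 2144, "kv_onode": 464, "meta": 1246332, "data": 7090176}

    cache_size = 2845415832
    for name, a in alloc.items():
        print(f"{name:9s} {a / 2**20:7.1f} MiB alloc, {used[name] / a:8.4%} used")
    print(f"allocated {sum(alloc.values()) / cache_size:.1%} of the "
          f"{cache_size / 2**30:.2f} GiB cache budget")
    # kv/meta get ~1 GiB each, data only ~3.3% used, everything else
    # near-empty; ~98.5% of the 2.65 GiB budget is handed out but idle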
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 55222272 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:51.571646+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 55214080 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:52.572063+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 55214080 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:53.572469+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 55214080 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:54.572712+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 55214080 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:55.573117+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 55214080 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:56.573498+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 55214080 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:57.573890+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 55214080 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:58.574244+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:39:59.574674+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:00.575698+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:01.576141+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:02.576727+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:03.576976+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:04.577368+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:05.577724+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:06.578073+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 55205888 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:07.578423+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 55197696 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:08.578859+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 55197696 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:09.579234+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 55197696 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:10.580409+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 55197696 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:11.580802+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 55197696 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:12.581212+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 55197696 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:13.581587+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 55197696 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:14.582002+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 55762944 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:15.582314+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 55762944 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:16.582761+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 55762944 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:17.583133+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:18.583487+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:19.583793+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:20.584168+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:21.584654+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:22.585064+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:23.585486+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:24.585919+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:25.586188+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:26.586725+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:27.587352+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:28.587849+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:29.588232+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:30.588730+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:31.588966+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:32.589312+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:33.589818+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:34.590142+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:35.590634+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:36.591022+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:37.591378+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 55746560 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:38.591853+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:39.592182+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:40.592639+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:41.593004+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:42.593340+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:43.593856+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:44.594228+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:45.594853+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 55730176 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:46.595240+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246332 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 55730176 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:47.595672+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 55730176 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:48.596117+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 55730176 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:49.596609+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 55730176 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:50.597037+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 55730176 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 517.360229492s of 517.381652832s, submitted: 9
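The _kv_sync_thread utilization line quantifies how quiet this OSD is: over a roughly 517-second window the kv sync thread was busy for only about 21 ms, committing 9 transactions in total:

    idle, total, submitted = 517.360229492, 517.381652832, 9
    print(f"idle {100 * idle / total:.4f}%, busy {(total - idle) * 1000:.0f} ms "
          f"for {submitted} submits")
    # -> idle 99.9959%, busy 21 ms for 9 submits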
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:51.597505+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 55705600 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:52.597989+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 55705600 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:53.598351+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 55689216 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:54.598811+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:55.599353+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:56.599846+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:57.602822+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:58.603111+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:40:59.603622+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:00.603966+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:01.604298+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:02.604676+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:03.605117+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:04.605303+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:05.605699+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:06.606006+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:07.606417+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:08.606960+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:09.607330+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:10.607863+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:11.608270+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:12.608747+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:13.609175+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:14.609474+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:15.609845+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:16.610286+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:17.610838+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:18.611328+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:19.611758+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:20.612187+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:21.612711+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:22.613150+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:23.613741+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:24.614004+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:25.614479+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:26.614935+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:27.615410+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:28.615909+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 55672832 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:29.616345+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 55664640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:30.616795+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 55664640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:31.617365+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 55664640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:32.617777+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 55664640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:33.618182+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 55664640 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:34.618599+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 55656448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:35.619029+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 55656448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:36.619442+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 55656448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:37.619858+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 55656448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:38.620341+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 55656448 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:39.620809+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:40.621187+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:41.621422+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:42.621707+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:43.621925+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:44.622310+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:45.622789+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:46.623367+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:47.623799+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:48.624396+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:49.624894+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:50.625282+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 55648256 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:51.625689+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:52.626103+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:53.626523+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:54.626986+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:55.627474+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:56.627831+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:57.628361+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:58.628911+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:41:59.629375+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:00.629811+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:01.630232+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:02.630754+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:03.631137+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:04.631661+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 55640064 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:05.632074+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:06.632428+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:07.632833+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:08.633319+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:09.633701+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:10.634008+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:11.634310+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:12.634652+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 55631872 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:13.635085+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:14.635515+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:15.636034+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:16.636436+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:17.636636+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:18.637065+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:19.637682+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:20.638099+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:21.638467+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:22.638841+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:23.639434+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:24.639827+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:25.640267+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:26.640755+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:27.641092+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:28.641435+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:29.641792+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 55623680 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:30.642160+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:31.642723+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:32.643148+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:33.643673+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:34.644072+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:35.644424+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:36.644876+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:37.645207+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:38.645512+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 55615488 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:39.645913+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 55607296 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:40.646345+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 55607296 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:41.646744+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 55607296 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:42.647367+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 55607296 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:43.647793+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 55607296 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:44.648110+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 55607296 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:45.648416+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 55607296 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:46.648758+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:47.649165+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:48.649789+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:49.650241+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:50.651096+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:51.651610+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:52.651994+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:53.652452+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:54.652847+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:55.653435+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:56.653918+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:57.654226+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:58.654793+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:42:59.655199+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:00.655716+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:01.656195+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 55599104 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:02.656629+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 55590912 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:03.657170+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 55590912 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:04.657865+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 55590912 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:05.658201+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 55590912 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:06.658895+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 55590912 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:07.659295+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 55590912 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:08.659758+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123527168 unmapped: 55590912 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:09.660163+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123535360 unmapped: 55582720 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:10.660517+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 55574528 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:11.661118+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 55574528 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:12.661639+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 55574528 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:13.661969+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 55574528 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:14.663028+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 55574528 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:15.663413+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 55574528 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:16.663879+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 55574528 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:17.664272+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:18.664769+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:19.665184+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:20.665598+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:21.665899+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:22.666158+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:23.666366+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:24.666745+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:25.667042+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:26.667384+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:27.667622+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:28.668032+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:29.668345+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:30.668803+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123551744 unmapped: 55566336 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:31.669238+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123559936 unmapped: 55558144 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:32.669715+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123559936 unmapped: 55558144 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:33.670160+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:34.670651+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:35.671099+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:36.671403+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:37.671834+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:38.672309+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:39.672801+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:40.673159+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:41.673655+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:42.674028+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 55549952 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:43.674832+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:44.675178+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:45.675708+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:46.676095+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:47.676605+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:48.677096+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:49.677391+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:50.677768+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:51.678112+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:52.678451+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:53.678764+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [4])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:54.679116+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:55.679696+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:56.680089+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:57.680375+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 55541760 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:58.680644+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:43:59.681036+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:00.681321+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:01.681522+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:02.681855+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:03.682231+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:04.682714+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:05.683136+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:06.683464+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:07.683976+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:08.684440+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:09.684850+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:10.685320+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:11.685812+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:12.686185+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:13.686503+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 55525376 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:14.686953+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:15.687383+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:16.687825+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:17.688246+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:18.688679+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:19.689170+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:20.689745+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:21.690129+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 55517184 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:22.690504+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 55508992 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:23.690750+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 55508992 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:24.691193+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 55508992 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:25.691757+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 55508992 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:26.692117+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 55508992 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:27.692366+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 55508992 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:28.692756+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 55508992 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:29.693208+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:30.693692+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:31.693948+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:32.694249+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:33.694513+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:34.694790+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:35.695001+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:36.695257+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:37.695508+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:38.695768+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:39.695944+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:40.696119+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123617280 unmapped: 55500800 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:41.696293+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 55492608 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:42.696507+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:43.696817+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 55492608 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:44.697072+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 55492608 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:45.697265+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 55492608 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:46.697456+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 55492608 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:47.697687+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 55492608 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:48.698145+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 55492608 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:49.698351+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123633664 unmapped: 55484416 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:50.698575+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123633664 unmapped: 55484416 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config diff' '{prefix=config diff}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config show' '{prefix=config show}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter dump' '{prefix=counter dump}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter schema' '{prefix=counter schema}'
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:51.698795+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 55754752 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:52.699151+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 55738368 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa0b7000/0x0/0x4ffc00000, data 0x10d36fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 03 02:45:24 compute-0 ceph-osd[206633]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 03 02:45:24 compute-0 ceph-osd[206633]: bluestore.MempoolThread(0x55cd94bbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245452 data_alloc: 218103808 data_used: 7090176
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: tick
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_tickets
Dec 03 02:45:24 compute-0 ceph-osd[206633]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-03T02:44:53.699348+0000)
Dec 03 02:45:24 compute-0 ceph-osd[206633]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 56180736 heap: 179118080 old mem: 2845415832 new mem: 2845415832
Dec 03 02:45:24 compute-0 ceph-osd[206633]: do_command 'log dump' '{prefix=log dump}'
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='client.16015 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='client.16019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3668477928' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/60768788' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 03 02:45:24 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 03 02:45:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 03 02:45:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402537234' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 03 02:45:24 compute-0 nova_compute[351485]: 2025-12-03 02:45:24.762 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:25 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16037 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:25 compute-0 ceph-mon[192821]: from='client.16023 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:25 compute-0 ceph-mon[192821]: pgmap v2739: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:25 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/402537234' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 03 02:45:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 03 02:45:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3766014711' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 03 02:45:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 03 02:45:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226525927' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 03 02:45:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:26 compute-0 ceph-mon[192821]: from='client.16037 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3766014711' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 03 02:45:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4226525927' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 03 02:45:26 compute-0 ceph-mon[192821]: pgmap v2740: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 03 02:45:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1532213692' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 03 02:45:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 03 02:45:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192562059' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 03 02:45:27 compute-0 systemd[1]: Starting Hostname Service...
Dec 03 02:45:27 compute-0 nova_compute[351485]: 2025-12-03 02:45:27.387 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1532213692' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 03 02:45:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1192562059' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 03 02:45:27 compute-0 systemd[1]: Started Hostname Service.
Dec 03 02:45:27 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16047 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 03 02:45:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710861224' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec 03 02:45:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 03 02:45:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735130737' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 03 02:45:28 compute-0 ceph-mon[192821]: from='client.16047 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3710861224' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 03 02:45:28 compute-0 ceph-mon[192821]: pgmap v2741: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:45:28
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec 03 02:45:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:45:28 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16053 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 03 02:45:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120692283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 03 02:45:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2735130737' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 03 02:45:29 compute-0 ceph-mon[192821]: from='client.16053 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1120692283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 03 02:45:29 compute-0 podman[158098]: time="2025-12-03T02:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 03 02:45:29 compute-0 nova_compute[351485]: 2025-12-03 02:45:29.764 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec 03 02:45:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8209 "" "Go-http-client/1.1"
Dec 03 02:45:29 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16057 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:30 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16059 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:30 compute-0 ceph-mon[192821]: from='client.16057 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:30 compute-0 ceph-mon[192821]: pgmap v2742: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:30 compute-0 ceph-mon[192821]: from='client.16059 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Dec 03 02:45:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735704383' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 03 02:45:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Dec 03 02:45:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481762584' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 03 02:45:31 compute-0 openstack_network_exporter[368278]: ERROR   02:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:45:31 compute-0 openstack_network_exporter[368278]: ERROR   02:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 03 02:45:31 compute-0 openstack_network_exporter[368278]: ERROR   02:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 03 02:45:31 compute-0 openstack_network_exporter[368278]: ERROR   02:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 03 02:45:31 compute-0 openstack_network_exporter[368278]: ERROR   02:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16065 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:31 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3735704383' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 03 02:45:31 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/481762584' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16067 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 03 02:45:31 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 03 02:45:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:32 compute-0 nova_compute[351485]: 2025-12-03 02:45:32.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Dec 03 02:45:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2630461486' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 03 02:45:32 compute-0 ceph-mon[192821]: from='client.16065 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:32 compute-0 ceph-mon[192821]: from='client.16067 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:32 compute-0 ceph-mon[192821]: pgmap v2743: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2630461486' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 03 02:45:32 compute-0 podman[497791]: 2025-12-03 02:45:32.884910829 +0000 UTC m=+0.135675460 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec 03 02:45:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Dec 03 02:45:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/283373315' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 03 02:45:33 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16073 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:33 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/283373315' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 03 02:45:33 compute-0 ceph-mon[192821]: from='client.16073 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:33 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16075 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 03 02:45:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 03 02:45:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4284614636' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:45:34 compute-0 ceph-mon[192821]: from='client.16075 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 03 02:45:34 compute-0 ceph-mon[192821]: pgmap v2744: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:34 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4284614636' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 03 02:45:34 compute-0 nova_compute[351485]: 2025-12-03 02:45:34.766 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 03 02:45:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Dec 03 02:45:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235908720' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 03 02:45:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Dec 03 02:45:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535567855' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 03 02:45:35 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.16083 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:35 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1235908720' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 03 02:45:35 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3535567855' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 03 02:45:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 03 02:45:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832394691' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:45:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Dec 03 02:45:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235292509' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 03 02:45:36 compute-0 ceph-mon[192821]: from='client.16083 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 03 02:45:36 compute-0 ceph-mon[192821]: pgmap v2745: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 03 02:45:36 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2832394691' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 03 02:45:36 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/235292509' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 03 02:45:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Dec 03 02:45:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883128021' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 03 02:45:37 compute-0 nova_compute[351485]: 2025-12-03 02:45:37.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
